id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---|
2305.15757 | Healing Unsafe Dialogue Responses with Weak Supervision Signals | Recent years have seen increasing concerns about the unsafe response
generation of large-scale dialogue systems, where agents will learn offensive
or biased behaviors from the real-world corpus. Some methods are proposed to
address the above issue by detecting and replacing unsafe training examples in
a pipeline style. Though effective, they suffer from a high annotation cost and
adapt poorly to unseen scenarios as well as adversarial attacks. Besides, the
neglect of providing safe responses (e.g. simply replacing with templates) will
cause the information-missing problem of dialogues. To address these issues, we
propose an unsupervised pseudo-label sampling method, TEMP, that can
automatically assign potential safe responses. Specifically, our TEMP method
groups responses into several clusters and samples multiple labels with an
adaptively sharpened sampling strategy, inspired by the observation that unsafe
samples in the clusters are usually few and distribute in the tail. Extensive
experiments in chitchat and task-oriented dialogues show that our TEMP
outperforms state-of-the-art models with weak supervision signals and obtains
comparable results under unsupervised learning settings. | Zi Liang, Pinghui Wang, Ruofei Zhang, Shuo Zhang, Xiaofan Ye Yi Huang, Junlan Feng | 2023-05-25T06:15:53Z | http://arxiv.org/abs/2305.15757v1 | # Healing Unsafe Dialogue Responses with Weak Supervision Signals
###### Abstract
Recent years have seen increasing concerns about the unsafe response generation of large-scale dialogue systems, where agents will learn offensive or biased behaviors from the real-world corpus. Some methods are proposed to address the above issue by detecting and replacing unsafe training examples in a pipeline style. Though effective, they suffer from a high annotation cost and adapt poorly to unseen scenarios as well as adversarial attacks. Besides, the neglect of providing safe responses (e.g. simply replacing with templates) will cause the information-missing problem of dialogues. To address these issues, we propose an unsupervised pseudo-label sampling method, TEMP, that can automatically assign potential safe responses. Specifically, our TEMP method groups responses into several clusters and samples multiple labels with an adaptively sharpened sampling strategy, inspired by the observation that unsafe samples in the clusters are usually few and distribute in the tail. Extensive experiments in chitchat and task-oriented dialogues show that our TEMP outperforms state-of-the-art models with weak supervision signals and obtains comparable results under unsupervised learning settings.
## 1 Introduction
Recently, generative dialogue systems based on pre-trained language models (e.g. GPT-2 Radford et al. (2019)) have attracted significant attention due to the wide real-world applications Ni et al. (2021); Zhang et al. (2020) in chit-chat Zhang et al. (2020); Roller et al. (2020), information-seeking Glaese et al. (2022), and task-oriented business Peng et al. (2020); Hosseini-Asl et al. (2020); Yang et al. (2021). However, the industrial applications of these models are limited by the problem of **unsafe response generation**, i.e., conversational models will generate offensive, politically sensitive, unprofessional, or biased sentences, especially when prompted with hostile user utterances. For example, chatbots such as Weibo XiaoIce1, the Twitter bot Tay Wolf et al. (2017), and Blenderbot 3.02 were found to produce offensive and racist responses after release, and for task-oriented dialogues (TOD) some works have begun to focus on politeness transfer for the real-world corpus Silva et al. (2022).
Footnote 1: [http://news.sohu.com/20140625/n401381647.shtml](http://news.sohu.com/20140625/n401381647.shtml)
Footnote 2: [https://www.spiceworks.com/tech/artificial-intelligence/news/meta-blender-bot-3-controversy/](https://www.spiceworks.com/tech/artificial-intelligence/news/meta-blender-bot-3-controversy/)
As illustrated in Figure 1, some recent work detoxifies dialogue models in a supervised pipeline with three steps: 1) _training safety classifiers_ based on annotated dialogue safety corpora Sun et al. (2021); Dinan et al. (2019); Roller et al. (2020); Baheti et al. (2021); van Aken et al. (2018); 2) _detecting unsafe dialogues and replacing them_ with human rewriting Ung et al. (2022) or universal templates; 3) _safety baked-in_ Roller et al. (2020), i.e. fine-tuning dialogue models on the detoxified dataset with conditional generation or controlled text generation (CTG).
Figure 1: **Comparison between existing dialogue detoxifying methods (gray) and our methods (red).** While the supervised pipeline (left) and the reinforcement learning (right) rely on human annotation (dashed line with gray background) for training classifiers or detoxifying responses, TEMP (middle) aims to heal unsafe examples by remapping potentially unsafe responses (red) to their majority neighbors under a similar topic.
Besides, some works (Glaese et al., 2022) build dialogue models by reinforcement learning from human feedback (RLHF), which lets annotators check the safety of generated responses and return a safety-related reward.
However, the applications of these methods are limited by the heavy requirements of human annotation. For RLHF, collecting online hand-crafted rewards is too time-consuming and inefficient, while RL models usually have a longer training period. For a supervised pipeline, we must recollect new subsets for new safety topics or scenarios due to the adversarial evolution phenomenon (Shachaf and Hara, 2010; Dinan et al., 2019). To alleviate the annotation cost, some work (Roller et al., 2020) uses overriding templates to replace human rewriting. However, overriding with universal templates will lead to the _trivial response_(Bao et al., 2020) problem that will hurt user experience. Hence, a label-few or label-free algorithm that can generate context-aware safe responses is vital for the problem of unsafe response generation.
To address the above issues, we propose _TEMP_, a simple yet effective solution **without** (or with little) human annotation requirements (see Table 1). By analyzing real-world corpora, we find that unsafe responses are few in number and quite different from the majority of the corpus, which inspires us to replace unsafe labels with the majority of examples. Hence, our TEMP first groups examples with similar context information (e.g. topics), and then samples responses from those examples as replacements. In detail, we assume there exists a long-tail distribution and design a multi-target adaptive distribution sharpening method to select potentially safe responses from the head clusters.
We have evaluated TEMP on several benchmarks in both chitchat and task-oriented scenarios. In chitchat, we compare TEMP with several state-of-the-art safety solutions, and the experimental results demonstrate that our method obtains more diverse (0.03 in DIST-2 and 0.62 in Entropy) and context-aware (0.02 in perplexity) responses with comparable safety scores. Also, based on a polluted version of MultiWOZ 2.1, we compared the performance of current TOD models before and after applying TEMP. The experimental results show that TEMP reduces offensive responses by 85% for SOLOIST, 84% for SimpleTOD, and 89% for the independent SCGPT model, at a cost of only about 1% in success rate.
## 2 Related Works
**Dialogue System** A dialogue system aims to simulate the chatting and communication abilities of humans, and the field comprises chit-chat, task-oriented, and hybrid dialogue systems. For chitchat, the mainstream solution is to construct end-to-end models that take in the dialogue history and generate a response, with special designs for topic awareness (Xing et al., 2017; Liu et al., 2020), knowledge grounding (Li et al., 2020; Jung et al., 2020), and empathy (Ma et al., 2020; Song et al., 2019). Different from chitchat, task-oriented dialogues usually build a standard pipeline with three components: 1) dialogue understanding and state tracking (Lin et al., 2020; Heck et al., 2020; Yu et al., 2021; Budzianowski and Vulic, 2019; Kale and Rastogi, 2020) for obtaining task states with intents, domains, and entity attributes (i.e. slots); 2) dialogue policy for deciding how to respond at the next step; and 3) dialogue response generation for transferring the decision results (i.e. actions) into natural language. Recently, Su et al. (2021); Yang
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Models}} & \multicolumn{4}{c}{Functionality} & \multicolumn{2}{c}{No Annotation Requirements of} & \multicolumn{2}{c}{Suitable Aspects} \\ \cline{2-9} & Detec. & Gen. & (context-aware) & Safety (Online) & Safety (Offline) & Response & Chitchat & Task-oriented \\ \hline Detcify (Haun, 2020) & β & β & β & β & β & NA & β & β \\ PerspectiveAPI (Lees et al., 2022) & β & β & β & β & β & NA & β & β \\ BBF (Dinan et al., 2019) & β & β & β & β & β & NA & β & β \\ BAD (Roller et al., 2020) & β & β & β & β & NA & β & β & β \\ SafePhiloague (Ung et al., 2022) & NA & β & β & β & NA & β & β \\ Sparrow (Glaese et al., 2022) & β & β & β & NA & NA & β & β \\ \hline TEMP-Chitchat (ours) & NA & β & β & β & β & β & β \\ TEMP-Variant-Chitchat (ours) & NA & β & β & β & β & β & β & β \\ TEMP-TOD (ours) & NA & β & β & β & β & β & β & β \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of dialogue safety solutions (NA as Not Applicable), where Detec. and Gen. denote the detection and safe response generation abilities, respectively. For annotation, Safety (Offline) means the model requires safety labels, while Safety (Online) requires annotators to provide feedback online. Besides, "Response" denotes the requirement of response labels.**
et al. (2021); Peng et al. (2020); Hosseini-Asl et al. (2020) migrate this pipeline into language models (e.g. GPT-2), as an auto-regressive generation task. Besides, some hybrid work Zhao et al. (2022); Sun et al. (2021) attempts to unify chitchat and TOD.
All three types of dialogue models face the unsafe response generation problem of language models.
**Dialogue Safety** Dialogue safety ensures that dialogue models generate polite, professional, and unbiased responses while talking to users. Similar to hate speech detection, some work collects dialogue corpora van Aken et al. (2018); Baheti et al. (2021); Sun et al. (2021) from online comments on Reddit and Twitter. In contrast, adversarial human-machine dialogues Dinan et al. (2019); Roller et al. (2020) were collected to repair the failures of dialogue models. These corpora are used in "baked-in" fine-tuning Roller et al. (2020) or as reward signals for reinforcement learning Glaese et al. (2022). Besides, some work Silva et al. (2022) treats dialogue detoxification as a style transfer task that identifies and replaces unsafe keywords. In addition, there are ample works focusing on concrete safety fields. For gender and racial bias, Liu et al. (2020) constructed contrastive corpora between groups and then trained classifiers and a GAN model Liu et al. (2020) to remove the bias. There are also other safety risks: Baheti et al. (2021) identified the risks of dialogue stance, while Sheng et al. (2021) pointed out that personas in chitchat can also introduce bias into dialogues.
## 3 Problem Formulation
We treat unsafe dialogue response healing as a weak-supervised or unsupervised learning task. Given a dialogue context \(u_{i}\) and an unsafe response \(r^{\prime}_{i}\), TEMP aims to generate a _context-safe_ response \(\hat{r}^{s}_{i}\) with the rephrasing model \(p(\hat{r}^{s}_{i}|r^{\prime}_{i},\theta_{p})\). Besides, we want \(\hat{r}^{s}_{i}\) to be context-related to \(u_{i}\), i.e., it should talk about similar topics in chitchat, or convey the same dialogue information (i.e. actions) in TODs. In addition, following Kulhanek et al. (2021), in TOD we consider healing between delexicalized responses, e.g. "The phone number is [phone].".
Unlike supervised learning with human-created examples \((u_{i},r^{\prime}_{i},\hat{r}^{s}_{i})\), our TEMP attempts to train \(p(\hat{r}^{s}_{i}|r^{\prime}_{i})\) with raw dialogue examples \((u_{i},r_{i})\). We also propose the variant of TEMP that trains with \((u_{i},r_{i},y_{i})\), where \(y_{i}\in\{0,1\}\) is the safety classification label denoting whether response \(r_{i}\) is safe or not.
## 4 Methodology
We propose TEMP to address unsafe dialogue response healing with training sets \((u_{i},r_{i})\) and \((u_{i},r_{i},y_{i})\), respectively. For unsupervised learning, we built TEMP based on the observation that response clusters obey the long-tail distribution and in head clusters most of the responses are safe.
Figure 2: **Pseudo label sampling in our TEMP, which aims to sample multiple potentially safe labels (bottom left) for each dialogue example (upper left) via the two-step clustering (upper right) and adaptive sharpening methods (bottom right).**
Besides, we provide a variant version of TEMP for weak-supervised learning with classification labels \(y_{i}\).
### Vanilla Pseudo Response Sampling
Pseudo-response sampling aims to sample the response that is more likely to be safe. As shown in Algorithm 1, it consists of three stages, including _context clustering_, _content clustering_, and _response sampling_.
**Context Clustering.** For raw training set \(\mathcal{D}_{tr}\), we first cluster responses by their context information, for gathering responses with similar topics. In detail, in chitchat, we use the representation of dialogue utterance as context embedding, and in TOD we simply gather the responses with the same intent-slot combinations, e.g. "Inform-Phone; Request-Price".
**Content Clustering.** For the response set \(\mathcal{R}_{c}\) of topic \(c\), we cluster these responses again, but now based on their own semantic representations, in order to separate different statements. In detail, in chitchat we construct clusters from the sentence embeddings of the responses, while in TOD only identical responses belong to the same cluster, because responses are frequently repeated.
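For illustration, a minimal sketch of the two-step clustering in the chitchat setting is given below. It is not the authors' exact implementation: the sentence encoder, the cosine metric, and the content-clustering `min_samples` value are assumptions, while the `eps=0.22` and neighbor count of 150 mirror the values reported in Section 5.2.

```python
# Illustrative two-step clustering for chitchat (encoder and metric are assumed).
from sklearn.cluster import DBSCAN
from sentence_transformers import SentenceTransformer  # assumed sentence encoder

encoder = SentenceTransformer("all-MiniLM-L6-v2")       # hypothetical checkpoint choice

def cluster(texts, eps=0.22, min_samples=150):
    """Run DBSCAN on normalized sentence embeddings; returns a cluster id per text."""
    emb = encoder.encode(texts, normalize_embeddings=True)
    return DBSCAN(eps=eps, min_samples=min_samples, metric="cosine").fit_predict(emb)

# Step 1 (context clustering): group training responses by their dialogue contexts.
def context_clustering(contexts):
    return cluster(contexts)

# Step 2 (content clustering): within one topic cluster, re-cluster the responses
# by their own semantics; min_samples here is an assumption, not a reported value.
def content_clustering(responses):
    return cluster(responses, min_samples=2)
```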
**Response Sampling.** For a response \(r_{i}\), we want to sample a safer response \(r_{i}^{s}\) from its similar-context cluster \(\mathcal{R}_{c}\). However, conventional sampling methods like _random sampling_ might not work well (see Theorem 1 in Appendix C) on real-world training corpora. Therefore, to sharpen the gap between potentially safe and unsafe response clusters, we use convex functions (see Theorem 2 in Appendix C) to warp the cluster distribution \(M(\mathcal{R}_{c})\) into the sampling distribution. In detail, _TEMP_ uses two normalized functions to sharpen the distribution gap: a temperature-based _softmax_ and a _max_ function3.
Footnote 3: The \(max\) function may reduce the diversity of dialogue models. In general, diversity is important for chitchat.
Based on the sampled cluster, we can select a response \(r_{c,i}^{p}\) from it as the target response. Hence, the rephrasing loss can be formulated as:
\[\begin{split}\mathcal{L}_{p}&=\sum_{i}\log p_{\theta}(r_{c,i}^{p}|r_{i})\\ &=\sum_{i}\sum_{t}\log p_{\theta}(x_{i,t}|r_{i},x_{i,<t}),\end{split} \tag{1}\]
where \(x_{i,t}\) denotes the \(t\)-th token of \(r_{c,i}^{p}\).
### Tempering Sampling
Based on Equation (1), we propose some extra tricks on vanilla sampling, including _adaptively sharpening_, _tempering training_, and _multi-target training_.
**Adaptively Sharpening.** This strategy sharpens the distribution greatly if the head cluster concentrates most of the samples, while it relaxes the distribution if the head cluster is of a similar size to the other clusters. Inspired by the dynamic threshold trick (Xu et al., 2021), we first estimate the steepness of \(M(\mathcal{R}_{c})\) with a sensitivity indicator \(SI\):
\[SI=\frac{N_{1}-N_{2}}{\max(N_{1}-N_{2},\epsilon)}, \tag{2}\]
where \(N_{1}\) and \(N_{2}\) denote the length of top-2 clusters, and \(\epsilon=10^{-3}\) is a small number. Then, we can modify the original softmax sampling to
\[f_{exp}^{\prime}(M(\mathcal{R}_{c}))=\text{softmax}\frac{M(\mathcal{R}_{c})} {SI\cdot\tau}=\frac{\text{exp}(\frac{N_{i}}{SI\cdot\tau})}{\sum_{j}\text{exp} (\frac{N_{j}}{SI\cdot\tau})}. \tag{3}\]
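As a concrete reading of Eqs. (2)-(3), the following sketch turns the cluster-size histogram \(M(\mathcal{R}_{c})\) into a sharpened sampling distribution; the temperature value and the extra guard used when \(SI=0\) are assumptions rather than details taken from the paper.

```python
# Sketch of adaptively sharpened cluster sampling (Eqs. 2-3); tau is assumed.
import numpy as np

def sharpened_distribution(cluster_sizes, tau=1.0, eps=1e-3):
    N = np.asarray(cluster_sizes, dtype=float)
    top = np.sort(N)[::-1]
    si = (top[0] - top[1]) / max(top[0] - top[1], eps)  # sensitivity indicator, Eq. (2)
    si = max(si, eps)                                   # assumed guard when N1 == N2
    logits = N / (si * tau)                             # Eq. (3): sharper when head dominates
    logits -= logits.max()                              # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def sample_cluster(cluster_sizes, rng=np.random.default_rng(0)):
    p = sharpened_distribution(cluster_sizes)
    return rng.choice(len(cluster_sizes), p=p)          # index of the chosen cluster
```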
**Tempering Training.** As illustrated in Algorithm 2, we divide the training procedure into a series of sub-training stages. In this way, a dialogue example is assigned different pseudo labels in different sub-stages, which encourages models to learn a generalized rephrasing ability rather than memorizing specific sampled pseudo responses.
**Multi-target Training.** Multi-target training forces dialogue models to generate not just one pseudo label but several target responses, which in effect expects the model to generate a nonexistent center response. Formally, the loss function in Equation (1) can be modified as:
\[\begin{split}\mathcal{L}_{mp}&=\frac{1}{M}\sum_{l}\mathcal{L}_{p,l}\\ &=\frac{1}{M}\sum_{i}\sum_{l=1}^{M}\log p_{\theta}(r_{c,i,l}^{p}|r_{i}),\end{split} \tag{4}\]
where \(r^{p}_{c,i,l}\) denotes the \(l\)-th target pseudo response of \(r_{i}\), and \(M\) denotes the number of targets.
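A minimal training-loop sketch combining tempering and multi-target learning is given below; `sample_pseudo_labels` is a hypothetical stand-in for the sharpened sampling above, and the stage and target counts are illustrative rather than the reported settings.

```python
# Illustrative tempering + multi-target fine-tuning of the T5 rephraser.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tok = T5TokenizerFast.from_pretrained("t5-small")
opt = torch.optim.AdamW(model.parameters(), lr=3e-5)

def train_temp(examples, num_stages=4, num_targets=5):
    for stage in range(num_stages):                      # tempering: one sub-stage per pass
        for response, cluster in examples:               # (r_i, its content clusters)
            targets = sample_pseudo_labels(cluster, k=num_targets)  # hypothetical helper
            loss = 0.0
            for pseudo in targets:                       # multi-target loss, Eq. (4)
                batch = tok(response, return_tensors="pt")
                labels = tok(pseudo, return_tensors="pt").input_ids
                loss = loss + model(**batch, labels=labels).loss
            (loss / num_targets).backward()
            opt.step()
            opt.zero_grad()
```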
### Sampling with Safety Labels
We propose a variant of TEMP for dialogue datasets \((u_{i},r_{i},y_{i})\) that provide safety classification labels \(y_{i}\). With a classifier \(f_{d}(c_{i},r_{i})\rightarrow\hat{y}_{i}\), we filter out all responses predicted to be unsafe before context clustering, which leads to safer pseudo labels than before, and to more diverse responses than template-based solutions.
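As a sketch, this weak-supervision variant only adds a filtering step before context clustering; `safety_classifier` below stands in for any of the detectors in Table 1 and is not a specific API.

```python
# Keep only responses predicted safe (y_hat == 1) before context clustering.
def filter_unsafe(pairs, safety_classifier):
    """pairs: iterable of (context u_i, response r_i)."""
    return [(u, r) for (u, r) in pairs if safety_classifier(u, r) == 1]
```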
## 5 Experiments
### Settings
**Datasets.** We use DiaSafety Sun et al. (2021), a comprehensive dialogue safety dataset as our evaluation benchmark. It consists of 11K contextual dialogues under 7 unsafe subaspects in chitchat. Besides, we construct a polluted version of MultiWoz 2.1 Eric et al. (2020), to quantify the information missing in dialogue healing. The details of corpus pollution can be seen in Appendix B.
**Baselines.** We compared TEMP with two types of models: existing dialogue models and safety layers. For dialogue models, Blenderbot-40M Roller et al. (2020) and DialoGPT Zhang et al. (2020) are used in chitchat, while models like AuGPT Kulhanek et al. (2021), SimpleTOD Hosseini-Asl et al. (2020), SOLOIST Peng et al. (2020), SCGPT Peng et al. (2020), and SCLSTM Wen et al. (2015) are compared in task-oriented scenarios. For safety layers, we compared TEMP with several SOTA safety models and APIs, including Detoxify Hanu (2020), Perspective API Lees et al. (2022), BBF Dinan et al. (2019), and BAD Roller et al. (2020).
**Evaluation Metrics.** We evaluate TEMP along three dimensions, i.e., safety, quality, and information. For safety, we use the fraction of safe responses after detoxifying as the Safety score, and calculate the unsafe rate at the dialogue and turn level, reported as D-Unsafe (DPR) and R-Unsafe (RPR), respectively. For information, following Kale and Rastogi (2020), we use Success and BLEU for TODs, and the conditional perplexity (Forward-PPL and Backward-PPL) to evaluate the context correlation in chitchat. For quality, we measure the Diversity,
\begin{table}
\begin{tabular}{l|c|c c c c c|c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Safety\(\uparrow\)} & \multicolumn{5}{c|}{Quality} & \multicolumn{3}{c}{Informativeness} \\ \cline{3-10} & & & Accept. \(\uparrow\) & Engage. \(\uparrow\) & AvgLen \(\uparrow\) & DIST2 \(\uparrow\) & Entropy \(\uparrow\) & F-PPL \(\downarrow\) & B-PPL \(\downarrow\) \\ \hline Raw Sun et al. (2021) & 54.25 & 85.41 & 42.35 & 14.64 & 0.63 & 9.14 & 45.78 & 79.51 \\ +Detoxify Hanu (2020) & 75.16 & 89.55 & 27.62 & 13.35 & 0.53 & 8.30 & 43.89 & 79.98 \\ +PersAPI Lees et al. (2022) & 76.80 & 90.64 & 26.31 & 13.20 & 0.51 & 8.15 & 43.26 & 80.05 \\ +BBF Dinan et al. (2019) & **79.63** & **90.83** & 25.53 & 13.21 & 0.51 & 8.08 & 43.39 & 80.09 \\ +BAD Roller et al. (2020) & 77.99 & 90.58 & 26.78 & 13.12 & 0.52 & 8.17 & 43.14 & 80.04 \\ \hline +Detoxify+TEMP (ours) & 73.70 & 88.74 & 46.67 & 14.06 & **0.55** & **8.79** & 40.75 & **79.76** \\ +BBF+TEMP (ours) & 77.17 & 89.68 & 48.75 & **14.12** & 0.52 & 8.68 & 39.68 & 79.81 \\ +BAD+TEMP (ours) & 76.16 & 90.61 & **49.90** & 14.06 & 0.53 & 8.69 & **39.85** & 79.80 \\ \hline Blenderbott Roller et al. (2020) & 54.79 & 89.07 & 65.74 & 2.98 & 0.54 & 7.22 & 18.85 & 80.73 \\ +Detoxity Hanu (2020) & 63.47 & 89.55 & 45.29 & 4.26 & 0.27 & 4.67 & 23.75 & 81.03 \\ +PerAPI Lees et al. (2022) & 63.93 & **92.56** & 41.95 & **4.49** & 0.24 & 4.39 & 24.54 & 81.09 \\ +BBF Dinan et al. (2019) & **64.02** & 92.51 & 41.10 & 4.48 & 0.24 & 4.29 & 24.57 & 81.10 \\ +BAD Roller et al. (2020) & 63.65 & 91.78 & 43.57 & 4.32 & 0.26 & 4.51 & 24.00 & 81.06 \\ \hline +Detoxify+TEMP (ours) & 59.27 & 91.18 & **68.36** & 3.19 & **0.43** & **6.86** & 17.42 & **80.82** \\ +BBF+TEMP (ours) & 57.90 & 91.51 & 68.12 & **3.26** & 0.41 & 6.79 & 17.13 & 80.84 \\ +BAD+TEMP (ours) & 59.91 & 91.34 & 68.11 & 3.21 & 0.42 & 6.79 & **17.32** & 80.83 \\ \hline DialoGPT Zhang et al. (2020) & 73.51 & 95.67 & 34.74 & 9.49 & 0.26 & 6.89 & 18.41 & 80.48 \\ +Detoxity Hanu (2020) & 73.42 & 89.55 & 26.22 & 8.84 & 0.23 & 6.18 & 23.28 & 80.81 \\ +PersAPI Lees et al. (2022) & 73.79 & 95.84 & 25.08 & 8.76 & 0.22 & 6.03 & 23.76 & 80.85 \\ +BBF Dinan et al. (2019) & 73.42 & 95.90 & 23.46 & 8.66 & 0.21 & 5.85 & 24.61 & 80.91 \\ +BAD Roller et al. (2020) & 73.70 & 95.89 & 24.50 & 8.68 & 0.22 & 5.98 & 23.82 & 80.86 \\ \hline +Detoxify+TEMP (ours) & 75.80 & 95.47 & 40.51 & 9.53 & **0.25** & **6.99** & **19.53** & **80.62** \\ +BBF+TEMP (ours) & **76.89** & 95.42 & 41.23 & **9.58** & 0.24 & 6.95 & 19.81 & 80.66 \\ +BAD+TEMP (ours) & 75.98 & **96.12** & **45.75** & 9.53 & 0.25 & 6.96 & 19.57 & 80.64 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Safety evaluation on DiaSafety by safety accuracy and quality metrics.
Acceptance, and the Engagingness of responses. Details are given in Appendix A.
### Implementation Details
We use T5-small Raffel et al. (2020) as the backbone of our rephrasing model, with a maximum input sequence length of \(128\) and a maximum output sequence length of \(128\). We trained all TOD TEMP models \(5\) times with a learning rate of \(3e-5\) and a batch size of \(1\) for \(500\) steps, based on the implementation of Huggingface Transformers Wolf et al. (2020). For chitchat, we train TEMP models for \(2\) epochs and use DBSCAN as the clustering algorithm, setting the nearest neighbor number to \(150\) and the epsilon to \(0.22\) for unsupervised rephrasing. All experiments run on a single 32G Nvidia Tesla V100 GPU.
### Safety Evaluation
We first evaluate the safety improvements of TEMP. As illustrated in Table 3, we collect all **unsafe** samples in the test set of DiaSafety and calculate the safety score of unsupervised TEMP models. We define three settings for the unsafe fraction of the training set, i.e. Simple (-S), Medium (-M), and Hard (-H), with fractions of 0.04, 0.1, and 0.3, respectively. From Table 3 we see that TEMP improves the safety of the raw corpus, where adaptive sharpening (AS) plays an important role. Also, we can trade off safety against diversity via the cluster threshold epsilon.
Besides, we also consider a variant of TEMP trained with classification labels. Under this situation, we can rephrase unsafe dialogues after the filtering of safety classifiers. Table 2 has shown the results for both the test set and dialogue models. From the improvements in DIST and Entropy, we see that TEMP alleviates the shortcoming of trivial response generation for template replacement. Besides, the decrease in perplexity demonstrates that TEMP has a better correlation to dialogue contexts. In addition, results in some regular sentence metrics (e.g. Acceptability and Engagingness) indicate that responses after healing have a higher quality.
### Information-missing Experiments
We then evaluate the information-missing problem of dialogue healing models. Specifically, we use the information-relevant metrics (e.g. Success) in task-oriented dialogues (TODs) to quantify the efficacy of our TEMP and evaluate the unsafe rate at the response (R-Unsafe) and dialogue (D-Unsafe) level. As shown in Table 4, we find that current TOD models with the unsupervised version of TEMP decrease the risks of generating unsafe responses with little cost (lower than 1% for TEMP-wta4 enhanced end-to-end models) in Success and BLEU-4. However, TEMP with "exp" sometimes tends to sample unsafe responses: e.g., SOLOIST with TEMP-exp has a much higher probability of replying impolitely than vanilla SOLOIST, which may be because a corpus with a high unsafe fraction does not satisfy the strict safe-majority condition of Theorem 2; in this situation, a hard max function (i.e. wta) obtains safer results than softmax.
Footnote 4: wta denotes the \(max\) sampling and exp denotes the \(softmax\) sampling.
### Ablation Study
To further study the effectiveness of each component in TEMP, we design ablation experiments. As shown in Table 5, tempering learning (TL), multi-target learning (MT), and the bare sampling methods all play important roles in TEMP. Specifically, with WTA sampling we observe that both MT and TL can alleviate the information-missing problem of vanilla sampling, while MT further reduces unsafe response generation. On the contrary, under the EXP setting TL reduces the unsafe rate of models, while MT brings no notable improvements. That may be because TEMP+EXP still samples unsafe examples with a small but non-zero probability, so the multi-target labels are more likely to include unsafe ones.
## 6 Model Internal Analysis
### Boundary Experiments
We first reveal the lowest pollution data fraction in the training corpus, as shown in Figure 3. We
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Method & Entropy\(\uparrow\) & B-PPL\(\downarrow\) & Safety \(\uparrow\) \\ \hline Test Set & 8.649 & 82.256 & 0 \\ \hline +random & 7.051 & 84.027 & 52.295 \\ +Detoxify & 7.681 & **82.845** & 45.709 \\ +PersAPI & 7.616 & 82.881 & 54.092 \\ +BBF & 7.416 & 82.991 & 55.489 \\ +BAD & 7.471 & 82.968 & 51.896 \\ \hline +TEMP-S eps=0.22 & 4.577 & 84.712 & **78.244** \\ +TEMP-S eps=0.42 & **8.104** & 83.251 & 29.741 \\ +TEMP-S w.o. AS & 5.384 & 84.035 & 50.299 \\ +TEMP-M & 5.124 & 84.997 & 61.876 \\ +TEMP-H & 5.953 & 84.849 & 43.713 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental results on the purely unsafe test set.
see that with a fraction of 0.01 there is no sensitive response generation for generalized neural-pipeline-based dialogue systems. With the help of TEMP, we achieve zero sensitive response generation up to a fraction of 0.04. In addition, the variance of the three curves grows much larger with an increasing pollution fraction, which demonstrates that a high pollution rate causes much higher generation risks.
### Visualization
As shown in Figure 4, we visualize the clusters of each topic at the _content clustering_ stage. In detail, we only display responses in the top-10 clusters and mark the top 3 clusters with _blue_, _orange_, and _green_, and the others with _red_. Figure 4 shows that the head clusters have a much lower probability of yielding unsafe (cross point) responses.
### Tempering Training
To investigate the concrete effectiveness of tempering learning, we trained TEMP models with tempering stages varying from 1 to 7. As illustrated in Figure 5, tempering learning helps TEMP maintain success rate and BLEU, and there are no notable differences after tempering number 4. However, a high tempering number may cause an increase in DPR and RPR, because long training procedures force the model to memorize training examples, which hurts the response rephrasing.
### Multi-target Training
Similar to tempering learning, we trained the TEMP model with different target numbers from 1 to 7, and the results can be found in Figure 6. We see that multi-target learning indeed decreases the pollution rate greatly, no matter at the dialogue level or utterance level. Furthermore, both DPR and RPR decrease to zero when the target number
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline \multirow{2}{*}{**T**Eruption} & \multicolumn{4}{c}{**Low Fraction**} & \multicolumn{4}{c}{High Fraction**} \\ \cline{2-9} & **Success**\%(\uparrow) & **BLEU**\%(\uparrow) & **D-Unsafe(\downarrow)(\%) & **R-Unsafe(\downarrow)(\%)** & **Success**\%(\uparrow) & **BLEU**\%(\uparrow) & **D-Unsafe(\downarrow)** & **R-Unsafe(\downarrow)** \\ \hline \multirow{2}{*}{AugPT} & 71.48 & 18.04 & 0.0072 & 9.762 & 68.18 & 18.05 & 0.1624 & 0.0247 \\ & 70.92 & 16.97 (1.10) & 0.0004 (1.955) & 0.5424 (1.955) & 66.80 (1.38) & 15.46 (2.5) & 0.292 (3.005) & 0.0461 (1.875) \\ & 71.18 (\(\pm\) 0.30) & 17.17 (\(\pm\) 0.57) & 0.0018 (1.735) & 2.441 (1.75) & 67.78 (0.40) & 17.06 (5.97) & 0.0438 (1.936) & 0.0601 (7.576) \\ \hline SOLOIST & 71.96 & 17.86 & 0.0090 & 12.21 & 69.42 & 18.08 & 0.1350 & 0.0203 \\ & 71.00 (\(\pm\) 0.96) & 16.82 (1.01) & 0.0020 (1.87) & 2.712 (1.78) & 67.72 (1.70) & 15.52 (1.25) & 0.2682 (1.986) & 0.0418 (1.059) \\ & 71.52 (\(\pm\) 0.44) & 17.16 (\(\pm\) 0.70) & 0.0028 (1.699) & 3.797 (1.69) & 69.16 (0.26) & 17.05 (\(\pm\) 1.02) & 0.0248 (1.905) & 0.0029 (1.855) \\ \hline \hline \multicolumn{9}{l}{+ TEMP(rand)} & 71.43 (\(\pm\) 0.53) & 17.09 (\(\pm\) 0.77) & 0.003 (5.58) & 57.63 (1.53) & 67.12 (\(\pm\) 2.30) & 16.35 (1.73) & 0.1376 (1.91\%) & 0.0205 (1.99\%) \\ \hline \multicolumn{9}{l}{SimpletTOD} & 69.90 & 18.01 & 0.0070 & 9.492 & 66.98 & 17.82 & 0.1730 & 0.0263 \\ & 64.07 (\(\pm\) 5.83) & 16.67 (1.34) & 0.0000 (1.007) & 0.04 (1.007) & 65.28 (1.70) & 16.69 (1.13) & 0.0846 (1.51\%) & 0.0118 (5.598) \\ & 67.96 (\(\pm\) 1.94) & 17.01 (\(\pm\) 1.00) & 0.000 (1.007) & 0.04 (1.007) & 65.62 (1.36) & 17.29 (\(\pm\) 0.53) & 0.0304 (1.82\%) & 0.0041 (1.84\%) \\ \hline \multicolumn{9}{l}{SCGPT} & - & 15.68 & - & 0.0 & - & 15.45 (\(\pm\) 0.94) & - & 0.0096 \\ & - & 14.32 (\(\pm\) 1.36) & - & 0.0 (\(\pm\) 0.04) & - & 12.51 (\(\pm\) 0.94) & - & 0.0062 (3.396) \\ & - & 14.74 (\(\pm\) 0.96) & - & 0.0 (\(\pm\) 0.03) & - & 14.85 (\(\pm\) 0.60) & - & 0.0011 (1.899\%) \\ \hline \multicolumn{9}{l}{SCLSTM} & - & 26.38 & - & 0.0 & - & 26.07 & - & 5.58e- \\ & - & 22.76 (\(\pm\) 3.63) & - & 0.0 (\(\pm\) 0.03) & - & 21.48 (\(\pm\) 4.59) & - & 1.08e-2 (1 18\%) \\ & - & 23.01 (\(\pm\) 3.37) & - & 0.0 (\(\pm\) 0.03) & - & 22.84 (\(\pm\) 3.23) & - & 5.58e-5 (0 \%) \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Information-missing experiments under TOD settings**, where SimpleTOD (Hosseini-Asl et al., 2020), AuGPT (Kulhanek et al., 2021) and SOLOIST (Peng et al., 2020) are end-to-end methods, and SCGPT (Peng et al., 2020) and SCLSTM (Wen et al., 2015) are NLG methods in the pipeline. We propose two sampling strategies of TEMP, "exp" (softmax) and "wta" (max).
Figure 3: Abnormal fraction boundary experiments, the variances were scaled by 0.08, 0.08, 0.5, and 0.5 for each plot.
\begin{table}
\begin{tabular}{l|l l l l} \hline \hline TEMP & Success & BLEU-4 & D-Unsafe & R-Unsafe \\ \hline WTA (raw) & 59.00 & 15.73 & 1.30 & 1.898 \\ \hline + MT & 65.60 & 15.62 & **1.00** & **1.356** \\ +TL & 68.18 & 16.86 & 4.50 & 6.210 \\ +All & **68.40** & **17.42** & 1.60 & 2.169 \\ \hline EXP (raw) & 66.10 & 16.83 & 6.90 & 10.17 \\ \hline + MT & 67.90 & **17.43** & 7.80 & 11.39 \\ +TL & 60.60 & 16.52 & **3.00** & **4.203** \\ +All & **68.00** & 17.31 & 7.10 & 10.32 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Ablation experiments for TEMP,** where TL denotes tempering learning, MT denotes multi-target learning, and All denotes all the methods.
is 5 or more. Another finding is the improvement in response informativeness, where both Success and BLEU grow back to values close to those of the original model, which indicates that multi-target learning is highly effective.
## 7 Conclusion and Future Work
This paper studies the problem of unsupervised unsafe dialogue response healing. To address this problem, we present a pseudo-label sampling strategy TEMP, which helps to select multiple potential safe response labels by dynamically sharpened sampling. Our TEMP is based on the long-tail clustering observation that unsafe examples are usually distributed in tail clusters, and it works well in real-world corpora under unsupervised or weak-supervised scenarios. Extensive experiments demonstrate the superiority of our proposed method. In the future, to improve interpretability as well as detoxifying ability in specific domains, we plan to explore knowledge-enhanced dialogue safety models.
## 8 Limitations
TEMP cannot work well in fields where unsafe samples are dominant. Figure 4 (e, f, g, h) shows the detoxifying ability of TEMP in such a situation, while in Table 4 SOLOIST+TEMP (exp) shows a higher unsafe rate than the original SOLOIST. Besides, the variant of TEMP is quite simple, and some other weak-supervised learning methods might be used
Figure 4: Visualization of Content Clustering in Simple Corpus (a, b, c, d) and Hard Corpus (e, f, g, h).
Figure 5: Varying tempering number experiments; the variances were scaled by 3e+3, 2e+3, 1e+3, and 2e+3 for each plot.
Figure 6: Varying target number experiments; the variances were scaled by 3e+3, 2e+3, 1e+3, and 2e+3 for each plot.
on TEMP, like offline-policy reinforcement learning and semi-supervised learning. In addition, limited by our hardware environment, we only implemented TEMP with T5-small, leaving TEMP with large-scale (1B+) language model backbones unexplored.
|
2306.03465 | A generative framework for conversational laughter: Its 'language model'
and laughter sound synthesis | As the phonetic and acoustic manifestations of laughter in conversation are
highly diverse, laughter synthesis should be capable of accommodating such
diversity while maintaining high controllability. This paper proposes a
generative model of laughter in conversation that can produce a wide variety of
laughter by utilizing the emotion dimension as a conversational context. The
model comprises two parts: the laughter "phones generator," which generates
various, but realistic, combinations of laughter components for a given speaker
ID and emotional state, and the laughter "sound synthesizer," which receives
the laughter phone sequence and produces acoustic features that reflect the
speaker's individuality and emotional state. The results of a listening
experiment indicated that conditioning both the phones generator and the sound
synthesizer on emotion dimensions resulted in the most effective control of the
perceived emotion in synthesized laughter. | Hiroki Mori, Shunya Kimura | 2023-06-06T07:35:24Z | http://arxiv.org/abs/2306.03465v1 | A Generative Framework for Conversational Laughter: Its 'Language Model' and Laughter Sound Synthesis
###### Abstract
As the phonetic and acoustic manifestations of laughter in conversation are highly diverse, laughter synthesis should be capable of accommodating such diversity while maintaining high controllability. This paper proposes a generative model of laughter in conversation that can produce a wide variety of laughter by utilizing the emotion dimension as a conversational context. The model comprises two parts: the laughter "phones generator," which generates various, but realistic, combinations of laughter components for a given speaker ID and emotional state, and the laughter "sound synthesizer," which receives the laughter phone sequence and produces acoustic features that reflect the speaker's individuality and emotional state. The results of a listening experiment indicated that conditioning both the phones generator and the sound synthesizer on emotion dimensions resulted in the most effective control of the perceived emotion in synthesized laughter.
Hiroki Mori\({}^{1}\), Shunya Kimura\({}^{1}\)
\({}^{1}\)Utsunomiya University, Japan
[email protected]
**Index Terms**: laughter synthesis, generative model, language model of laughter, emotional conditioning
## 1 Introduction
Laughing is a basic and essential emotional behavior for humans. Nevertheless, almost all of the conversational agents that interact with humans do not laugh. Part of the reason for this is that we ourselves do not understand well why, when, and how we laugh. A recent study on conversational robots by a Kyoto-U team aimed at the positive effect of the robot's laughter on empathy [1]. By focusing on "shared laughter," they addressed the _when_ problem. For the _how_ problem, however, they avoided laughter synthesis and randomly picked one from pools of "mirthful" or "social" laughs.
The current laughter synthesis study focuses on _how_ conversational agents should laugh. Laughter synthesis is an emerging technology and is gaining importance as human-agent interaction becomes more advanced and popular in our daily lives [2, 3, 4, 5, 6, 7]. A large part of previous work has employed a framework similar to text-to-speech systems. An open problem here is how to construct input sequences for the synthesizer. As the phonetic structure and the functional aspects of laughter in conversation have not been fully understood, most previous work simply used exemplars of natural laughter as input, which limits flexibility. Recent research in non-speech vocalization synthesis [8] also points to the need for some kind of "language model".
This paper proposes a generative model of laughter in conversation that can produce a wide variety of laughter. A highlighted feature is the "language model" of laughter, which serves as a laughter sequence generator. This model generates various but realistic combinations of laughter components for a given speaker ID and emotional state. The generated sequence is then fed into the laughter "sound synthesizer," which produces acoustic features that reflect the speaker's individuality and emotional state.
In this paper, we will be using specific laughter-related terminology, following [9]. A "laughter episode" will refer to a series of acoustic events that correspond to exhalation or inhalation. A "bout" will refer to an event that corresponds to an exhalation and is composed of one or more laughter calls. A "call" will be used to describe an individual unit of laughter, analogous to a syllable. Therefore, a typical bout "hahaha" is a 3-call bout.
## 2 Morphology of laughter sounds
A typical method for collecting laughter data has been induction by funny movies [10, 11]. Provine criticized past studies for focusing solely on audience-oriented, passive laughter [12]. He argued that laughter is social and that speakers actually laugh more than listeners. Since we are interested in laughter in agent-human interaction, we need to collect laughter that occurs naturally in conversation. In this study, we used the Online Gaming Voice chat Corpus (OGVC) [13], a speech corpus containing spontaneous dialogue during massively multiplayer online role-playing games (MMORPGs), which has a larger number of laughs than other Japanese conversational corpora used in emotion studies.
Bout- and call-level annotation was performed for the top three speakers with the highest frequency of laughter in OGVC. An example of the annotation is shown in Fig. 1. The annotation has a hierarchical structure: Bouts and inhalation sounds that comprise each laughter episode were annotated, as well as calls that comprise each bout.
The consonant and vowel of each call were transcribed as a romanization of a Japanese syllable, rather than phonetically. Therefore, laughter vowels are classified as one of a, e, i, u, or o. The proportions of vowels are shown in Fig. 2. The most common vowel was /u/, followed by /a/. However, these are not contrastive, and most laughter sounds are realized around the mid central vowel [3].
In addition to consonants and vowels, phonetic variants, including unvoiced (e.g. hu), nasal (e.g. h\(\bar{\text{u}}\)), and consonant prolongation (e.g. hu), were also transcribed. Among them, the voicelessness of laughter sound has received much attention due to its functional importance. For example, voiced laughter induces significantly more positive emotional responses in listeners than unvoiced laughter does [14].
The proportion of bout length (number of calls) is shown in Fig. 3. It is worth noting that the proportion of single-call bouts is surprisingly large. The proportion of unvoiced calls in single-call bouts (55.4 %) is significantly larger than that in multi-call bouts (18.7 %). This suggests that single-call bouts tend to be accompanied by negative emotions [14].
Individual inhalation sounds were identified as h (unvoiced) or H (voiced), and annotated at the same tier as bouts. Inhalation sounds often accompany vocal fold vibration (voiced), some of which constitute a main part of a laughter sound. This voiced/unvoiced distinction is crucial because of its relation to perceived emotion. Arimoto et al. [7] showed that laughs containing voiced inhalation sounds tend to be perceived as more pleasant and aroused. Voiced inhalation sounds are also important in characterizing the individuality of laughing speakers. For the top seven OGVC speakers with the highest frequency of laughter, the proportion of episodes with mid-laugh voiced inhalations is less than 1 % for two speakers, around 10 % for three speakers, and 21 % and 27 % for the remaining two speakers. This implies that there are speakers who almost exclusively use egressive laughter, as well as those who frequently produce ingressive laughter.
## 3 Emotion perception from laughter
The morphological variation of laughter depends on its discourse and social context. However, it is difficult to encode such contexts in a comprehensive and adequate way. As a first-order approximation, this study attempts to use the speaker's emotion perceived from laughter as an explanatory variable for modeling laughter forms [7].
This requires an evaluation of the perceived emotion for the laughs in the corpus. For this purpose, emotion categories such as "big six" emotions [15] seem virtually useless. In this study, we annotated the emotion perceived from laughter with two emotion dimensions, pleasantness and arousal. Dimensional descriptions of emotions have a long history and are well established in psychology. A number of studies have stated that two or three dimensions are sufficient to account for a good portion of emotional variation. Among all, the pleasantness (also known as valence) and arousal (also known as activation) dimensions have been regarded as fundamental [16].
Prior to the emotion annotation, the first author checked the laughter sounds of a male speaker 04_MSY and a female speaker 06_FWA, then filtered out subtle or less audible ones, which yielded 125 and 100 laughter episodes for the two speakers as our laughter dataset.
The two authors individually annotated the perceived pleasantness (1: extremely unpleasant, 7: extremely pleasant) and arousal (1: extremely sleepy, 7: extremely aroused). The ground-truth values were obtained by averaging them. Figure 4 shows the distribution of the emotion dimensions for the two speakers. Most laughter sounds were evaluated as more pleasant and more aroused than neutral (4). Mean pleasantness and arousal were 5.86 and 5.19 for the male speaker 04_MSY, and 5.95 and 5.78 for the female speaker 06_FWA.
## 4 Phones generator: The "language model" of laughter
Contrary to the notion that laughter sounds have a homogeneous structure such as "hahaha," "hehehe," or "huhuhu," there are so many variations that a closed lexicon of laughter cannot
Figure 4: Distribution of the ground-truth pleasantness and arousal dimensions evaluated for laughter sounds. Points are jittered to avoid overplotting of laughters with identical values.
Figure 3: Proportions of the number of calls per bout.
Figure 2: Power proportions of calls. Each darker color stands for voiced, and lighter color for unvoiced.
be defined. At the same time, we barely hear laughter sounds such as "hahohaho," which implies that there are some constraints that prescribe possible combinations of laughter calls. Provine [12] suggested biological constraints against producing such mixed-call laughs, but he also pointed out that one can easily switch call types in mid-laugh, as in "hahahoho." His observation implied the existence of some laughter _grammar_, but he did not discuss a computational model of laughter calls that could be applied to laughter synthesis.
A desired laughter language model should not only regulate such possible combinations (as opposed to the random arrangement [7]), but also account for morphological preferences related to discourse and social context. As described in Sect. 2, the length of laughter is related to its emotion. Therefore, we modeled the length first, then the components. Hereafter, we regard either a call or a single inhalation as a component and refer to each component as a "phone." For example, the phone sequence corresponding to the second laughter episode in Fig. 1 is "H hu hu H H H hu hu hu H."
Figure 5 shows the distribution of the laugh length (number of phones) versus pleasantness by black points. As these could be modeled by a Poisson regression, the fitted mean parameter \(\lambda\) (green line) and probability mass function (red bars) are overlaid (here the arousal value was set equal to the pleasantness value for simplicity). A generalized linear model with Poisson distribution was obtained through variable selection using AIC (Akaike Information Criterion) [17]. The fitted laugh length model was as below:
\[\log(\lambda_{i}) =b+0.527x_{i}^{\text{ple}}+0.750x_{i}^{\text{ple}}x_{i}^{\text{aro}}, \tag{1}\] \[y_{i} \sim\text{Pois}(\lambda_{i}), \tag{2}\]
where \(y_{i}\) is the length of \(i\)-th laughter, \(x_{i}^{\text{ple}}\) and \(x_{i}^{\text{aro}}\) are the pleasantness and arousal dimensions, whose range is linearly transformed from \([1,7]\) to \([-1,1]\), and \(b\) is the speaker specific baseline (1.433 for 04_MSY, 0.936 for 06_FWA).
In the generation phase, the decision to stop generating is determined dynamically and randomly. Here we define \(P_{\text{end}}(n)\) as the probability that the \(n\)-th generated phone is the last one:
\[P_{\text{end}}(n)=\frac{f(n;\lambda)}{1-F(n-1;\lambda)}, \tag{3}\]
where \(f(k;\lambda)\) and \(F(k;\lambda)\) are the probability mass function and cumulative distribution function of \(\text{Pois}(\lambda)\), respectively. For each generated phone, an "end-of-laughter" is drawn according to \(P_{\text{end}}(n)\). This ensures that the length distribution of generated laughs follows the Poisson distribution, whose mean is determined by Eq. (1).
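For illustration, the length model and the stopping rule can be written in a few lines; the baseline value corresponds to speaker 04_MSY, and the [1, 7]-to-[-1, 1] rescaling follows the definition given above.

```python
# Sketch of the laugh-length model (Eqs. 1-3) using SciPy's Poisson distribution.
import numpy as np
from scipy.stats import poisson

def laugh_lambda(pleasantness, arousal, baseline=1.433):    # 1.433 = b for 04_MSY
    x_ple = (pleasantness - 4.0) / 3.0                       # rescale [1, 7] -> [-1, 1]
    x_aro = (arousal - 4.0) / 3.0
    return np.exp(baseline + 0.527 * x_ple + 0.750 * x_ple * x_aro)   # Eq. (1)

def p_end(n, lam):
    """Probability that the n-th generated phone is the last one, Eq. (3)."""
    return poisson.pmf(n, lam) / (1.0 - poisson.cdf(n - 1, lam))

lam = laugh_lambda(pleasantness=6, arousal=6)
print(lam, [round(p_end(n, lam), 3) for n in range(1, 8)])
```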
Thirty-two different phones appeared in the laughter dataset described in Sect. 3. By replacing phones that appeared only once (e.g. hia, hi, na) with similar ones, we obtained a phone list comprising 22 different calls and inhalations. In the modeling, phone sequences that constitute each laughter episode were converted into a sequence of 64-dimensional embedding vectors.
Similar to neural language models [18], the call sequence of laughter was modeled with a recurrent neural network. We used an architecture with an LSTM layer with 128 hidden dimensions, a linear layer, and a softmax layer. The dimensionality of the input was 64 (phone embedding) \(\ +\ 1\ (P_{\text{end}}(n))\ +\ 1\ (\text{speaker})\ +\ 2\ (\text{ emotion dimensions})=68\).
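A sketch of this architecture in PyTorch is shown below; the 22-phone inventory, the 64-dimensional embedding, and the 128 hidden units follow the description above, while details such as how the speaker ID and \(P_{\text{end}}(n)\) are fed at each time step are assumptions.

```python
# Sketch of the laughter phones generator (one LSTM layer, 128 hidden units).
import torch
import torch.nn as nn

class PhonesGenerator(nn.Module):
    def __init__(self, n_phones=22, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_phones, emb_dim)
        # input per step: 64-d phone embedding + P_end(n) + speaker + 2 emotion dims = 68
        self.lstm = nn.LSTM(emb_dim + 4, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_phones)

    def forward(self, phone_ids, p_end, speaker, emotion):
        # phone_ids: (B, T); p_end, speaker: (B, T, 1); emotion: (B, T, 2)
        x = torch.cat([self.embed(phone_ids), p_end, speaker, emotion], dim=-1)
        h, _ = self.lstm(x)
        return torch.log_softmax(self.proj(h), dim=-1)   # next-phone log-probabilities
```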
Generated phones resulting from 10 draws for several combinations of emotion dimensions are shown in Fig. 6. Note that these are random draws without any cherry-picking, so many duplicates exist in the lists. From the figure, it is apparent that emotion and speaker individuality are reflected not only in the length of laughter but also in the pattern of laughter phones.
Figure 5: _Laugh length distribution (points) and its probabilistic model with Poisson regression. Points are horizontally jittered to avoid overplotting._
Figure 6: _An excerpt for generated laughter phones (10 draws per condition). See the multimedia file for the complete list._
For example, more pleasant and aroused laughter contains more /a/'s and voiced inhalations.
## 5 Laughter sound synthesizer
The current waveform synthesizer is basically a vocoder-based parametric speech synthesizer [19], which can model human vocalization better than end-to-end models for limited data sizes, such as ours. The input feature set for duration modeling consisted of the identity of the current consonant-vowel (19), 2 phonetic variations (voicedness, nasality) and their left and right context (x3), phone position (1), laughter length (1), and 2 emotion dimensions (67 in total). For acoustic modeling, the phone duration and 3 numerical features for the coarse-coded frame position in the current phone [20] were added to the input, and the 59th-order Mel-cepstrum, \(\log f_{o}\), aperiodicity, their \(\Delta\), \(\Delta\Delta\), and the voicedness were inferred as the output. The network was composed of a three-layer stacked bidirectional LSTM with 128 hidden dimensions and a linear layer. For the subsequent experiment, the model was trained on the 04_MSY dataset, whose waveforms were downsampled to 16 kHz.
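A sketch of the acoustic model is given below; the 71-dimensional input (the 67 features above plus the phone duration and the three frame-position values) and the 3-layer bidirectional LSTM with 128 hidden units follow the text, whereas the output dimensionality depends on the aperiodicity parameterization and is therefore an assumed placeholder.

```python
# Sketch of the frame-level acoustic model feeding the WORLD vocoder.
import torch.nn as nn

class LaughterAcousticModel(nn.Module):
    def __init__(self, in_dim=71, hidden=128, out_dim=190):   # out_dim is an assumption
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, out_dim)

    def forward(self, frames):              # frames: (B, T, in_dim)
        h, _ = self.rnn(frames)
        return self.proj(h)                 # mel-cepstrum, log f0, aperiodicity, deltas, voicedness
```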
## 6 Experiment
To investigate emotion controllability in the proposed laughter synthesis, we conducted an ablation study on both the phones generator and the laughter sound synthesizer. Hereafter, we denote the absence or presence of emotion inputs to the phones generator as -/+phones, and similarly, the absence or presence of emotion inputs to the laughter sound synthesizer as -/+acoust. The emotion inputs were masked at the training and inference stages in the -phones and -acoust conditions. For each of 10 pleasantness and arousal combinations (4, 4), (4, 5), (5, 4), (5, 5), (5, 6), (6, 5), (6, 6), (6, 7), (7, 6), and (7, 7), twenty sequences were generated using the phones generator. Then, the corresponding laughter waveform was synthesized from the acoustic features generated for each sequence using WORLD [21]. The generated phones and synthesized waveforms are provided as the multimedia files for this paper.
For each condition, the first 10 phone sequences were used in the listening test (see Fig. 6). The number of stimuli was 10 (target emotion dimensions) \(\times\) 10 (phone sequences) \(\times\) 2 (-/+phones) \(\times\) 2 (-/+acoust) plus two reference real laughter sounds for subject screening \(\times\) 4 (repetitions) \(=408\). Thirty-one undergraduate and graduate students who were not involved in speech research participated in the listening test. First, they watched a video that described the objectives of the experiment and an introduction to the theory of emotion dimensions. The subjects then used a web interface to listen to the stimulus sounds in a random order and evaluated perceived pleasantness and arousal on a 7-point scale, as in Sect. 3. From the results of the screening test, two subjects were found not to meet our criteria (distinguishing between obviously pleasant/unpleasant laughter and responding consistently to identical stimuli), so their responses were excluded from later analysis.
The perceived pleasantness and arousal for the 400 synthesized laughter sounds were averaged over the subjects. Figure 7 shows the distribution of perceived pleasantness and arousal. For both dimensions, the +phones+acoust model showed the best controllability, as the correlation coefficient is as high as 0.87 (pleasantness) and 0.84 (arousal). This means that the emotion input to the phones generator and the emotion input to the laughter sound synthesizer are individually effective, but the emotion input to both modules is even more effective. A statistical test for the difference between two paired correlations revealed that the correlation coefficient for the +phones+acoust model is significantly higher than that for the +phones-acoust model for both dimensions (\(p<0.01\)).
Best linear models to predict responses from the target dimension were obtained through variable selection using AIC:
\[\hat{y}^{\text{ple}}=0.112-0.301\delta_{\text{phones}}-0.199\delta_{\text{acoust}}+0.141\delta_{\text{phones}}\delta_{\text{acoust}}+(0.669\delta_{\text{phones}}+0.489\delta_{\text{acoust}}-0.321\delta_{\text{phones}}\delta_{\text{acoust}})x^{\text{ple}}, \tag{4}\]
\[\hat{y}^{\text{aro}}=-0.265\delta_{\text{phones}}-0.198\delta_{\text{acoust}}+0.171\delta_{\text{phones}}\delta_{\text{acoust}}+(0.661\delta_{\text{phones}}+0.561\delta_{\text{acoust}}-0.375\delta_{\text{phones}}\delta_{\text{acoust}})x^{\text{aro}}, \tag{5}\]
where \(\delta_{\text{phones}}\) and \(\delta_{\text{acoust}}\) are the dummy (0/1) variables corresponding to the -/+phones and -/+acoust conditions. The coefficients of \(x^{\text{ple}}\) and \(x^{\text{aro}}\) in Eqs. (4) and (5) clearly demonstrate the synergistic effect gained by controlling both the phones generator and the sound synthesizer.
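Reading Eq. (4) directly, the slope of the target-to-perceived mapping under each condition can be computed as below (a small check of the fitted coefficients, not an additional experiment); the +phones+acoust slope is the steepest one.

```python
# Slope of perceived vs. target pleasantness for each ablation condition, Eq. (4).
def pleasantness_slope(d_phones, d_acoust):
    return 0.669 * d_phones + 0.489 * d_acoust - 0.321 * d_phones * d_acoust

for cond in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(cond, round(pleasantness_slope(*cond), 3))
# (0, 0) 0.0   (1, 0) 0.669   (0, 1) 0.489   (1, 1) 0.837
```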
## 7 Conclusions
In this paper, we proposed a generative model for laughter in conversation, which allows for the production of a wide variety of laughter that can be controlled by emotion dimensions. Our results indicate that conditioning both the phones generator and the laughter sound synthesizer on emotion dimensions is most effective in controlling perceived pleasantness (\(R=0.87\)) and arousal (\(R=0.84\)).
One limitation of the current study is the lack of scalability, as call-level annotation for new datasets could become a bottleneck. Although state-of-the-art speech recognition systems such as Whisper can transcribe laughter calls to some extent, they cannot distinguish the phonetic variants necessary for laughter synthesis. One potential solution is to fine-tune the model using richly annotated laughter data such as the one built in this study.
## 8 Acknowledgements
This work was supported by JSPS KAKENHI Grant Numbers 22K12107 and 22K18477.
Figure 7: Relationship between target and perceived emotion from synthesized laughter for (a) pleasantness, and (b) arousal. Points are horizontally jittered to avoid overplotting. |
2307.14230 | The nature of the X-ray sources in dwarf galaxies in nearby clusters
from the KIWICS | We present a deep search for and analysis of X-ray sources in a sample of
dwarf galaxies (M$_{r}$ < -15.5 mag) located within twelve galaxy clusters from
the Kapteyn IAC WEAVE INT Cluster Survey (KIWICS) of photometric observations
in the $\textit{r}$ and $\textit{g}$ using the Wide Field Camera (WFC) at the
2.5-m Isaac Newton telescope (INT). We first investigated the optical data,
identified 2720 dwarf galaxies in all fields and determined their
characteristics; namely, their colors, effective radii, and stellar masses. We
then searched the $\textit{Chandra}$ data archive for X-ray counterparts of
optically detected dwarf galaxies. We found a total of 20 X-ray emitting dwarf
galaxies, with X-ray flux ranging from 1.7$\times10^{-15}$ to
4.1$\times10^{-14}$ erg cm$^{-2}$ s$^{-1}$ and X-ray luminosities varying from
2$\times10^{39}$ to 5.4$\times10^{41}$ erg s$^{-1}$. Our results indicate that
the X-ray luminosity of the sources in our sample is larger than the Eddington
luminosity limit for a typical neutron star, even at the lowest observed
levels. This leads us to conclude that the sources emitting X-rays in our
sample are likely black holes. Additionally, we have employed a scaling
relation between black hole and stellar mass to estimate the masses of the
black holes in our sample, and have determined a range of black hole masses
from 4.6$\times10^{4}$ to 1.5$\times10^{6}$ M$_\odot$. Finally, we find a trend
between X-ray to optical flux ratio and X-ray flux. We discuss the implications
of our findings and highlight the importance of X-ray observations in studying
the properties of dwarf galaxies. | Şeyda Şen, Ersin Göğüş, Reynier F. Peletier, Nelvy Choque-Challapa, Amirnezam Amiri | 2023-07-26T14:55:52Z | http://arxiv.org/abs/2307.14230v1 | # The nature of the X-ray sources in dwarf galaxies in nearby clusters from the KIWICS
###### Abstract
We present a deep search for and analysis of X-ray sources in a sample of dwarf galaxies (M\({}_{r}\) < -15.5 mag) located within twelve galaxy clusters from the Kapteyn IAC WEAVE INT Cluster Survey (KIWICS) of photometric observations in the \(r\) and \(g\) using the Wide Field Camera (WFC) at the 2.5-m Isaac Newton telescope (INT). We first investigated the optical data, identified 2720 dwarf galaxies in all fields and determined their characteristics; namely, their colors, effective radii, and stellar masses. We then searched the _Chandra_ data archive for X-ray counterparts of optically detected dwarf galaxies. We found a total of 20 X-ray emitting dwarf galaxies, with X-ray flux ranging from 1.7\(\times\)10\({}^{-15}\) to 4.1\(\times\)10\({}^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) and X-ray luminosities varying from 2\(\times\)10\({}^{39}\) to 5.4\(\times\)10\({}^{41}\) erg s\({}^{-1}\). Our results indicate that the X-ray luminosity of the sources in our sample is larger than the Eddington luminosity limit for a typical neutron star, even at the lowest observed levels. This leads us to conclude that the sources emitting X-rays in our sample are likely black holes. Additionally, we have employed a scaling relation between black hole and stellar mass to estimate the masses of the black holes in our sample, and have determined a range of black hole masses from 4.6\(\times\)10\({}^{4}\) to 1.5\(\times\)10\({}^{6}\) M\(\odot\). Finally, we find a trend between X-ray to optical flux ratio and X-ray flux. We discuss the implications of our findings and highlight the importance of X-ray observations in studying the properties of dwarf galaxies.
keywords: galaxies: clusters: general - galaxies: dwarf - galaxies: evolution - X-rays: galaxies: clusters - astronomical data bases: surveys
## 1 Introduction
Galaxy clusters are the largest and most massive gravitationally-bound structures in the Universe. According to current cosmological theories of large scale formation, galaxy clusters have grown hierarchically by the merging of smaller virialized haloes (Press & Schechter, 1974; White & Rees, 1978; Blumenthal et al., 1984). They allow us to study numerous astrophysical phenomena, in particular the formation of early-type galaxies, whose fraction is much higher in clusters as determined by the morphology-density relation (Dressler, 1980). This relation also reveals that the fraction of dwarf galaxies is even higher.
Dwarf galaxies constitute the most numerous subset of galaxies in the Universe (Binggeli et al., 1988). However, the observed number of dwarfs is lower by about two orders of magnitude than expected from the current models of galaxy formation (Moore et al., 1999). This is the so-called missing satellite problem. Moreover, the processes driving the formation of dwarf galaxies and how the environment affects their evolution are still poorly understood.
The problem of dwarf invisibility is generally attributed to feedback processes which eject most of the baryons, thus making them difficult or impossible to detect. Silk & Mamon (2012) review three fundamental mechanisms for dwarf galaxy feedback: reionization of the Universe at early epochs, supernovae (SNe), and (ram pressure and tidal) stripping. Alternatively, active galactic nuclei (AGN) driven outflows from black holes (BHs) could contribute to the feedback mechanism. This scenario has been gaining support through in-depth studies in recent years (Silk & Nusser, 2010). However, none of these models has so far provided a clear solution. A more accurate census of dwarf galaxies and better knowledge of their properties will allow us to better understand their formation mechanisms and can be a benchmark for our cosmological models.
In massive galaxies, the AGN feedback mechanism has been added as a regular component of galaxy formation models, since it seems that every massive galaxy hosts a supermassive black hole (SMBH) in its centre (Kormendy & Richstone, 1995). In dwarf galaxies, on the other
hand, the presence of massive BHs (mBH; \(M_{BH}=10^{4}\sim 10^{6}~{}M_{\sun}\); Greene et al.2020; Mezcua2017) is becoming increasingly evident, thanks to studies ranging from individual galaxies to large scale surveys (e.g. Reines et al.2011, 2013; Moran et al.2014; Mezcua & Dominguez Sanchez2020; Birchall et al.2020). However, the occupation fraction of BHs in dwarf galaxies is still debated. The best evidence for BHs in dwarf galaxies comes from X-ray studies (Kormendy & Ho 2013; Pardo et al.2016) in addition to optical emission line studies (Baldassare et al.2016). These investigations indicate that AGNs are found in roughly one percent of dwarf galaxies.
X-ray observations are ideal for the detection of galaxy clusters since the X-ray emission is proportional to the square of the gas density, making it easier to identify clusters with cool cores. In addition, the projection effects can be completely overcome in the X-ray band. The first cluster samples were compiled from the first all-sky X-ray survey with _Uhuru_, and subsequent observations found more objects with _HEAO-1_, _Ariel-V_, _Einstein_, _EXOSAT_ and, as a centerpiece, _ROSAT_. Later deep pointed observations with the current generation of X-ray satellites _XMM-Newton_, _Chandra1_ and _Suzaku_ have remarkably changed our X-ray view of clusters and their galaxies, allowing us to investigate their evolutionary properties (see Rosati et al.2002, for a review).
Footnote 1: To see a list of recent and ongoing Chandra and XMM Newton surveys [http://cxc.harvard.edu/xrayssurveys/surveys.html](http://cxc.harvard.edu/xrayssurveys/surveys.html)
The resolving power of _Chandra_ and the low background of its ACIS instrument provide an ideal combination for detecting faint sources in nearby galaxies. Several studies have taken advantage of this to study the fraction of AGNs in nearby late-type galaxies and have successfully detected X-ray nuclei in star-forming galaxies (Ghosh et al.2008; Grier et al.2011; She et al.2017a), late-type bulgeless spirals (She et al.2017b) and dwarf irregular galaxies (Lemons et al.2015). For the local AGNs selected from the Sloan Digital Sky Survey (SDSS), more attention in X-rays has been devoted to performing detailed spectral and timing analyses using sufficiently deep observations (e.g. Moran et al.2005; Greene & Ho 2007; Dong et al.2012; Jin et al.2016). In addition, based mainly on archival observations, detailed characterizations of X-ray nuclei in nearby, lower mass early-type galaxies in the Virgo cluster (Ghosh et al.2008; Gallo et al.2010) and the Fornax cluster (Lee et al.2019) were performed.
In this paper, we performed one of the first investigations to unveil X-ray sources within dwarf galaxies in nearby galaxy clusters. Our search for X-ray emission using deep archival Chandra observations focused on dwarf galaxies up to a redshift of z \(\sim\) 0.03. These dwarf galaxies were identified with optical data in order to shed some light on galaxy evolution and transformation processes. It is clear that studying dwarf galaxies in cluster environments of different properties in both optical and X-rays would yield a deeper understanding of their evolution.
Here, we use a sample of twelve nearby clusters from Kapteyn IAC WEAVE INT Cluster Survey (KIWICS) and present the results of our comprehensive investigations to uncover X-ray emission from dwarf galaxies in these galaxy clusters. In the next section, we describe optical and X-ray observations and explain our methodology to reduce them. In Section 3, we present the main results. We conclude, in section 4, with a discussion of these results and a comparison of X-rays versus optical data.
Throughout this paper, we assume a \(\Lambda\)CDM cosmology with \(\Omega_{m}\) = 0.3, \(\Omega_{\lambda}\) = 0.7, and H\({}_{0}\) = 70 kms\({}^{-1}\)Mpc\({}^{-1}\).
## 2 Observations and Data Reduction
### Optical Observations
Our optical observational data come from a deep photometric survey of galaxy clusters: The Kapteyn IAC WEAVE INT Cluster Survey (KIWICS, PIs R. Peletier and J.A. Lopez-Aguerri). This survey consists of imaging of 47 X-ray selected, nearby (0.02 \(<\)\(z\)\(<\) 0.04) galaxy clusters (Piffaretti et al.2011) in the Northern hemisphere, and is ideal for studying dwarf and low-surface-brightness (LSB) galaxies. All observational data were obtained using the Wide Field Camera (WFC) at the 2.5-m Isaac Newton Telescope (INT) in La Palma, Spain. The full KIWICS sample was selected as a preparation for the future spectroscopic WEAVE Cluster Survey to be carried out with the WHT Enhanced Area Velocity Explorer (WEAVE) spectrograph (Dalton et al.2016). We use the two broadband Sloan filters \(g\) and \(r\) with total integration times of \(\sim\) 1800 s and \(\sim\) 5400 s, respectively. The observations cover each cluster up to at least 1 \(R_{200}\) with a dithering pattern of individual exposures of 210 s. A comprehensive description of the observational strategy and data reduction processes can be found in Mancera Pina et al. (2018, 2019) and Choque-Challapa et al. (2021). Here, we briefly summarise the main aspects.
The data reduction was done using the Astro-WISE (McFarland et al.2013) environment, following the same routine as explained there. The data reduction was performed in two main steps; the first step contains the standard instrumental corrections, namely bias subtraction and flat-fielding. At this stage, weight maps were also generated for each frame. These weight maps contain information about bad pixels or saturated pixels (from hot and cold pixel maps), as well as the expected noise associated with each pixel and cosmic rays. The second step deals with the sky subtraction, after which astrometric and photometric corrections were applied. For this purpose, a set of standard stars was observed during each night of the observations and calibrated against the SDSS DR14 catalogue (Abolfathi et al.2018). Finally, all the cluster frames were corrected for bad pixels and cosmic rays. The astrometric solutions were computed by making use of the publicly available software SCAMP (Bertin2006). The astrometry of our final mosaic has an rms of \(\sim\) 0.2\(\arcsec\). The individual frames are median-stacked to produce a deep coadded mosaic, re-sampled to a scale of 0.2\(\arcsec\) per pixel.
The galaxy clusters studied here come from the sample of Choque-Challapa et al. (2021) that have a seeing \(<\) 1.6\(\arcsec\) FWHM with redshift lower than 0.03. In this study, we analyse dwarf galaxies in these clusters by combining optical and X-ray data. To classify objects
Figure 1: Sky map of all of the clusters being surveyed in KIWICS; those studied in this work are represented by red star symbols.
as dwarf galaxies, we employed a criterion of M\({}_{r}>-19.0\) mag. This threshold adheres to the established convention, as defined by Binggeli et al. (1988), which designates dwarf galaxies as those with M\({}_{B}>-18.0\) mag and assumes a color index \(B\) - \(r\sim 1.0\) mag for such galaxies. Additionally, we implemented a lower luminosity cutoff at M\({}_{r}=-15.5\) mag. Galaxies fainter than this limit were excluded from our analysis to reduce potential contamination from background objects.
#### 2.1.1 Identification of dwarf galaxies
We use SExtractor (Bertin and Arnouts, 1996; Holwerda, 2005), based on the stellarity CLASS_STAR and FLAG parameters, to detect potential dwarf candidates. The CLASS_STAR parameter ranges from 0 to 1; objects close to 0 are likely to be extended objects and those close to 1 are more likely to be point sources. Galaxies are defined as objects with CLASS_STAR \(\leq 0.2\) in both \(g\) and \(r\) filters, and with FLAG\({}_{r}=0\). This criterion is used to exclude objects that are either blended with other nearby objects or have poor quality photometry.
To effectively eliminate background objects from the SExtractor catalogue, we first performed a cut based on the color \(g\) - \(r\leq 1.0\) mag to exclude all galaxies redder than this limit. This limit corresponds to a 12 Gyr old stellar population with supersolar metallicity (Worthey, 1994). As a final step in our selection of the dwarf candidates, a visual cleaning was done in order to remove artefacts, if present. We exclude all objects that are background galaxies, artefacts, interacting systems, or fainter than \(m_{r}=20\) mag, as their visual selection becomes inaccurate (see Section 3 in Choque-Challapa et al., 2021, for details). In the end, we find a total of 2720 dwarf galaxies detected in all of these 12 clusters of the KIWICS survey fields. As an example, we present the dwarf galaxies in A1367 in Figure 2.
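The selection described above amounts to a handful of catalogue cuts. The following sketch illustrates them on a SExtractor-style table; it is not the KIWICS pipeline, and the column names (CLASS_STAR_g, FLAGS_r, etc.) are hypothetical placeholders for the standard SExtractor outputs.

```python
from astropy.table import Table

def select_dwarf_candidates(cat):
    """Apply the dwarf-candidate cuts of Section 2.1.1 to a SExtractor-style
    catalogue (hypothetical column names standing in for SExtractor outputs)."""
    extended = (cat['CLASS_STAR_g'] <= 0.2) & (cat['CLASS_STAR_r'] <= 0.2)
    clean_phot = cat['FLAGS_r'] == 0                          # unblended, good photometry
    blue_enough = (cat['g_mag'] - cat['r_mag']) <= 1.0        # g - r <= 1.0 mag
    dwarf_lum = (cat['M_r'] > -19.0) & (cat['M_r'] <= -15.5)  # dwarf luminosity range
    bright_enough = cat['r_mag'] <= 20.0                      # reliable visual inspection
    return cat[extended & clean_phot & blue_enough & dwarf_lum & bright_enough]

# Tiny synthetic example: the first row passes all cuts, the second is a bright point source.
cat = Table({'CLASS_STAR_g': [0.05, 0.95], 'CLASS_STAR_r': [0.08, 0.97],
             'FLAGS_r': [0, 0], 'g_mag': [18.3, 16.9], 'r_mag': [17.6, 16.4],
             'M_r': [-16.8, -20.3]})
print(select_dwarf_candidates(cat))
```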
In order to obtain more accurate photometric measurements for the identified galaxies, we used GALFIT (Peng et al., 2010) to fit their light profiles. The process was carried out according to the methodology outlined in Venhola et al. (2018) for all probable cluster members, excluding objects with an SExtractor ISOAREA_IMAGE value of less than 200 pixels. This exclusion was primarily due to the fact that smaller objects tend to result in poor fits due to their faintness (apparent \(r\)-band magnitudes fainter than 22 mag). We used the central coordinates, isophotal magnitudes, and semi-major axis lengths (all obtained using SExtractor) as inputs for the photometric investigation. We adopt the non-circularized effective radius, which represents the length of the semimajor axis of an ellipse that best fits the isophotes enclosing half of the total light emitted by a galaxy. A single Sersic function was fitted to each object in both the \(r\) and \(g\) bands. However, due to the limited resolution of our images, it was not possible to discern a nucleus in the galaxies. For objects without a clear nucleus, the centre coordinates were kept fixed when determining the SExtractor magnitudes.
To provide a comprehensive analysis, we also estimate the stellar masses of the sample galaxies. To accomplish this, we use the relation between the (g\(-\)r) color and mass-to-light (M/L) ratio from Roediger and Courteau (2015):
\[\log(M_{*}/L_{r})=1.629\times(g-r)-0.792 \tag{1}\]
Our full dwarf sample and the X-ray emitting dwarf sample have median (mean) stellar masses of 4.93 (5.33) \(\times 10^{8}\)\(M_{\sun}\) and 7.16 (10.90) \(\times 10^{8}\)\(M_{\sun}\), respectively.
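For concreteness, Equation (1) can be combined with the absolute \(r\)-band magnitude to obtain a stellar mass; the sketch below only illustrates that arithmetic (it is not the authors' code), and the solar absolute \(r\)-band magnitude of 4.65 is an assumed fiducial value.

```python
import numpy as np

M_SUN_R = 4.65  # assumed absolute r-band magnitude of the Sun (fiducial value)

def stellar_mass(g_minus_r, M_r):
    """Stellar mass in solar units from Eq. (1), log(M*/L_r) = 1.629*(g-r) - 0.792,
    combined with the r-band luminosity implied by the absolute magnitude M_r."""
    log_ml = 1.629 * g_minus_r - 0.792            # log10 mass-to-light ratio
    L_r = 10.0 ** (-0.4 * (M_r - M_SUN_R))        # r-band luminosity (solar units)
    return 10.0 ** log_ml * L_r

# A1367-1 from Table 4 (g-r = 0.68, M_r = -18.40) gives log M* close to the tabulated 9.55:
print(np.log10(stellar_mass(0.68, -18.40)))
```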
### X-ray Observations
We searched the Chandra data archive for X-ray observations covering the fields of our sample of 12 galaxy clusters with minimum exposure times of 10 ks in order to ensure significant detections. We find that one cluster (RXCJ1714.3+4341) was not observed with Chandra. The exposure times of the X-ray observations of two clusters, namely RXCJ0919.8+3345 and ZwCl1665, were 3 and 2 ks, respectively. Therefore, these two observations were not used. The remaining nine clusters were found to have deep enough Chandra observations in one or multiple pointings. We list the observation IDs of these data sets in Table 2.
The Chandra data were reduced using the Chandra Interactive Analysis of Observations (ciao) software version 4.13 with CALDB version 4.9.5. As the focal plane instruments, we employed data recorded with the back-illuminated ACIS-S chip (S3) or the four front-illuminated chips of ACIS-I (I0-I3). Note that the other chips on ACIS-S were generally not useful for our purposes due to the fact that the point spread function (PSF) becomes large at large off-axis angles.
To identify optical counterparts to our Chandra X-ray data, we place both the Chandra and KIWICS data onto the International Celestial Reference System (ICRS) by matching stars or background galaxies to the Two Micron All-Sky Survey Point Source Catalog (2MASS-PSC, Skrutskie et al., 2006). We have already performed astrometry for the KIWICS data (see Section 2.1). In order to improve the astrometry of the Chandra/ACIS images relative to the KIWICS images, we use our own pipeline, which aligns the images using the astroalign Python package (Beroiz et al., 2020) and the CIAO task wcs_match. Using the transformation matrices obtained from the analysis, the aspect solution files and the coordinate parameters are updated in all of the X-ray event files. We set an upper bound for the rms residual between optical and X-ray positions of \(\sim 0.02^{\prime\prime}\).
We identified X-ray sources within our aligned Chandra images using the wavdetect tool of CIAO, a Mexican-hat wavelet-based source detection algorithm, which we ran on the full energy band (\(0.5-10\) keV) with the false detection threshold set to \(10^{-6}\). The wavelet scales were set to 1.0, 2.0, 4.0 and 8.0 pixels. We detected 487 X-ray sources in all nine fields that we investigated. Finally, we obtained matching pairs between the coordinates of the Chandra X-ray sources and the optical dwarf source lists by requiring the two positions to be at most \(0.5^{\prime\prime}\) apart. In total, 20 dwarf galaxy sources were detected in X-rays. We list these 20 sources in Table 2. We also illustrate the optical and X-ray paired galaxies in A1367 in Figure 2.
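The final positional pairing can be reproduced with standard astropy utilities; the snippet below is a minimal sketch with made-up coordinates, not the matching code actually used.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_xray_to_optical(xray_ra, xray_dec, opt_ra, opt_dec, max_sep=0.5 * u.arcsec):
    """Return (i_xray, i_optical) index pairs with angular separation below max_sep."""
    xray = SkyCoord(ra=xray_ra * u.deg, dec=xray_dec * u.deg)
    opt = SkyCoord(ra=opt_ra * u.deg, dec=opt_dec * u.deg)
    idx, sep2d, _ = xray.match_to_catalog_sky(opt)  # nearest optical neighbour of each X-ray source
    return [(i, int(j)) for i, (j, sep) in enumerate(zip(idx, sep2d)) if sep < max_sep]

# Example: only the first X-ray source has an optical counterpart within 0.5 arcsec.
print(match_xray_to_optical([176.152, 180.000], [19.893, 20.000],
                            [176.15201, 150.000], [19.89301, 10.000]))
```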
For each of these X-ray emitting systems, we performed X-ray spectral analysis to determine their X-ray flux as follows: We extracted source spectra for 18 sources from a circular region of \(10^{\prime\prime}\) radius centered at their X-ray positions listed in Table 3. For two sources, in particular A262\(-\)2 and RXCJ2214\(-\)1, the radius was set to be \(12^{\prime\prime}\) due to their extended nature. We then grouped these source spectra in order for each spectral bin to contain 10 source counts. The background spectra were extracted from circular aperture of the same radii from nearby source-free regions. To take into account time-dependent and position-dependent ACIS responses, the corresponding response and ancillary response files are also extracted per observation. For the five sources in the A1367 galaxy cluster for which there were multiple Chandra observations, we modeled these multiple spectra for each source simultaneously by linking the power law index parameters so that the fit would yield a joint power law index for each source.
We fit each background-subtracted spectrum with a power-law model attenuated by interstellar hydrogen absorption using XSPEC (Arnaud, 1996). We fix the HI column density \(N_{H}\) at the corresponding Galactic absorption value towards these clusters2. We initially allowed the power-law index to vary in the fits. We find that the spectra of 12 sources can be represented with the power-law model, whose indices range from about 1 to 2.5 (see Table 3). However, the index parameter for the spectra of the eight remaining X-ray sources could not be constrained. In those cases, we fixed the power-law index at the average value of the constrained indices of the other cluster members. We finally used the best-fit parameters to calculate the X-ray flux in the 0.5\(-\)10 keV band (see Table 3). We list the \(g-r\) color, M\({}_{r}\) magnitude, effective radius, as well as stellar mass (M\({}_{*}\)) estimates of these 20 X-ray emitting dwarf galaxies in Table 4.
Footnote 2: Obtained from NASAβs HEASARC N\({}_{H}\) tool, [https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3ah.pl](https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3ah.pl)
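For reference, the band flux implied by a fitted power law follows from elementary integration; the sketch below illustrates that step and the conversion to an isotropic luminosity. It is not a substitute for the XSPEC calculation, and the normalisation and distance in the example are made-up illustrative numbers.

```python
import numpy as np

KEV_TO_ERG = 1.602e-9  # erg per keV

def powerlaw_band_flux(K, gamma, e_lo=0.5, e_hi=10.0):
    """Energy flux (erg cm^-2 s^-1) of a photon spectrum N(E) = K * E**(-gamma)
    between e_lo and e_hi keV, with K in photons cm^-2 s^-1 keV^-1 at 1 keV."""
    if np.isclose(gamma, 2.0):
        integral = K * np.log(e_hi / e_lo)
    else:
        integral = K * (e_hi ** (2.0 - gamma) - e_lo ** (2.0 - gamma)) / (2.0 - gamma)
    return integral * KEV_TO_ERG

def isotropic_luminosity(flux, d_mpc):
    """Luminosity (erg s^-1) for a flux (erg cm^-2 s^-1) at a distance d_mpc (Mpc)."""
    d_cm = d_mpc * 3.086e24
    return 4.0 * np.pi * d_cm ** 2 * flux

# Illustrative only: a Gamma = 1.75 power law with an arbitrary normalisation at ~90 Mpc.
f = powerlaw_band_flux(K=5e-6, gamma=1.75)
print(f, isotropic_luminosity(f, d_mpc=90.0))
```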
## 3 Results
We present the color-magnitude diagram (CMD) of all optically detected systems in Figure 3. We also indicate those 20 galaxies emitting X-rays on the same Figure. We find that the X-ray emitting dwarfs cover this range of magnitude almost uniformly. On the other hand, the g\(-\)r colors of these dwarfs lie in a rather narrow interval between 0.55 and 0.85. Note that for the g\(-\)r color outlier at 0.40 (RXCJ0751\(-\)2), the estimated effective radius is one of the largest. Excluding this system, we observe a nearly linear trend between the g\(-\)r color and absolute magnitude. A linear fit to the remaining 19 yields a slope of \(-0.52\pm 0.02\), which defines the red sequence. We compare these relations with early- and late-type dwarf galaxies from the Fornax Deep Survey (FDS, Venhola et al., 2018). One clearly sees that the early-type dwarfs and our X-ray emitting samples form a red sequence, while the late-type dwarfs are situated in a blue cloud below it. Our outlier dwarf galaxy is located within the region occupied by FDS late-type dwarf galaxies in Figure 3.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Name & RA & Dec & Redshift & \(\sigma_{\rm e}\) & Seeing \(r\)-band & Seeing \(g\)-band & \(M_{500}\) & \(R_{500}\) \\ & (deg) & (deg) & z & (km s\({}^{-1}\)) & (\({}^{\prime}\)) & (\({}^{\prime}\)) & \(10^{14}\)\(M_{\sun}\) & Mpc \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline A1367 & 176.152 & 19.759 & 0.0214 & 581 \(\pm\) 64 & 1.5 & 1.6 & 2.14 & 0.90 \\ A262 & 28.188 & 36.157 & 0.0163 & 402 \(\pm\) 24 & 1.5 & 1.4 & 1.19 & 0.74 \\ RXJ0123.2+3327 & 20.801 & 33.461 & 0.0146 & 483 \(\pm\) 95 & 1.6 & 1.3 & 0.36 & 0.50 \\ RXJ0123.6+3315 & 20.921 & 33.261 & 0.0164 & 483 \(\pm\) 95 & 1.6 & 1.3 & 0.61 & 0.60 \\ RXCJ0751.3+5012 & 117.844 & 50.213 & 0.0228 & 360 \(\pm\) 24 & 1.3 & 1.3 & 0.42 & 0.52 \\ RXCJ1206.6+2811 & 181.656 & 28.184 & 0.0283 & 381 \(\pm\) 42 & 1.5 & 1.4 & 0.42 & 0.52 \\ RXCJ1223.1+1037 & 185.777 & 10.624 & 0.0258 & 302 \(\pm\) 12 & 1.6 & 1.5 & 0.56 & 0.58 \\ RXCJ1715.3+5724 & 258.841 & 57.408 & 0.0276 & 475 \(\pm\) 29 & 1.3 & 1.5 & 0.87 & 0.67 \\ RXCJ0199.8+3345 & 139.955 & 33.760 & 0.0230 & 299 \(\pm\) 19 & 1.4 & 1.4 & 0.26 & 0.45 \\ RXCJ1714.3+3441 & 258.578 & 43.690 & 0.0276 & 176 \(\pm\) 22 & 1.3 & 1.5 & 0.31 & 0.48 \\ RXCJ2214.8+1350 & 333.720 & 13.847 & 0.0253 & 351 \(\pm\) 07 & 1.3 & 1.3 & 0.32 & 0.48 \\ ZwCL1665 & 125.798 & 04.356 & 0.0293 & 382 \(\pm\) 15 & 1.3 & 1.4 & 0.73 & 0.63 \\ \hline \end{tabular} Note. (1) galaxy cluster name; (2) and (3) right ascension and declination in J 2000; (4) and (5) redshift and velocity dispersion from Choque-Challapa et al. (2021); (6) and (7) mean seeing during the observation in the \(r\) and \(g\) band; (8) and (9) mass (\(M_{500}\)) and radius (\(R_{500}\)) from Piffaretti et al. (2011).
\end{table}
Table 1: Properties of our sample of galaxy clusters.
Figure 2: (Left) Map of the A1367 cluster. The black symbols correspond to dwarf galaxies identified in optical. The red stars represent matching galaxies with X-rays. The green cross shows the X-ray center from Ebeling et al. (1998), the green dotted circle indicates the estimated R\({}_{200}\) radius, and the blue area indicates the combined field covered by multiple Chandra observations. (Right) Zoomed in view of the X-ray centre of the galaxy cluster.
Figure 4: Colorβstellar mass _(top)_ and stellar massβMr diagrams _(middle)_ for the identified dwarf galaxies in all 12 clusters. Red points represent those with paired X-ray emission. Black hole mass as a function of stellar mass _(bottom)_ with three different assumptions.
Figure 3: ColorβM\({}_{r}\), Colorβ\(R_{e}\) and \(R_{e}-\)M\({}_{r}\) diagram for the identified dwarf galaxies in all 12 clusters. Red points represent those with paired X-ray emission. Turquoise and blue symbols show the FDS early- and late-type dwarfs, respectively.
We present the color vs. effective radius behavior of the optically identified dwarf galaxies in Figure 3 (middle). We find that 80% of the objects detected in X-rays have a size smaller than 1.5 kpc. They also appear to be clustered around \(R_{e}\sim\) 1.2 kpc, which is in general agreement with the size distribution of dwarfs. Also, in Figure 3 (bottom), we show effective radius against absolute magnitudes of galaxies. It is noteworthy that the clustering of dwarfs towards smaller \(R_{e}\) and high M\({}_{r}\) is not followed by those dwarfs identified in X-rays, indicating that the detected objects have a range in surface brightness (or density).
We plot stellar mass as a function of color and of M\({}_{r}\) in the top and middle panels of Figure 4, respectively. The stellar masses of our X-ray emitting dwarf sample range from \(\sim\) 2\(\times\)10\({}^{8}\)\(M_{\sun}\) to 6 \(\times\)10\({}^{9}\)\(M_{\sun}\). We find no X-ray emitting dwarf galaxies below the galaxy mass of \(M_{*}\sim 10^{8.3}M_{\sun}\).
We also studied the X-ray to optical emission (X/O) ratio as introduced by Maccacaro et al. (1988) as
\[X/O=\log(f_{X}/f_{opt})=\log(f_{X})+C+m_{r}/2.5 \tag{2}\]
where \(f_{X}\) is the X-ray flux in the given energy band in erg cm\({}^{-2}\) s\({}^{-1}\), \(m_{r}\) is the apparent magnitude in the chosen optical band, and C3 is a constant which depends on the specific filter used in the optical observations. We present X/O values of these 20 systems against their X-ray flux in the 0.5\(-\)10 keV band in Figure 5. We find that all X/O ratios in our sample are less than \(-\)0.25. Moreover, there is a positive correlation between the X/O values and the flux, with a Spearman's rank order correlation coefficient r of 0.70 and a chance occurrence probability P of 4.5\(\times\)10\({}^{-4}\). A linear trend fit to these data points yields a slope of 0.69\(\pm\)0.19. We also investigated the trends of X/O and O/X with their corresponding optical flux values. We find that both ratios vary significantly with increasing optical flux as well. Therefore, X/O vs. X-ray flux is an indicative probe for both X-ray and optical emission.
Footnote 3: It is taken as 5.57 for r band from Haggard et al. (2010).
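Equation (2) and the quoted rank correlation are straightforward to evaluate; the sketch below uses C = 5.57 for the \(r\) band (footnote 3) but placeholder flux and magnitude arrays rather than the values in the tables.

```python
import numpy as np
from scipy.stats import spearmanr

C_R = 5.57  # r-band constant in Eq. (2), from Haggard et al. (2010)

def x_over_o(f_x, m_r, c=C_R):
    """X/O = log10(f_X) + C + m_r / 2.5, with f_X in erg cm^-2 s^-1."""
    return np.log10(f_x) + c + m_r / 2.5

# Placeholder values for illustration only (not the 20 sources of Tables 3 and 4):
f_x = np.array([4.0e-14, 2.1e-15, 8.5e-15, 1.2e-14])
m_r = np.array([16.5, 18.9, 17.8, 17.2])

ratios = x_over_o(f_x, m_r)
rho, p_value = spearmanr(np.log10(f_x), ratios)          # rank correlation with X-ray flux
slope, intercept = np.polyfit(np.log10(f_x), ratios, 1)  # linear trend, cf. slope 0.69 +/- 0.19
print(ratios, rho, p_value, slope)
```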
An important advantage of BH-galaxy scaling relationships is their ability to estimate the mass of a black hole from readily measurable galaxy properties. This feature makes them a valuable tool for investigating BH properties and their role in galaxy evolution. In our case, however, it is not possible to estimate the mass of a black hole (M\({}_{BH}\)) using the relationship between M\({}_{BH}\) and stellar velocity dispersion (\(\sigma_{*}\)) (e.g., Ferrarese & Merritt, 2000; Gultekin et al., 2009; Beifiori et al., 2012) due to the lack of available \(\sigma_{*}\) measurements for our sample. Previous studies have shown a strong positive correlation between M\({}_{BH}\) and M\({}_{*}\)(Reines & Volonteri, 2015; Shankar et al., 2016; Suh et al., 2020). Here, we evaluate the relationship between M\({}_{BH}\) and M\({}_{*}\) under three different assumptions. Reines & Volonteri (2015) suggested a relation between the masses of black holes and their host galaxies (see their equation 5) using a sample of nearby inactive early-type galaxies and local AGN. They report that their M\({}_{BH}\) measurements have errors of \(\sim\)0.5 dex. Shankar et al. (2016) used a combination of dynamical modeling and the virial method to calculate black hole masses (see their equation 6). Suh et al. (2020) investigated the relationship between black hole mass and galaxy total stellar mass up to a redshift z \(\sim\) 2.5 for a sample of 100 X-ray-selected AGN and provide a relation via their equation 2.
Note that their sample of galaxies has total stellar masses of 10\({}^{11-12}M_{\sun}\). We present \(M_{BH}\) estimates from these three approaches for the sample of 20 X-ray emitting galaxies in the bottom panel of Figure 4. It is important to note that the \(M_{BH}\) vs. \(M_{*}\) relation for dwarf galaxies is not well established. Nevertheless, the sample of Reines & Volonteri (2015) is the only one that includes dwarf galaxies and extends to lower galaxy masses. Therefore, we employed the relation of Reines & Volonteri (2015) and obtain an average black hole mass of 2.4\(\times\)10\({}^{5}M_{\sun}\) for our 20 galaxy sources (see Figure 6).
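As a concrete illustration of this last step, the Reines & Volonteri (2015) relation can be applied directly to the stellar masses of Table 4. The coefficients used below (normalisation 7.45 and slope 1.05) are the values commonly quoted for their equation 5; they are assumptions here and should be checked against the original paper before reuse.

```python
import numpy as np

def mbh_reines_volonteri(m_star, alpha=7.45, beta=1.05):
    """Black hole mass (M_sun) from log10(M_BH) = alpha + beta * log10(M_*/1e11 M_sun);
    alpha and beta are assumed values for the Reines & Volonteri (2015) AGN relation."""
    return 10.0 ** (alpha + beta * np.log10(m_star / 1.0e11))

# Stellar-mass range of the X-ray emitting dwarfs discussed in the text:
for m_star in (2.0e8, 1.09e9, 6.0e9):
    print(f"M* = {m_star:.2e} M_sun  ->  M_BH ~ {mbh_reines_volonteri(m_star):.1e} M_sun")
```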
\begin{table}
\begin{tabular}{l c c c c} \hline Name & ObsID & Start Date and Time & Exposure & Count Rate\({}^{(1)}\) \\ & & (UTC) & (ks) & (10\({}^{-4}\) c s\({}^{-1}\)) \\ \hline A1367-1 & 4189 & 2003-01-24 10:29:13 & 48 & 25.2 \(\pm\)3.0 \\ & 17199 & 2015-01-30 21:05:47 & 38 & \\ & 17201 & 2016-01-31 13:37:06 & 61 & \\ A1367-2 & 514 & 2000-02-26 10:44:03 & 41 & 18.6 \(\pm\)3.6 \\ & 17199 & 2015-01-30 21:05:47 & 38 & \\ & 17200 & 2015-01-30 05:05:64:2 & 40 & \\ & 17201 & 2016-01-31 13:37:06 & 61 & \\ A1367-3 & 17199 & 2015-01-30 21:05:47 & 38 & 6.4 \(\pm\)2.5 \\ & 17200 & 2015-11-05 00:56:42 & 40 & \\ A1367-4 & 17199 & 2015-01-30 21:05:47 & 38 & 6.2 \(\pm\)2.4 \\ & 17200 & 2015-11-05 00:56:42 & 40 & \\ & 17201 & 2016-01-31 13:37:06 & 61 & \\ A1367-5 & 17199 & 2015-01-30 21:05:47 & 38 & 10.8 \(\pm\)3.0 \\ & 17200 & 2015-11-05 00:56:42 & 40 & \\ & 17201 & 2016-01-31 13:37:06 & 61 & \\ A1367-5 & 17199 & 2015-01-30 21:05:47 & 38 & 10.8 \(\pm\)3.0 \\ & 17200 & 2015-11-05 00:56:42 & 40 & \\ & 17201 & 2016-01-31 13:37:06 & 61 & \\ A262-1 & 7921 & 2006-11-20 03:35:12 & 111 & 54.4 \(\pm\)3.6 \\ A262-2 & 7921 & 2006-11-20 03:35:12 & 111 & 8.8 \(\pm\)2.0 \\ A262-3 & 7921 & 2006-11-20 03:35:12 & 111 & 7.4 \(\pm\)3.1 \\ RX01023-1 & 2882 & 2002-01-08 04:47:16 & 44 & 42.6 \(\pm\)5.0 \\ RXCJ0751-1 & 15170 & 2013-05-14 07:59:03 & 98 & 7.6 \(\pm\)1.2 \\ RXCJ0751-1 & 15170 & 2013-05-14 07:59:03 & 98 & 2.9 \(\pm\)1.3 \\ RXCJ0751-3 & 15170 & 2013-05-14 07:59:03 & 98 & 2.3 \(\pm\)0.9 \\ RXCJ1206-1 & 6939 & 2006-01-26 02:02:12 & 36 & 20.6 \(\pm\)3.7 \\ RXCJ1223-1 & 3232 & 2003-02-04 16:09:47 & 30 & 15.6 \(\pm\)3.6 \\ RXCJ1715-1 & 4194 & 2003-09-17 08:33:38 & 47 & 2.5 \(\pm\)2.4 \\ RXCJ1715-2 & 4194 & 2003-09-17 08:33:38 & 47 & 3.8 \(\pm\)2.0 \\ RXCJ1715-3 & 4194 & 2003-09-17 08:33:38 & 47 & 13.1 \(\pm\)2.3 \\ RXCJ2214-1 & 6392 & 2006-01-12 22:33:07 & 33 & 6.1 \(\pm\)3.0 \\ RXCJ2214-2 & 6392 & 2006-01-12 22:33:07 & 33 & 8.3 \(\pm\)2.4 \\ RXCJ2214-3 & 6392 & 2006-01-12 22:33:07 & 33 & 9.8 \(\pm\)2.2 \\ \hline \end{tabular} Note. (1) Background subtracted count rates in the 0.5\(-\)10 keV band.
\end{table}
Table 2: Details of Chandra X-Ray Observations for the 20 X-ray Emitting Galaxies
Figure 5: Plot of X-ray to optical flux ratio vs. the X-ray flux in the 0.5\(-\)10 keV band. The solid line is the best fit linear trend to these data points. The horizontal dotted line marks the level of equal X-ray and optical fluxes.
## 4 Discussion and Conclusions
Through our investigation of the correlation between optical and X-ray data of a sample of nearby galaxy clusters, we identified 2720 dwarf galaxies, twenty of which were found to emit also in X-rays. Earlier multiband investigations by Pardo et al. (2016) revealed X-ray emission from ten dwarf galaxies at z \(<\) 1 out of 605 in the optical. They estimate an AGN fraction of \(\sim\) 1% for their sample of dwarf galaxies. They identified AGN candidates based on their hardness ratios and a hard X-ray photon index in the energy range between 2 and 10 keV. Reines et al. (2013) studied a sample of 151 dwarf galaxies (mass range from 10\({}^{8.5}\) to 10\({}^{9.5}\) M\(\odot\), z \(<\) 0.055) with narrow and/or broad emission line signatures indicating the presence of an AGN and identified 10 dwarf galaxies with both narrow and broad emission line AGN signatures. Lemons et al. (2015) cross-matched the Reines et al. (2013) parent sample with the Chandra Source Catalog and found 8 systems with nuclear hard X-ray emission at levels higher than expected from low-mass and high-mass X-ray binaries. Miller et al. (2015) is another study that used a combination of optical and X-ray data to identify AGN in dwarf galaxies. They conducted a study on approximately 200 early-type dwarf galaxies in the local universe. These galaxies were selected optically using Hubble Space Telescope imaging as part of the AGN Multi-wavelength Survey of Early-type galaxies (AMUSE) surveys (see Gallo et al., 2008; Miller et al., 2012, for details on these surveys). They identified a total of 23 AGN in their samples with a mass range from about 10\({}^{7}\) to 10\({}^{9}\) M\(\odot\). Recently, Chen et al. (2017) identified ten AGNs in low-mass galax
\begin{table}
\begin{tabular}{l c c c c} \hline Name & \(g-r\) & M\({}_{r}\) & \(R_{e}\) & \(log\)\(M_{*}\) \\ & (mag) & (mag) & (kpc) & (\(M_{\odot}\)) \\ \hline A1367-1 & 0.68 \(\pm\)0.04 & -18.40 \(\pm\)0.09 & 1.02 \(\pm\)0.01 & 9.55 \\ A1367-2 & 0.68 \(\pm\)0.03 & -16.08 \(\pm\)0.22 & 0.88 \(\pm\)0.01 & 8.73 \\ A1367-3 & 0.70 \(\pm\)0.03 & -15.56 \(\pm\)0.25 & 2.39 \(\pm\)0.07 & 8.62 \\ A1367-4 & 0.69 \(\pm\)0.03 & -16.34 \(\pm\)0.13 & 1.31 \(\pm\)0.02 & 8.48 \\ A1367-5 & 0.68 \(\pm\)0.06 & -18.96 \(\pm\)0.14 & 0.71 \(\pm\)0.01 & 8.43 \\ A262-1 & 0.76 \(\pm\)0.04 & -18.65 \(\pm\)0.11 & 0.93 \(\pm\)0.00 & 9.78 \\ A262-2 & 0.55 \(\pm\)0.05 & -15.93 \(\pm\)0.24 & 0.70 \(\pm\)0.01 & 8.83 \\ A262-3 & 0.63 \(\pm\)0.03 & -16.81 \(\pm\)0.16 & 1.11 \(\pm\)0.09 & 8.35 \\ RXJ1023-1 & 0.89 \(\pm\)0.04 & -16.39 \(\pm\)0.20 & 0.85 \(\pm\)0.01 & 8.75 \\ RXCJ0751-1 & 0.79 \(\pm\)0.04 & -16.95 \(\pm\)0.12 & 1.23 \(\pm\)0.01 & 9.35 \\ RXCJ0751-2 & 0.40 \(\pm\)0.05 & -17.88 \(\pm\)0.19 & 0.83 \(\pm\)0.01 & 9.14 \\ RXCJ0751-3 & 0.78 \(\pm\)0.03 & -17.49 \(\pm\)0.22 & 2.23 \(\pm\)0.01 & 8.88 \\ RXCJ1206-1 & 0.60 \(\pm\)0.03 & -16.17 \(\pm\)0.25 & 1.56 \(\pm\)0.07 & 8.52 \\ RXCJ1223-1 & 0.63 \(\pm\)0.08 & -16.67 \(\pm\)0.13 & 1.11 \(\pm\)0.03 & 8.78 \\ RXCJ1715-1 & 0.87 \(\pm\)0.05 & -18.22 \(\pm\)0.10 & 1.33 \(\pm\)0.01 & 9.67 \\ RXCJ1715-2 & 0.80 \(\pm\)0.04 & -17.98 \(\pm\)0.15 & 1.27 \(\pm\)0.01 & 9.79 \\ RXCJ1715-3 & 0.82 \(\pm\)0.08 & -18.15 \(\pm\)0.17 & 1.55 \(\pm\)0.04 & 9.57 \\ RXCJ2214-1 & 0.82 \(\pm\)0.07 & -17.84 \(\pm\)0.11 & 1.44 \(\pm\)0.01 & 9.43 \\ RXCJ2214-2 & 0.74 \(\pm\)0.06 & -17.88 \(\pm\)0.11 & 1.50 \(\pm\)0.02 & 9.55 \\ RXCJ2214-3 & 0.72 \(\pm\)0.04 & -15.73 \(\pm\)0.23 & 0.91 \(\pm\)0.02 & 8.54 \\ \hline \end{tabular}
\end{table}
Table 4: Optical Properties of the 20 X-ray Emitting Galaxies
\begin{table}
\begin{tabular}{l c c c c c c} \hline Name & RA & Dec & \(N_{H}\) & Photon Index & Flux & \(logL_{X}\) & \(\chi^{2}\) / dof \\ & (deg) & (deg) & (\(10^{20}\) cm\({}^{-2}\)) & \(\Gamma\) & (\(10^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\)) & (erg s\({}^{-1}\)) & \\ \hline A1367-1 & 176.152 & 19.893 & 1.80 & \(1.77^{+0.33}_{-0.20}\) & 4.05 \(\pm\)0.25 & 40.62 & 26.45/36 \\ A1367-2 & 176.160 & 19.735 & 1.80 & \(1.72^{+0.26}_{-0.26}\) & 2.07\(\pm\)0.18 & 40.33 & 65.18/69 \\ A1367-3 & 176.257 & 19.764 & 1.80 & 1.75 & 1.42 \(\pm\)0.34 & 40.17 & 4.85/5 \\ A1367-4 & 176.298 & 19.702 & 1.80 & 1.75 & 1.19\(\pm\)0.26 & 40.09 & 16.38/13 \\ A1367-5 & 176.070 & 19.738 & 1.80 & 1.75 & 2.37\(\pm\)0.27 & 40.39 & 26.79/26 \\ A262-1 & 28.210 & 36.155 & 6.80 & \(2.17^{+0.21}_{-0.20}\) & 2.94\(\pm\)0.18 & 40.25 & 128.19/72 \\ A262-2 & 28.170 & 36.198 & 6.80 & \(2.00\) & \(0.56\pm\)0.10 & 39.53 & 3.82/5 \\ A262-3 & 28.261 & 36.160 & 6.80 & \(1.92^{+1.65}_{-0.20}\) & 0.67\(\pm\)0.14 & 39.60 & 43.86/50 \\ RXJ0123-1 & 20.892 & 33.252 & 5.26 & \(2.39^{+0.29}_{-0.20}\) & 2.74\(\pm\)0.30 & 40.12 & 35.22/24 \\ RXCJ0751-1 & 117.609 & 50.156 & 5.61 & \(0.96^{+0.66}_{-0.60}\) & 1.66\(\pm\)0.21 & 40.29 & 6.54/7 \\ RXCJ0751-2 & 117.864 & 50.169 & 5.61 & \(1.54^{+1.11}_{-1.11}\) & 0.59\(\pm\)0.16 & 39.84 & 2.71/6 \\ RXCJ0751-3 & 117.671 & 50.175 & 5.61 & \(1.50\) & \(0.17\pm\)0.03 & 39.30 & 5.79/3 \\ RXCJ1206-1 & 181.712 & 28.108 & 1.72 & \(1.02^{+0.63}_{-0.20}\) & 2.93\(\pm\)0.25 & 40.73 & 10.85/9 \\ RXCJ1223-1 & 185.742 & 10.630 & 2.71 & \(2.49^{+0.21}_{-1.00}\) & 0.85\(\pm\)0.14
ies from the NuSTAR serendipitous survey (Lansbury et al., 2017), which is capable of probing hard X-ray emission up to 24 keV.
The X-ray flux of our sample of 20 galaxies ranges from 1.7\(\times\)10\({}^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) to 4.1\(\times\)10\({}^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\). The corresponding X-ray luminosities (assuming isotropic emission at their corresponding distances) vary from 2\(\times\)10\({}^{39}\) erg s\({}^{-1}\) to 5.4\(\times\)10\({}^{41}\) erg s\({}^{-1}\). Note that even the lowest X-ray luminosity exceeds the Eddington luminosity limit for a typical neutron star (mass of 1.4 M\(\odot\) and radius of 10 km). Mezcua et al. (2016) also analyzed star-forming dwarf galaxies in the COSMOS field up to z = 1.5, and found an excess of X-ray emission that is attributed to a population of accreting BHs, accounting for expected contributions from low-mass and high-mass X-ray binaries (LMXBs and HMXBs, respectively) as well as X-ray emission from hot gas.
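The neutron-star argument above is a direct comparison with the Eddington limit, \(L_{\rm Edd}\approx 1.26\times 10^{38}\,(M/M_{\sun})\) erg s\({}^{-1}\); the short sketch below makes the comparison explicit for the luminosity range quoted in this section.

```python
L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass (erg/s), ionized hydrogen

def eddington_luminosity(mass_msun):
    """Eddington luminosity in erg/s for a compact object of the given mass."""
    return L_EDD_PER_MSUN * mass_msun

l_edd_ns = eddington_luminosity(1.4)           # typical 1.4 M_sun neutron star
lx_min, lx_max = 2.0e39, 5.4e41                # sample luminosity range from the text
print(lx_min / l_edd_ns, lx_max / l_edd_ns)    # even the faintest source is ~10x super-Eddington
```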
X-ray binaries (XRBs) are expected to make a significant contribution, primarily through young HMXBs and longer-lived LMXBs. LMXBs are typically found in early-type galaxies with very low star formation rates and ages exceeding a few Gyr (see review by Fabbiano, 2006). The luminosity of LMXBs can range up to 10\({}^{38}\) erg s\({}^{-1}\), while HMXBs can have luminosities higher than \(\sim\) 10\({}^{39}\) erg s\({}^{-1}\). We can consider the contribution of HMXBs given our sample's luminosity range. On the other hand, we expect that galaxies with larger SFRs will have more significant contributions from HMXBs, but dwarf galaxies are mostly passive and old systems.
Lehmer et al. (2016) reported an intriguing observation that a larger population of HMXBs is found in regions with lower metallicity, and that the luminosity function of HMXBs at lower luminosity (L\({}_{x}\) < 10\({}^{38}\) erg s\({}^{-1}\)) does not show significant sensitivity to changes in metallicity. With the purpose of determining the impact of metallicity, we calculate the gas-phase metallicity4 for three out of the total of twenty galaxies in our study. This limited number of calculations was due to the lack of optical spectral observations for the majority of the galaxies. The calculated gas-phase metallicity values for the three galaxies are as follows: 8.48 for A1367-1, 8.43 for RXCJ0751-1, and 8.47 for RXCJ2214-2. A detailed description of the analysis is given in Appendix A.
Footnote 4: Here, we adopt the term βmetallicityβ to refer to the gas-phase oxygen abundance, which is measured in units of 12 + log (O/H). The ratio of O/H represents the abundance of oxygen relative to hydrogen by number. The solar metallicity is defined in this scale as 8.69 (Allende Prieto et al., 2001), while the metallicities of the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are 8.4 and 8.0, respectively (Garnett, 1999).
The relation between size and color in dwarf galaxies in cluster environments is similar to that observed in the general population of dwarf galaxies. According to the analysis conducted, the dwarf galaxies in the cluster environment are on the red sequence, which means that they are relatively old and passive objects. It does not appear that the contribution of HMXBs plays a significant role in our sample.
In order to investigate the effect of AGN, we calculated the black hole masses of galaxies using three approaches in the literature.
The virial method has been a useful tool in estimating the masses of these objects in quiescent galaxies. Reines and Volonteri (2015) employed this method to estimate black hole masses and found a linear \(M_{BH}\)-\(M_{*}\) relation, indicating that the black hole mass scales proportionally with the host galaxy's stellar mass. However, they did not take into account the bias introduced by resolving the black hole's sphere of influence. Shankar et al. (2016) adopted a combination of dynamical modeling and the virial method to estimate black hole masses in galaxies. Moreover, they accounted for the bias introduced by resolving the black hole's sphere of influence, resulting in a lower normalization than that derived by Reines and Volonteri (2015). Note that the highest stellar mass in our sample is 1.4\(\times\)10\({}^{10}\) M\(\odot\). However, the de-biased \(M_{BH}\)-\(M_{*}\) relation by Shankar et al. (2016) was derived for galaxies with \(M_{*}\) > 2\(\times\)10\({}^{10}\) M\(\odot\). This relation yields significantly lower mass for the low mass systems in our sample, down to stellar mass black hole regime. To further investigate the \(M_{BH}\)-\(M_{*}\) relation, Suh et al. (2020) employed a Bayesian approach to estimate black hole masses in early-type galaxies up to z \(\sim\) 2.5. Considering the stellar mass range and redshifts, the scaling relation by Reines and Volonteri (2015) is the best resemblance to our sample among the three approaches. In this framework, the highest \(M_{BH}\) reaches the level of 10\({}^{6.2}\)\(M\)\(\odot\). Note that even higher black hole masses were reported in dwarf galaxies using optical diagnostics (Reines et al., 2013), X-ray observations (Mezcua et al., 2018, 2023) and radio observations (Reines et al., 2020).
The X-ray to optical luminosity ratio is a useful tool for characterizing the properties of dwarf galaxies, and there have been several studies in the literature that have explored this ratio for these systems. X/O can provide information about the source of the X-ray emission, such as hot gas, star formation, or an AGN. In general, dwarf galaxies exhibiting high X-ray to optical emission ratios are presumed to contain abundant amounts of hot gas, which is likely due to a plethora of phenomena such as supernova explosions, tidal interactions, or other factors. Dwarf galaxies with low X-ray to optical emission ratios, on the other hand, may contain little hot gas and may be dominated by star formation or AGN activity (Jeltema et al., 2005; Mineo et al., 2012). Nonetheless, it's worth mentioning that the X-ray to optical emission ratio can fluctuate extensively between diverse dwarf galaxies and can also be affected by a range of parameters, such as the star formation history, the presence of an AGN, and the environment. For example, dwarf galaxies in dense cluster environments may have lower X-ray to optical emission ratios due to ram pressure stripping, which removes hot gas from these systems (Liu et al., 2019).
We find that the dwarf galaxies in our sample have X/O ratios less than -0.25 and that there exists a positive correlation between the X/O values and the flux of these galaxies. X-rays on galaxy scales are usually produced through accretion processes onto the central black holes. An increased mass accretion rate would yield an increase in X-rays (from the inner portions of the disk), as well as an increase in the optical (from the outer accretion disk). Therefore, the observed positive correlation between X/O values and X-ray flux levels could be due to variations in the mass accretion rate onto the central compact object.
The use of multiwavelength observations, especially targeting the optical and X-ray wavelength ranges, has emerged as a potential probe for elucidating the intrinsic characteristics of X-ray sources. Notably, Ultra-Luminous X-ray Sources (ULXs) display X/O ratios primarily spanning the range of 1.5 to 2.5 indicating that the X-ray flux is much higher than the optical flux in these objects (Feng and Kaaret, 2008). With the resulting values of our X-ray-to-optical flux ratios, we can conclude that our X-ray sources do not indicate the characteristics typically associated with ULXs. On the other hand, an X-ray source with X-ray-to-optical flux ratios (considering the R-band as the reference for optical flux) ranging from -1.0 to 1.7 is characteristically identified as an AGN (Maccacaro et al., 1988; Lehmer et al., 2016). Such a wide range of values suggests that AGN generally exhibits a broader distribution of X-ray to optical flux ratios, encompassing cases where the optical emission can exceed that of X-rays. Moreover, based on the position of our limited sample in the BPT diagram (Baldwin et al., 1981; see Fig. 11), a commonly employed tool for classifying galaxies based on the primary sources of
ionizing radiation, one would conclude that they would be classified as AGN rather than star-forming galaxies.
In order to achieve a more comprehensive understanding of the characteristics of dwarf galaxies, it is crucial to have extensive and detailed surveys that can provide a significant number of objects to study their kinematics, stellar populations, and metallicity. These surveys can enable the study of properties such as substructure within the dwarf population, which can obtain insights into how the cluster environment influences the formation and evolution of these galaxies and the reasons behind their X-ray emissions.
Further observations and studies are essential to better understand the nature of these X-ray emitting components. Nonetheless, hope is on the horizon as the imminent WEAVE nearby spectroscopic cluster survey holds the promise of providing us with a wealth of data. This survey will comprehensively cover all the clusters of our photometric survey and endow us with thousands of kinematic properties, such as velocity dispersion per cluster. With the help of this upcoming data, we could unravel the intricate properties of dwarf galaxies and gain invaluable insights into the role of their environment in shaping their evolution.
## Acknowledgements
We thank the anonymous referee for valuable comments and suggestions that improved the clarity and impact of our results. S. (Aydemir) acknowledges support through the 2218-National Postdoctoral Research Fellowship Program under project number 118C553 from The Scientific and Technological Research Council of Turkey (TUBITAK). A.A. acknowledges support from the ACI-ISI, Consejeria de Economia, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under the grant with reference PROID2021010044. NCC acknowledges support from 'Proyecto comite mixto ESO - Gobierno de Chile', N\({}^{\circ}\) 21119.
## Data Availability
Upon request to the corresponding authors, the optical data supporting this article will be made available. The original, unprocessed data is currently stored in the Isaac Newton Group Archive. The X-ray data underlying this article are available in the Chandra Data Archive ([https://cxc.harvard.edu/cda/](https://cxc.harvard.edu/cda/)) by searching the observation identifiers (ObsID) listed in Table 2 in the Search and Retrieval interface, ChaSeR ([https://cda.harvard.edu/chaser/](https://cda.harvard.edu/chaser/)).
|
2310.10499 | Contractibility of the geometric stability manifold of a surface | Using a recent description of the geometric stability manifold, we show the
geometric stability manifold associated to any smooth projective complex
surface is contractible. We then use this result to demonstrate infinitely many
new families of surfaces whose stability manifold is contractible. | Nick Rekuski | 2023-10-16T15:19:48Z | http://arxiv.org/abs/2310.10499v2 | # Contractibility of the geometric stability manifold of a surface
###### Abstract.
Using a recent description of the geometric stability manifold, we give a short proof showing the geometric stability manifold associated to any smooth projective complex surface is contractible.
Key words and phrases:Bridgeland stability, geometric stability conditions, contractibility, homotopy type 2020 Mathematics Subject Classification: Primary 14F08, 14J60, 18G80 The author was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award Number DE-SC-SC0022134 and an OVPR Postdoctoral Award at Wayne State University
## 1. Introduction
On a triangulated category, \(\mathcal{T}\), Bridgeland defined the notion of a stability condition which establishes a framework to study moduli of complexes of sheaves [1]. The collection of all stability conditions forms a complex manifold (with possibly infinitely many connected components), \(\operatorname{Stab}(\mathcal{T})\). Stability conditions and the stability manifold have been used to obtain new results in homological mirror symmetry, representation theory, symplectic geometry, and moduli of stable sheaves. Furthermore, \(\operatorname{Stab}(\mathcal{T})\) has an inherent richness that is worthy of study in its own right.
Recent work has focused on understanding the homotopy type of \(\operatorname{Stab}(\mathcal{T})\). This focus is partially inspired by a conjecture of Bridgeland that states if \(X\) is a \(K3\) surface then a distinguished component of \(\operatorname{Stab}(D^{b}(X))\) is simply-connected [1, Conjecture 1.2]. A positive answer to this conjecture would shed light on the group of exact autoequivalences of the bounded derived category of a \(K3\) surface. More generally, there is a folklore conjecture that if \(\operatorname{Stab}(\mathcal{T})\) is nonempty then it is homotopy discrete.
With current techniques, it seems intractable to show \(\operatorname{Stab}(\mathcal{T})\) is homotopy discrete in general. For this reason, recent research has focused on two open submanifolds of \(\operatorname{Stab}(\mathcal{T})\): the algebraic stability manifold, \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\), and the geometric stability manifold, \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})\). The algebraic stability manifold consists of stability conditions arising from full exceptional collections of \(\mathcal{T}\), and the geometric stability manifold consists of stability conditions where skyscraper sheaves are stable.
In many cases \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})\cup\operatorname{Stab}^ {\operatorname{alg}}(\mathcal{T})\) is a connected component of \(\operatorname{Stab}(\mathcal{T})\). Therefore, the homotopy types of \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\) and \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})\) provide important topological information about \(\operatorname{Stab}(\mathcal{T})\). For example, if \(\mathcal{T}=D^{b}(Q)\)--the bounded derived category of representations of a quiver \(Q\)-and the underlying graph of \(Q\) is simple and acyclic then \(\operatorname{Stab}^{\operatorname{alg}}(Q)=\operatorname{Stab}(Q)\)[1, Theorem 1.1]. Similarly, if \(\mathcal{T}=D^{b}(X)\)--the bounded derived category of coherent sheaves on a variety \(X\)--and \(X\) has finite Albanese then \(\operatorname{Stab}^{\operatorname{geo}}(X)=\operatorname{Stab}(X)\)[11, Theorem 1.1].
We give a brief overview of topological results involving \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\) and \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})\). We start with results for \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\) (see Figure 1 for illustrations of the relevant quivers).
* If \(\mathcal{T}=D^{b}(Q)\) where \(Q\) is a quiver of type \(A_{n}\), \(D_{n}\), or \(E_{n}\) then \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})=\operatorname{Stab}( \mathcal{T})\) is contractible [14, 1, 15, 16].
* If \(\mathcal{T}=D^{b}(Q)\) where \(Q\) is of type \(\widetilde{A}_{2}\) then \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})=\operatorname{Stab}( \mathcal{T})\) is contractible [17, 18].
* If \(\mathcal{T}=D^{b}(K_{n})\) where \(K_{n}\) is the Kronecker quiver (including the case \(\mathcal{T}=D^{b}(\mathbb{P}^{1})\)) then \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})=\operatorname{Stab}(\mathcal{T})\) is contractible [17, 18].
* If \(\mathcal{T}=D^{b}(P_{2})\) (equivalently \(\mathcal{T}=D^{b}(\mathbb{P}^{2})\)) then \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\) is contractible [1].
* If \(\mathcal{T}=D^{b}(\Lambda(r,n,m))\) where \(\Lambda(r,n,m)\) is the path algebra of \(Q_{n,m}\) subject to the relations \[\gamma_{n-r+1}\gamma_{n-r+2},\gamma_{n-(r-1)+1}\gamma_{n-(r-1)+2},\ldots, \gamma_{n}\gamma_{1}=0\] then \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})=\operatorname{Stab}( \mathcal{T})\) is contractible [1].
We now review results for \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})\).
* If \(\mathcal{T}=D^{b}(X)\) where \(X\) is a smooth positive genus curve then \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})=\operatorname{Stab}( \mathcal{T})\) is contractible [10],
* If \(\mathcal{T}=D^{b}(X)\) for an abelian surface \(X\) then \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})=\operatorname{Stab}( \mathcal{T})\) is simply connected and if the Neron-Severi rank of \(X\) is \(1\) then \(\operatorname{Stab}(\mathcal{T})\) is contractible [11, 12].
* If \(\mathcal{T}=D^{b}(X)\) where \(X\) is an irregular surface of Neron-Severi rank one then \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})=\operatorname{Stab}( \mathcal{T})\) is contractible [12].
* If \(\mathcal{T}=D^{b}(X)\) where \(X\) is an abelian threefold of Neron-Severi rank one then \(\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})=\operatorname{Stab}( \mathcal{T})\) is contractible [12].
There is an intermediate result on \(\mathbb{P}^{2}\) that is neither strictly geometric nor algebraic: if \(\mathcal{T}=D^{b}(\mathbb{P}^{2})\) then \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\cup\operatorname{Stab}^ {\operatorname{geo}}(\mathcal{T})\) is a contractible connected component of \(\operatorname{Stab}(\mathcal{T})\)[13].
There are cases when \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\cup\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})\) is _not_ a connected component of \(\operatorname{Stab}(\mathcal{T})\). For example, if \(\mathcal{T}=D^{b}(X)\) where \(X\) is a \(K3\) surface [1], a general Enriques surface [10], a local projective plane [11], or a surface with no exceptional collections admitting a rational curve of negative self-intersection [14] then \(\operatorname{Stab}^{\operatorname{alg}}(\mathcal{T})\cup\operatorname{Stab}^{\operatorname{geo}}(\mathcal{T})\) is not a connected component of \(\operatorname{Stab}(\mathcal{T})\). As far as the author is aware, the only topological results in this case with \(X\) a projective variety are for \(K3\) surfaces of Picard rank one [1]. There are more topological results in this case if \(X\) is not compact [11, 12].
In the case of a smooth complex surface, Dell gives a description of \(\operatorname{Stab}^{\operatorname{geo}}(X)\) and uses this description to show \(\operatorname{Stab}^{\operatorname{geo}}(X)\) is connected [1, Corollary 5.39]. We use this same description to show \(\operatorname{Stab}^{\operatorname{geo}}(X)\) is contractible.
**Theorem A** (3.2).: _If \(X\) is a smooth projective complex surface then \(\operatorname{Stab}^{\operatorname{geo}}(X)\) is contractible._
In Example 3.3 we use Theorem A and a recent result of Fu, Li, and Zhao to show that if \(X\) has finite Albanese then \(\operatorname{Stab}(X)\) is contractible. This generalizes [12, Theorem 1.2] to arbitrary Neron-Severi rank. This generalization gives infinitely many new families of surfaces with contractible stability manifold.
In Example 3.4 we use Theorem A and a recent result of Dell to show if \(X\) has finite Albanese and \(G\) acts freely on \(X\) then \(\operatorname{Stab}(X/G)\) contains a contractible connected component. In particular, the stability manifold of a Beauville-type or bielliptic surface contains a contractible distinguished component.
**Notation and Assumptions.** Suppose \(X\) is a smooth projective complex surface. We write \(\operatorname{NS}(X)\) for the Neron-Severi group of \(X\) and \(\operatorname{Amp}(X)\) for the ample cone. We also write \(\operatorname{NS}_{\mathbb{R}}(X)=\operatorname{NS}(X)\otimes\mathbb{R}\) and \(\operatorname{Amp}_{\mathbb{R}}(X)=\operatorname{Amp}(X)\otimes\mathbb{R}\). We use greek letters (e.g. \(\alpha,\beta,\phi\)) for real numbers.
The bounded derived category of \(\operatorname{Coh}(X)\) will be written \(D^{b}(X)\) and the numerical Grothendieck group of \(X\) will be written \(K_{\operatorname{num}}(X)\). We use capital script letters (e.g. \(\mathscr{E}\)) for coherent sheaves on \(X\). We use capital letters (e.g. \(E\), \(F\)) for chain complexes in \(D^{b}(X)\). We use \(\operatorname{ch}_{i}(\mathscr{E})\) for the \(i\)th Chern character of \(\mathscr{E}\) viewed as an element of \(H^{2i}(X,\mathbb{Q})\).
**Acknowledgments.** The author is thankful to Hannah Dell, Rajesh Kulkarni, and Andrew Salch for discussions related to this work.
## 2. Preliminaries
We recall some background on stability conditions.
**Definition 2.1**.: A _stability condition_ on \(D^{b}(X)\) (also called a _Bridgeland stability condition_) is a pair \(\sigma=(Z,\mathcal{P})\) where \(Z:K_{\operatorname{num}}(X)\to\mathbb{C}\) is a group homomorphism, called a _central charge_, and \(\{\mathcal{P}(\phi)\}_{\phi\in\mathbb{R}}\) is a collection of full subcategories of \(D^{b}(X)\), called a _slicing_, satisfying the following conditions
* If \(E\in\mathcal{P}(\phi)\setminus\{0\}\) then \(Z([E])\in\mathbb{R}_{>0}\exp(i\pi\phi)\)--the positive real ray spanned by \(\exp(i\pi\phi)\),
* If \(\phi\in\mathbb{R}\) then \(\mathcal{P}(\phi+1)=\mathcal{P}(\phi)[1]\),
* If \(\phi_{1}>\phi_{2}\) then \(\operatorname{Hom}_{D^{b}(X)}(\mathcal{P}(\phi_{1}),\mathcal{P}(\phi_{2}))=0\),
* If \(E\in D^{b}(X)\setminus\{0\}\) then there exist real numbers \(\phi_{1}>\phi_{2}>\cdots>\phi_{n}\) and a sequence of distinguished triangles \(E_{i-1}\to E_{i}\to F_{i}\) for a filtration \(0=E_{0}\to E_{1}\to\cdots\to E_{n}=E\), where \(F_{i}\in\mathcal{P}(\phi_{i})\), and
* \(\inf\limits_{\phi\in\mathbb{R}}\left\{\frac{|Z([E])|}{||[E]||}:E\in \mathcal{P}(\phi)\setminus\{0\}\right\}>0\) for some norm \(||\cdot||\) on \(K_{\operatorname{num}}(X)\).
If \(\sigma\) is a stability condition and \(E\in\mathcal{P}(\phi)\setminus\{0\}\) then we say \(E\) is \(\sigma\)_-semistable of phase \(\phi\)_. If \(E\in\mathcal{P}(\phi)\setminus\{0\}\) is simple in \(\mathcal{P}(\phi)\) then we say \(E\) is \(\sigma\)_-stable of phase \(\phi\)_.
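For orientation, a standard example of Definition 2.1 (due to Bridgeland and not specific to the surface case studied here) is the stability condition on a smooth projective curve \(C\) built from slope stability: take

\[Z(E)=-\deg(E)+i\operatorname{rank}(E),\qquad\mathcal{P}(\phi)=\{\text{slope-semistable sheaves of slope }-\cot(\pi\phi)\}\cup\{0\}\ \text{ for }\phi\in(0,1),\]

with \(\mathcal{P}(1)\) the torsion sheaves, extended to all \(\phi\in\mathbb{R}\) by \(\mathcal{P}(\phi+1)=\mathcal{P}(\phi)[1]\). The filtrations required in Definition 2.1 are obtained by combining the truncation triangles of the standard t-structure with the Harder-Narasimhan filtrations of the cohomology sheaves.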
We write the collection of all stability conditions on \(X\) as \(\operatorname{Stab}(X)\). Bridgeland described a natural metric on \(\operatorname{Stab}(X)\) so that \(\operatorname{Stab}(X)\) is a complex manifold (with possibly infinitely many connected components) [1, Theorem 1.2]. For our purposes, we restrict our attention to an open submanifold of \(\operatorname{Stab}(X)\) where all skyscraper sheaves are stable.
**Definition 2.2**.: A stability condition on \(\sigma\in\operatorname{Stab}(X)\) is said to be _geometric_ if the skyscraper sheaf \(\mathscr{O}_{x}\) is \(\sigma\)-stable for all \(x\in X\). We write the collection of all geometric stability conditions on \(D^{b}(X)\) as \(\operatorname{Stab}^{\operatorname{geo}}(X)\).
Commonly, geometric stability conditions are defined to be stability conditions where skyscraper sheaves are all stable of the same phase. We have dropped this additional assumption from Definition 2.2 because [1, Proposition 2.9] has shown it is already implied by the stability assumption.
The subspace \(\operatorname{Stab}^{\operatorname{geo}}(X)\) is open in \(\operatorname{Stab}(X)\), so each connected component of \(\operatorname{Stab}^{\operatorname{geo}}(X)\) is a complex submanifold of some connected component of \(\operatorname{Stab}(X)\)[1, Proposition 9.4].
Dell recently gave an explicit description of \(\operatorname{Stab}^{\operatorname{geo}}(X)\) in terms of a generalization of the Le Potier function. We first recall this function and then recall the description of \(\operatorname{Stab}^{\operatorname{geo}}(X)\).
**Definition 2.3** ([16, Definition 3.1],[17, Definition 5.8]).: Suppose \(X\) is a smooth projective complex surface. We define the _Le Potier function_\(\Phi:\operatorname{Amp}_{\mathbb{R}}(X)\times\operatorname{NS}_{\mathbb{R}}(X) \times\mathbb{R}\to\mathbb{R}\) as
\[\Phi(H,D,\beta)=\limsup_{\mu\to\beta}\left\{\frac{\operatorname{ch}_{2}( \mathscr{E})-D\cdot\operatorname{ch}_{1}(\mathscr{E})}{H^{2}\operatorname{ rank}(\mathscr{E})}:\begin{array}{l}\mathscr{E}\in\operatorname{Coh}(X) \text{ is slope stable (with respect}\\ \text{to H})\text{ and }(H\cdot\operatorname{ch}_{1}(\mathscr{E}))/ \operatorname{rank}(\mathscr{E})=\mu\end{array}\right\}.\]
We recall Dell's description of \(\operatorname{Stab}^{\operatorname{geo}}(X)\) in terms of the Le Potier function. A similar result was shown in [16, Proposition 3.6].
**Theorem 2.4** ([15, Theorem 5.10]).: _Suppose \(X\) is a smooth projective complex surface. There is a homeomorphism_
\[\operatorname{Stab}^{\operatorname{geo}}(X)\cong\mathbb{C}\times\left\{(H,D, \beta,\alpha)\in\operatorname{Amp}_{\mathbb{R}}(X)\times\operatorname{NS}_{ \mathbb{R}}(X)\times\mathbb{R}\times\mathbb{R}:\Phi(H,D,\beta)<\alpha\right\}.\]
We broadly sketch the proof of this theorem. In the case of surfaces, geometric stability conditions are determined by their central charge [11, Proposition 10.3], [15, Theorem 5.5]. By considering the central charge of a skyscraper sheaf and ideal sheaves of curves there is a continuous injection \(i:\operatorname{Stab}^{\operatorname{geo}}(X)\to\mathbb{C}\times\operatorname{Amp}_{\mathbb{R}}(X)\times\operatorname{NS}_{\mathbb{R}}(X)\times\mathbb{R}\times\mathbb{R}\) [11, Section 10], [14, Proposition 3.6], [15, Theorem 5.5]. By directly constructing a quadratic form, Dell shows that the support property is satisfied by exactly those \((\lambda,H,D,\beta,\alpha)\in i(\operatorname{Stab}^{\operatorname{geo}}(X))\) satisfying \(\Phi(H,D,\beta)<\alpha\) [15, Lemma 5.32].
## 3. Contractibility of the Geometric Stability Manifold
Contractibility of \(\operatorname{Stab}^{\operatorname{geo}}(X)\) will follow from Theorem 2.4 and the following topological result. This lemma shows that the region above the graph of a "sufficiently nice" function is homotopy equivalent to the base space. Note that we cannot assume the function is continuous, because the Le Potier function is not continuous in general.
**Lemma 3.1**.: _Suppose \(Z\) is a topological space and \(f:Z\to\mathbb{R}\) is a function (not necessarily continuous). Define_
\[\Gamma_{f}^{<}=\left\{(z,\alpha)\in Z\times\mathbb{R}\mid f(z)<\alpha\right\}\]
_which we give the subspace topology. If there exists a continuous function \(g:Z\to\mathbb{R}\) satisfying \(f(z)\leq g(z)\) for all \(z\in Z\) then \(\Gamma_{f}^{<}\) and \(Z\) are homotopy equivalent._
Proof.: By replacing \(g(z)\) with \(g(z)+1\) we may assume \(f(z)<g(z)\) (this assumption makes the notation somewhat simpler). Since \(f(z)<g(z)\) for all \(z\in Z\), the function \(F:\Gamma_{f}^{<}\times[0,1]\to\Gamma_{f}^{<}\) defined by
\[F((z,\alpha),t)=(z,\max\{\alpha,(g(z)-\alpha)t+\alpha\})\]
is well-defined and continuous. Moreover, since \(\max\{\alpha,g(z)\}\geq g(z)\), \(F\) is a strong deformation retraction of \(\Gamma_{f}^{<}\) onto
\[\Gamma_{g}^{\leq}=\left\{(z,\beta)\in Z\times\mathbb{R}\mid g(z)\leq\beta\right\}.\]
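Indeed, unpacking the formula for \(F\) gives
\[F((z,\alpha),0)=(z,\alpha),\qquad F((z,\alpha),1)=(z,\max\{\alpha,g(z)\})\in\Gamma_{g}^{\leq},\]
and if \(\alpha\geq g(z)\) then \(F((z,\alpha),t)=(z,\alpha)\) for every \(t\in[0,1]\), so \(\Gamma_{g}^{\leq}\) is fixed pointwise throughout the homotopy.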
Furthermore, the function \(G:\Gamma_{g}^{\leq}\times[0,1]\to\Gamma_{g}^{\leq}\) defined by
\[G((z,\beta),t)=(z,\beta(1-t)+g(z)t)\]
is a strong deformation retraction of \(\Gamma_{g}^{\leq}\) onto the graph of \(g\). Since \(g\) is continuous, the graph of \(g\) is homeomorphic to \(Z\). Hence, we have shown \(\Gamma_{f}^{<}\) is homotopy equivalent to \(Z\), as desired.
**Theorem 3.2**.: _Suppose \(X\) is a smooth projective surface over \(\mathbb{C}\). The manifold \(\operatorname{Stab}^{\operatorname{geo}}(X)\) is contractible._
Proof.: By [15, Lemma 4.6], we have
\[\Phi(H,D,\beta)\leq\frac{1}{2}\left(\left(\beta-\frac{D\cdot H}{H^{2}}\right) ^{2}-\frac{D^{2}}{H^{2}}\right)\]
for all \((H,D,\beta)\in\operatorname{Amp}_{\mathbb{R}}(X)\times\operatorname{NS}_{ \mathbb{R}}(X)\times\mathbb{R}\) where \(\Phi\) is the generalized Le Potier function (see Definition 2.3). Therefore, by Theorem 2.4 and Lemma 3.1, it suffices to show \(\operatorname{Amp}_{\mathbb{R}}(X)\times\operatorname{NS}_{\mathbb{R}}(X) \times\mathbb{R}\) is contractible.
Since \(\operatorname{NS}(X)\) is a finitely generated abelian group, there is a homeomorphism \(\operatorname{NS}_{\mathbb{R}}(X)\cong\mathbb{R}^{m}\) for some \(m\geq 1\); in particular, \(\operatorname{NS}_{\mathbb{R}}(X)\) is contractible. Since \(\operatorname{Amp}_{\mathbb{R}}(X)\) is a convex open cone in \(\operatorname{NS}_{\mathbb{R}}(X)\cong\mathbb{R}^{m}\), \(\operatorname{Amp}_{\mathbb{R}}(X)\) is also contractible. Thus, \(\operatorname{NS}_{\mathbb{R}}(X)\times\operatorname{Amp}_{\mathbb{R}}(X)\times\mathbb{R}\) is contractible and so \(\operatorname{Stab}^{\operatorname{geo}}(X)\) is also contractible, as desired.
We use Theorem 3.2 to give new examples of smooth projective surfaces with contractible stability manifold.
_Example 3.3_ (Varieties with Finite Albanese).: If \(X\) is a surface with finite Albanese (i.e. the Albanese morphism \(\operatorname{alb}_{X}:X\to\operatorname{Alb}(X)\) is finite onto its image) then \(\operatorname{Stab}^{\operatorname{geo}}(X)=\operatorname{Stab}(X)\)[14, Theorem 1.1]. Therefore, by Theorem 3.2, \(\operatorname{Stab}(X)\) is contractible. This example generalizes [14, Theorem 1.2] to any Neron-Severi rank.
A variety \(X\) has finite Albanese if and only if \(X\) is a finite cover of a subvariety of an abelian variety. Therefore, any product of varieties with finite Albanese also has finite Albanese. In particular, if \(X\) is a
finite cover of \(C_{1}\times C_{2}\) for smooth positive genus curves \(C_{1},C_{2}\) or \(X\) is a finite cover of an abelian surface then \(\operatorname{Stab}(X)\) is contractible. This gives infinitely many new families of surfaces with contractible stability manifold.
We also obtain some topological results for \(\operatorname{Stab}(X)\) in cases where \(X\) does not have finite Albanese.
_Example 3.4_ (Free Quotients of Varieties with Finite Albanese).: If \(X\) has finite Albanese and \(G\) acts freely on \(X\) then \(\operatorname{Stab}^{\operatorname{geo}}(X/G)\) is a connected component of \(\operatorname{Stab}(X)\)[13, Corollary 3.10]. Therefore, by Theorem 3.2, \(\operatorname{Stab}(X)\) contains a contractible component.
This example applies to Beauville-type surfaces and bielliptic surfaces. For these surfaces the irregularity is \(0\) and \(1\) respectively so they do not have finite Albanese.
|
2303.05812 | Semi-supervised Adversarial Learning for Complementary Item
Recommendation | Complementary item recommendations are a ubiquitous feature of modern
e-commerce sites. Such recommendations are highly effective when they are based
on collaborative signals like co-purchase statistics. In certain online
marketplaces, however, e.g., on online auction sites, constantly new items are
added to the catalog. In such cases, complementary item recommendations are
often based on item side-information due to a lack of interaction data. In this
work, we propose a novel approach that can leverage both item side-information
and labeled complementary item pairs to generate effective complementary
recommendations for cold items, i.e., for items for which no co-purchase
statistics yet exist. Given that complementary items typically have to be of a
different category than the seed item, we technically maintain a latent space
for each item category. Simultaneously, we learn to project distributed item
representations into these category spaces to determine suitable
recommendations. The main learning process in our architecture utilizes labeled
pairs of complementary items. In addition, we adopt ideas from Cycle Generative
Adversarial Networks (CycleGAN) to leverage available item information even in
case no labeled data exists for a given item and category. Experiments on three
e-commerce datasets show that our method is highly effective. | Koby Bibas, Oren Sar Shalom, Dietmar Jannach | 2023-03-10T09:39:18Z | http://arxiv.org/abs/2303.05812v1 | # Semi-supervised Adversarial Learning for Complementary Item Recommendation
###### Abstract.
Complementary item recommendations are a ubiquitous feature of modern e-commerce sites. Such recommendations are highly effective when they are based on collaborative signals like co-purchase statistics. In certain online marketplaces, however, e.g., on online auction sites, constantly new items are added to the catalog. In such cases, complementary item recommendations are often based on item side-information due to a lack of interaction data. In this work, we propose a novel approach that can leverage both item side-information and labeled complementary item pairs to generate effective complementary recommendations for cold items, i.e., for items for which no co-purchase statistics yet exist. Given that complementary items typically have to be of a different category than the seed item, we technically maintain a latent space for each item category. Simultaneously, we learn to project distributed item representations into these category spaces to determine suitable recommendations. The main learning process in our architecture utilizes labeled pairs of complementary items. In addition, we adopt ideas from Cycle Generative Adversarial Networks (CycleGAN) to leverage available item information even in case no labeled data exists for a given item and category. Experiments on three e-commerce datasets show that our method is highly effective.
Recommender systems, Complementary Items, CycleGAN
Footnote †: Work was done while with Meta.
which we derive with the help of side-information, into these latent spaces. This process finally allows us to locate items in a given category space that are close to the seed item. The basic learning process in our architecture is based on _supervised learning_ using labeled pairs of complementary items. We note that these pairs of items, which can for example be obtained by analyzing existing shopping baskets of a site, do _not_ include the seed item, as we supply recommendations for cold items. Instead, they help us learn how categories are related. Since these labeled pairs can be sparse for certain categories, we furthermore incorporate ideas from Cycle Generative Adversarial Networks (CycleGAN) (Zhu et al., 2017) in our architecture. This allows us to leverage available item information even when no labeled data exists for a given item and category.
Overall, we therefore suggest a novel _semi-supervised_ model for complementary item recommendations. An evaluation of the model on three e-commerce datasets confirms that it compares favorably with alternative approaches under different experimental settings. Moreover, the experiments clearly demonstrate the benefits of extending the architecture with CycleGAN elements. To our knowledge, utilizing CycleGAN concepts for improved complementary item recommendation has not been explored in the literature before.
## 2. Background and Related Work
A common technical approach to determine related items in real-world e-commerce, e.g., on item detail pages2, is to rely on co-purchasing patterns. Recommendations based on such simple shopping basket analyses can be surprisingly effective (Zhu et al., 2017; Li et al., 2018). We note, however, that the items returned by such approaches are not necessarily complementary items, and it may happen that two particular shirts are frequently bought together (Krishnan et al., 2017). In our work, we therefore assume that a complementary item belongs to one among the pre-defined complementary categories, which are different from the seed item's category (e.g., pants in the case of shirts).
Footnote 2: Such recommendations are often not personalized according to long-term user profiles, which is also one assumption of our present work. See (Zhu et al., 2017) for a personalized approach at eBay.
Another type of recommendations commonly seen in practice are often shown under labels such as "Similar Items" or "Related Items", some of which may serve as substitutes (Bahdan et al., 2016; Li et al., 2018; Li et al., 2018; Li et al., 2018). Finding similar items is another relevant problem, but almost the opposite of our research focus. _Related items_, see, e.g., (Li et al., 2018), on the other hand, can in theory include both complements (e.g., accessories) and substitutes, i.e., alternatives. Several works (Bahdan et al., 2016; Li et al., 2018; Li et al., 2018) aim to find complementary products, but these often cannot distinguish between similar and complementary items, since the seed and the recommended items are encoded by the exact same network.
In terms of the addressed research challenge, the work by Galron et al. (Galron et al., 2017) at eBay is closest to our research, and we also adopt a similar evaluation approach. Given the sparsity of user-item interaction data, which hampers the use of traditional collaborative filtering approaches, the item-based DCF (Deep Collaborative Filtering) method proposed in (Galron et al., 2017) learns a similarity function between items based on their side-information. This similarity function is then used to retrieve recommendations for a given seed item, i.e., the neural network takes a pair of seed and target items as inputs, and returns a similarity prediction as an output. Internally, the network encodes the items as sparse feature vectors based on characteristics such as the title or category. To train the model, purchase data from eBay in the form of pairs of items that were co-purchased by the same users is used. Computational experiments and an A/B test indicate that DCF performed significantly better than the existing system at eBay. Given the proven performance of DCF in a real-world setting, we use this method as a baseline in our research. We note that in the evaluation of DCF recommendations from the same category were considered as ground truth.
_P-Companion_(Krishnan et al., 2017) is another neural framework for complementary item recommendation. This model focuses on diversifying the recommendations and as it outperformed several other methods (Krishnan et al., 2017; Li et al., 2018; Li et al., 2018), we also use it as baseline. While this model has some commonalities with our approach, it does not allow semi-supervised learning. Furthermore, it does not maintain a separate latent space per category. Instead, the various dimensions in the item embeddings are weighted differently, according to their category.
In terms of applications, various previous works focus on fashion recommendation (Bahdan et al., 2016; Li et al., 2018; Li et al., 2018; Li et al., 2018). In many cases, _visual_ approaches are highly effective (Bahdan et al., 2016). Such approaches typically consider domain-specifics and strongly rely on item images to establish relationships between different clothing items, e.g., by projecting items in a shared visual style space (Li et al., 2018). In a recent work (Bahdan et al., 2016), the authors for example use shape and color information to learn which shapes and colors are compatible. Compatibility estimation is also the focus in (Li et al., 2018; Li et al., 2018). A session-based recommendation approach, which also considers visual information is presented in (Li et al., 2018). However, such visual approaches can also have their limitations. Specifically, purely visual approaches may return items that are stylistically similar, but not complementary. In (Li et al., 2018), the authors therefore propose a complementary item recommendation approach that relies on textual information, and can be extended using recent methods (Li et al., 2018). These works are different from our approach in that our framework is not domain-specific. Furthermore, our approach considers both visual and textual side-information and in addition allows us to specify target categories for the complementary items. Explicitly making compatibility predictions between items is not in the focus of our present work, but an interesting area for future investigations. Similarly, works that aim to automatically discern if two items rather represent alternatives or complements, e.g., (Li et al., 2018; Li et al., 2018), may be integrated in our framework in the future as well to assess if they help further improve accuracy.
A number of other works exist that use datasets that contain information about co-purchased or co-viewed items like we do in our computational experiments, e.g., (Li et al., 2018; Li et al., 2018). The focus of such works however is often not primarily on making complementary item recommendations. Instead, the goal is to improve prediction accuracy in general, which is achieved by exploiting co-occurrence information in the data. No distinction is however made if the resulting recommendations actually contain substitutes or complements.
The use of adversarial training in recommender systems is widespread (Krishnan et al., 2017; Li et al., 2018; Li et al., 2018). Specifically, using GANs for complementary recommendations was proposed in (Li et al., 2018), where a generator learns how to create the image of a complementary item. However, the goal of this method is to generate an image and not an item. Another generative adversarial learning approach is proposed in (Krishnan et al., 2017). Given a seed item, a generator network creates recommendations
and a discriminator needs to distinguish between real labeled recommendations and generated ones. However, this model requires full supervision and, unlike our approach, it does not allow controlling important traits of the recommendations, like the category.
A _quality-aware_ method for complementary item recommendation is proposed in (Zhu et al., 2017). The underlying intuition is that not only the compatibility of the items matters, but also the quality of the recommended items. The quality in their approach is based on explicit item ratings, and the proposed model jointly considers user preferences and compatibility aspects. In our work, in contrast, we do not assume the existence of long-term user preference profiles and explicit ratings. Nonetheless, personalizing the recommendations (Zhu et al., 2017; Li et al., 2018; Li et al., 2019) or considering the sequential nature of user-sessions (Zhu et al., 2019) may be an aspect to explore in the future.
## 3. Overview of Proposed Approach
_Summary of Problem Setting._ We recall the specifics of our problem setting. Given a _seed_ item and a _target_ category, the task is to recommend complementary items from the target category3. The main challenge is that no interaction data yet exists for the seed item. However, we assume that each item has a known category and there is some additional side-information available for each item. Moreover, we make the assumption that there exists a labeled set of pairs of complementary items4. Such a set may be created manually or automatically derived, e.g., by analyzing co-purchase patterns of pairs of items with sufficient purchase signals. Generally, however, paucity of labeled data is an inherent problem, and a model that can cope with it is preferable.
Footnote 3: In practice, there can be multiple complementary categories and the recommendation process may thus be repeated for each category.
_Technical Approach._ Let \(C\) denote the set of all item categories in the catalog. As mentioned, one key idea of our approach is to maintain a latent space for each item category \(c\in C\). To make a recommendation for a given seed item, we first create a distributed representation (i.e., embedding) of it, and we then translate this embedding into the target category's latent space. The translation process can be understood as creating a pseudo-item by converting the seed item's representation into the latent space of the target category. The main challenge in our approach now is to learn based on sparse data how to translate items to the target latent space in a way that the important traits of the seed item are maintained. Once the seed item is positioned in the target category's latent space, plausible recommendations can be derived by selecting items that are close to it. More formally, let \(I_{c}\) be the set of items in category \(c\). To find plausible recommendations in category \(c\) for seed item \(s\), the translated representation \(v_{s}^{c}\) is generated. Then items are recommended according to their distance from \(v_{s}^{c}\): \(\operatorname*{argmax}_{i\in I_{c}}\cos(v_{s}^{c},v_{i})\), where \(\cos\) is the cosine similarity and \(v_{i}\) is a distributed representation of item \(i\).
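As an illustration of this retrieval step (a sketch, not the authors' implementation; `translate` stands in for the category translator described below, and `item_embs` is assumed to be a NumPy matrix of item representations):

```python
import numpy as np

def recommend(seed_emb, target_category, item_embs, item_categories, translate, k=10):
    """Return indices of the k items in `target_category` whose embeddings are
    closest (by cosine similarity) to the translated seed representation."""
    v_sc = translate(seed_emb, target_category)             # pseudo-item in the target space
    mask = np.asarray(item_categories) == target_category   # restrict to candidates I_c
    cands = item_embs[mask]
    sims = cands @ v_sc / (np.linalg.norm(cands, axis=1) * np.linalg.norm(v_sc) + 1e-9)
    top = np.argsort(-sims)[:k]
    return np.flatnonzero(mask)[top]                         # map back to catalog indices
```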
We recall that in our approach we aim to concentrate on a subset of _relevant_ categories for each item. Therefore, given an item's category, we determine complementary categories from item-level labeled data. To find the set of complementary categories for category \(c\in C\) we aggregate the item-level labeled pairs to the category-level to determine the set of other relevant categories5
Footnote 5: Note that the pairs in the labeled set do not contain the seed item.
_Architecture Components._ The overall architecture has two main elements. First, the architecture includes a _supervised_ learning component, which uses item side-information and the labeled set of pairs of complementary items to learn the translation between the category latent spaces. Since this labeled set of pairs may suffer from data paucity in particular for rare categories, we extend the architecture to include a sub-network that supports _unsupervised_ adversarial learning from all items in the catalog, i.e., also for items for which there is no labeled complementary item in a given category in the dataset. Thus, the two elements combined constitute a semi-supervised learning approach. Figure 1 shows the overall architecture of the model.
## 4. Supervised Learning Component
The supervised learning component has two elements. The _item encoder_ creates an embedding of the items. The _category translator_ then translates the encoded item into the latent space of any given target category.
_Item encoder._ Given an item \(i\), the model first generates its representation \(v_{i}\) based on its side-information. In many application scenarios, including fashion as discussed above, the image of the item is a highly important piece of side-information. To incorporate this information, we first feed the item image through a pre-trained image processing model. Then, we pass its output through a learned multilayer perceptron (MLP). For categorical features, the model fits an embedding for each distinct value. We note that continuous features, such as an item's price, can be dealt with by converting them to categorical features through discretization. To obtain the overall item representation, the representations of all features are concatenated and passed through another MLP. We acknowledge that other, more sophisticated approaches could be used instead of a simple MLP. Yet, we defer these potential improvements to future work. We note that the category itself is used here as another feature, which makes the _item encoder_ category-aware.
_Category translator._ The task of this element is to learn how to transfer the item representations from one latent space to another. For example, let \(s\) be a specific shirt and let \(p\) and \(n\) (standing for positive and negative) be two pairs of pants, with embeddings \(v_{s}\), \(v_{p}\) and \(v_{n}\), respectively. Let us assume that \(s\) is complemented by \(p\), but not by \(n\). Then, we would like to learn a transformation such that projecting the shirt's representation \(v_{s}\) into the pants domain will yield a representation that is more similar to \(v_{p}\) than to \(v_{n}\). We stress that there are many plausible architectures to combine the item and the category representations, some of which allow heterogeneous dimensionalities of the latent spaces of the various categories. However, our model of choice is a simple concatenation of these embeddings, followed by an MLP.
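A minimal PyTorch-style sketch of these two sub-networks (layer sizes, depth, and the chosen side-information features are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class ItemEncoder(nn.Module):
    def __init__(self, img_feat_dim, n_categories, n_price_bins, dim=128):
        super().__init__()
        self.img_mlp = nn.Sequential(nn.Linear(img_feat_dim, dim), nn.ReLU())
        self.cat_emb = nn.Embedding(n_categories, dim)      # the category is itself a feature
        self.price_emb = nn.Embedding(n_price_bins, dim)    # discretized (binned) price
        self.out = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, img_feat, category, price_bin):
        x = torch.cat([self.img_mlp(img_feat),
                       self.cat_emb(category),
                       self.price_emb(price_bin)], dim=-1)
        return self.out(x)                                   # item representation v_i

class CategoryTranslator(nn.Module):
    def __init__(self, n_categories, dim=128):
        super().__init__()
        self.cat_emb = nn.Embedding(n_categories, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, v_item, target_category):
        z = torch.cat([v_item, self.cat_emb(target_category)], dim=-1)
        return self.mlp(z)                                   # v_i^c in the target latent space
```

At inference time only these two modules are needed to produce recommendations, as noted in the next section.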
_Model training._ Training in the supervised learning component is based on the given set of labeled pairs of items. Each labeled pair consists of a seed item \(s\) and a positive complementary item \(p\) of category \(c\). For each such pair, we apply negative sampling and draw a negative item \(n\) (from category \(c\) as well). The _item encoder_ is then invoked on each of the items to obtain the representations \(v_{s}\), \(v_{p}\) and \(v_{n}\). Then, applying the _category translator_ on \(v_{s}\) yields \(v_{s}^{c}\), the representation of the seed item in the desired latent space. As a loss function, we use the triplet loss (Srivastava et al., 2017): \(\mathcal{L}\left(v_{s}^{c},v_{p},v_{n}\right)=\max\left(\mathrm{d}\left(v_{s}^{c},v_{p}\right)-\mathrm{d}\left(v_{s}^{c},v_{n}\right)+\mathrm{\alpha},0\right)\), where \(\mathrm{d}\) is a distance function (the negative cosine similarity performed well in our experiments) and \(\mathrm{\alpha}\) is a hyperparameter that sets the margin by which the positive sample should be more similar than the negative one.
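With the negative cosine similarity as the distance \(\mathrm{d}\), this loss can be sketched as follows (the margin value is an assumed hyperparameter):

```python
import torch
import torch.nn.functional as F

def triplet_loss(v_sc, v_p, v_n, margin=0.2):
    # Distances are negative cosine similarities; smaller means more similar.
    d_pos = -F.cosine_similarity(v_sc, v_p, dim=-1)
    d_neg = -F.cosine_similarity(v_sc, v_n, dim=-1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```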
## 5. Adversarial Learning Component
The effectiveness of any _supervised_ learning approach is bound by the available labeled data. In complementary item recommendation, data paucity, as mentioned, is a common problem. In particular for more rare categories there may not be a sufficient amount of labeled pairs available for effective learning. Therefore, our architecture includes an adversarial learning component which implements _unsupervised_ learning. Thereby, we aim to leverage information from any item in the catalog, even if there are no labeled complementary items for it in a given category. In our example, this would be the case if we are given a particular shirt and there is no complementary item in the pants category in the training data. Ultimately, the combination of both components leads to a _semi-supervised_ learning approach. We emphasize that the adversarial component is only needed to further improve the training process. At inference time, only the outputs of the _item encoder_ and the _category translator_ are used to determine the recommendations.
### Architecture Elements
The main goal of this component is to further improve the effectiveness of the _category translator_. To ensure that the _category translator_ is not misguided when incorporating information about items without labels, the adversarial component takes inspiration from CycleGAN (Zhu et al., 2017). CycleGAN was designed in the context of image-to-image translation problems, e.g., to translate a summer landscape into winter. A central idea in these networks is the concept of _cycle consistency_. For the mentioned example, a translation would be cycle consistent if we end up close to the original image when we translate the winter landscape back. Ultimately, cycle consistency ensures that inputs and outputs are paired in a meaningful way.
Technically, our adversarial learning component has two main sub-networks, the _classifier_ and the _reconstructor_. The _classifier_ receives an item representation \(v_{i}\) and a category \(c\) as an input and returns the probability that item \(i\) belongs to category \(c\). To this end, the _classifier_ assigns a score for each category by passing \(v_{i}\) to an MLP, where the last layer is of size \(|C|\). Then, these scores are turned into probabilities using _softmax_ and the probability of the item belonging to category \(c\) is extracted. Finally, the translated representation \(v_{i}^{c}\) is sent to the _classifier_ to affirm it indeed looks like a representation of an item in category \(c\). This auxiliary component motivates the _category translator_ to produce reliable embeddings. However, it might be incapable of training a good recommender, because it is not guaranteed that important traits of the seed item are preserved. In our running example, the translated representation may seem like it belongs to a pair of pants, but it might not lead to good recommendations, as it is not constrained to retain either concrete or abstract properties of the original shirt like style, age group, or price.
To address this issue, the _reconstructor_ is introduced, which encourages the _category translator_ to be _cycle consistent_ and to retain the traits of the seed item. Specifically, this network receives a translated representation \(v_{i}^{c}\) and the original category of the seed item. It then aims to reconstruct the original representation of the seed item \(v_{i}\) by returning a vector \(v_{i}^{\prime}\) such that \(v_{i}\approx v_{i}^{\prime}\).
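A rough PyTorch-style sketch of these two sub-networks follows (dimensions and depths are illustrative assumptions):

```python
import torch
import torch.nn as nn

class CategoryClassifier(nn.Module):
    def __init__(self, dim, n_categories):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_categories))

    def forward(self, v, category):
        probs = torch.softmax(self.mlp(v), dim=-1)                    # one probability per category
        return probs.gather(-1, category.unsqueeze(-1)).squeeze(-1)   # P(v belongs to `category`)

class Reconstructor(nn.Module):
    def __init__(self, n_categories, dim):
        super().__init__()
        self.cat_emb = nn.Embedding(n_categories, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, v_translated, original_category):
        z = torch.cat([v_translated, self.cat_emb(original_category)], dim=-1)
        return self.mlp(z)                                             # v_i', which should approximate v_i
```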
### Model training
The adversarial learning component incorporates two types of loss functions: for the _reconstructor_ and for the _classifier_. To allow end-to-end learning of the entire architecture, the final loss is given by a weighted sum of these two losses and the loss function of the supervised learning component. The weights for the losses constitute a convex combination and are determined by hyperparameters.
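For example (the specific weight values below are assumptions, not the tuned hyperparameters of the paper):

```python
def total_loss(l_triplet, l_cycle, l_classifier, w=(0.6, 0.2, 0.2)):
    # The three weights form a convex combination and are treated as hyperparameters.
    return w[0] * l_triplet + w[1] * l_cycle + w[2] * l_classifier
```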
Figure 1. The proposed model architecture. Elements with dashed lines are part of the adversarial learning network.
_Cycle consistency loss_. The loss of the _reconstructor_ in our approach is the squared Euclidean distance \(\|v_{i}-v_{i}^{\prime}\|^{2}\). Since both terms in the loss, \(v_{i}\) and \(v_{i}^{\prime}\), are outputs of learned networks, a degenerated, yet optimal loss can be achieved. For instance, if the _item encoder_ and the _category translator_ return the same fixed value, regardless of the input, then the loss would be 0. To overcome this issue, we recall that the purpose of this loss is to motivate the _category translator_ to not dismiss important traits in the seed item, represented by \(v_{i}\). Therefore, \(v_{i}\) serves as a label for this loss. Consequently, we stop the gradients from backpropagating through \(v_{i}\) for this loss. We point out that this procedure still allows further improving the _item encoder_, due to the Jacobian obtained from \(v_{i}^{\prime}\).
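In code, stopping the gradients that would otherwise flow through the target \(v_{i}\) amounts to detaching it before computing the loss (a sketch):

```python
def cycle_consistency_loss(v_i, v_i_prime):
    # v_i is detached so it only acts as a target; gradients still reach the
    # item encoder, category translator, and reconstructor through v_i_prime.
    return ((v_i.detach() - v_i_prime) ** 2).sum(dim=-1).mean()
```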
_Classifier loss_. As mentioned above, given an item representation and a category, the _classifier_ outputs the probability \(p\) that the represented item belongs to this category. Conventionally, its loss is cross-entropy and since it is supplied with a single true category, the loss is contracted to \(-\)log \(p\). We should bear in mind that only a well trained classifier can challenge the _category translator_ and thereby allow it to generate suitable representations. This poses several difficulties and rules out the feasibility of a naive implementation of the architecture. We discuss these challenges and how we addressed them as follows. To optimally train the _classifier_ we supply it with two types of training instances.
The first type consists of genuine outputs of the _item encoder_. Namely, we invoke the _classifier_ twice, with the embeddings of the seed item \(v_{\text{s}}\) and the positive item \(v_{\text{p}}\)6, where the label categories are the ones from the catalog. However, we note that a standard implementation would also affect the _item encoder_. Specifically, it can ultimately lead to a degenerated model, as the category is an input to the _item encoder_. That way, the _item encoder_ is motivated to cooperate with the _category translator_ so as to excel at this loss at the expense of the true objective of the model, which is supplying recommendations. For example, the _item encoder_ may output the category embedding, while ignoring the rest of its input features. We therefore implemented the model in a way that we prevent the gradients of the _classifier_ from flowing through the parameters of the _item encoder_. Our experiments revealed that this led to significantly improved performance.
Footnote 6: For running time considerations the negative item's embedding is not fed to the _classifier_, although it could be trivially added.
As for the second type of input for the _classifier_, we recall that the _category translator_ aims to create an embedding \(v_{\text{s}}^{c}\), which makes the _classifier_ classify \(v_{\text{s}}^{c}\) as if it belongs to category \(c\). In order to avert a situation where the _category translator_ finds edge cases (adversarial examples) that only deceive the _classifier_, but do not look like real vectors of items in category \(c\), it is important that the _classifier_ is trained on the output of the _category translator_ \(v_{\text{s}}^{c}\), with label category \(c\). If done this way, \(v_{\text{s}}^{c}\) is used to train both the _category translator_ and the _classifier_, but each of them has a different objective. The _category translator_ aims to fool the _classifier_, while the latter needs to challenge the _category translator_. To address this issue, we use _adversarial training_ with a gradient reversal layer. This means that during backpropagation the _category translator_ obtains the original gradients, while the _classifier_ is directed by the additive inverse of the gradients.
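A gradient reversal layer is commonly implemented as an identity in the forward pass that flips the gradient sign in the backward pass; a generic sketch (the scaling factor `lam` is an assumption) is:

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)                     # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None     # sign-flipped (and scaled) gradient

# Inserted appropriately, this layer makes the two sub-networks optimize opposite
# objectives for the -log p term: as described above, the category translator keeps
# the original gradients while the classifier is driven by their additive inverse.
```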
We note that this model is scalable since its training time is linear in the size of the labeled set and item-category pairs; also, the number of parameters grows linearly with the number of categories.
### Specific CycleGAN Adaptations
Here, we lay out the commonalities and differences of CycleGAN and our approach in more detail. Specifically, this exposition will clarify how we transferred ideas from CycleGAN to the complementary item recommendation problem in an innovative way.
CycleGAN is a variant of Generative Adversarial Networks (GAN) (Goodfellow et al., 2016), which includes two generators and two discriminators that are trained simultaneously. Let \(X\) be the source domain and \(Y\) the target domain for a given translation problem (e.g., summer to winter). Generator \(\mathsf{G}\) learns to transform images from \(X\) to \(Y\). Generator \(\mathsf{F}\) learns to transform images from \(Y\) to \(X\). Discriminator \(\mathsf{D\_X}\) learns to differentiate between genuine images of \(X\) and generated images \(F(Y)\). That is, its objective is to return a high probability value for \(x\in X\) and a low probability value for \(\mathsf{F}(y)\), for \(y\in Y\). Similarly, discriminator \(\mathsf{D\_Y}\) operates in domain \(Y\). For simplicity, we collectively refer to \(\mathsf{D\_X}\) and \(\mathsf{D\_Y}\) as \(\mathsf{D}\), which represents a discriminator that distinguishes between genuine and generated images. Training of these networks is done using adversarial training. It is generally desired that \(\mathsf{G}(x)\) does not dismiss important characteristics of the original image. This would guarantee that \(\mathsf{G}(x)\) is a translation of \(x\), rather than just an image that seems to belong to \(Y\), but has no affinity to \(x\). To this end, the cycle consistency loss is introduced. Namely, this loss asks to minimize the difference between \(x\) and \(\mathsf{F}(\mathsf{G}(x))\).
We first explain that in a sense, the _item encoder_ serves as the real distribution, the _category translator_ as \(\mathsf{G}\), the _classifier_ as \(\mathsf{D}\), and the _reconstructor_ as \(\mathsf{F}\). Given an input representation \(v\) and a category \(c\), the _classifier_ works as follows. If \(v\) was generated directly by the _item encoder_, then it should confirm that the item belongs to the declared category \(c\); otherwise, \(v\) was further translated by the _category translator_, and the _classifier_ should reject an assumed affinity between the item and the category. Therefore, similar to GANs, the _classifier_ (\(\mathsf{D}\)) aims to output a probability close to one for inputs drawn from the _item encoder_ (real distribution) and a probability close to zero for inputs from the _category translator_ (\(\mathsf{G}\)). Like in CycleGAN, our approach applies the cycle consistency loss using the _reconstructor_, which translates the item representation back to its original category and therefore serves as \(\mathsf{F}\).
However, there are some notable differences between the two models. First, the instances drawn from the real distribution in CycleGAN are genuine images from the source domain. Therefore, the cycle consistency loss is well defined, as the original image \(x\) serves as the label for \(\mathsf{F}(\mathsf{G}(x))\). In contrast, our approach works in the feature space. That is, the "real distribution" is actually the output of the _item encoder_, which is a _learned_ network. Consequently, the outputs of the _item encoder_ also serve as labels, which might hamper its training. As mentioned before, we overcome this problem by stopping the gradients that arise from the seed item's vector.
Another difference stems from the complexity of the _category translator_. While in CycleGAN there are only two domains, in our problem the number of domains (categories) can reach hundreds. Therefore we resort to training a unified translator for all categories.
Given the unified translator it might seem like the _reconstructor_ is redundant, since the _category translator_ can also translate between any pair of categories. However, in our experiments we noticed a crucial advantage in separating these two networks. We conjecture that it is due to their different objectives. The _category translator_ aims to find good recommendations, while the _reconstructor_ wishes to recover the original latent traits of the item. Therefore, the _category translator_ may generate vectors that are similar to those of popular or high quality items, since they usually make good recommendations, while the _reconstructor_ is not bound to this need. Therefore, the _category translator_, which generates the recommendations, should preferably be distinct from the _reconstructor_.
## 6. Experimental Evaluation
We conducted an in-depth experimental evaluation of our approach. All our code is shared online for reproducibility 7.
Footnote 7: [https://fb.me/cgan_complementary_item_recommendation](https://fb.me/cgan_complementary_item_recommendation)
### Experiment Design
_Datasets & Preprocessing._ We rely on real-world data from _Amazon_(Dasas, 2017), as used in previous related works (Das, 2017; Das, 2017). From these datasets, we first extracted the items' side-information, which include image, price, and category. Furthermore, each item \(i\) in the dataset can be associated with a list of recommended items \(\{r_{i}\}\) that are _frequently bought together_. By removing items from \(\{r_{i}\}\) that belong to the same category as \(i\), we create a labeled set of complementary item pairs \(\{(s,r_{i})\}\), which we use for model training and evaluation in the experiments. We note that these pairs may be asymmetric, e.g., a laptop may be accompanied by a recommendation for a charger, but not vice versa.
We considered three subsets of the _Amazon_ datasets: "Clothing, Shoes and Jewelry" (dubbed as Clothing), "Home and Kitchen" (Home), and "Toys and Games" (Toys). These subsets were chosen because complementary items are of key importance in these domains. At the same time, they differ in key aspects like their verticals, target users, attributes that affect the notion of complementary, and number of items and categories. The main statistics of these datasets are shown in Table 1.
In terms of pre-processing, we excluded categories with less than five items to reduce noise. As a pre-trained image processing model, we used ResNet152 (He et al., 2016). If an item had multiple images, we consider only the first one. We furthermore discretized the continuous price to twenty bins using equal-depth binning. We used a random sample of 80% of the data for training and the rest for validation and testing. To mimic the cold start problem, we ensured that the seed items in the validation and test sets do not appear in the training set.
_Evaluation Procedure & Metrics._ The output of our model is a ranked list of complementary items, given a seed item. We therefore apply standard list-based accuracy metrics, namely Hit Rate (_HR@k_) and NDCG (Normalized Discounted Cumulative Gain), see (Srivastava et al., 2014). Moreover, since the labeled set is highly skewed to popular items, some models may focus on a narrow set of such popular items in their recommendations, thus leading to limited coverage and diversity. Therefore, we also report _catalog coverage_(Das, 2017), which is defined as the fraction of the items of the catalog that appear in the top-\(k\) recommendation lists for the seed items in the test set. The value of \(k\) in this measure was 10.
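As an illustration, Hit Rate and catalog coverage can be computed along the following lines (a simplified sketch that assumes a single ground-truth complementary item per test seed):

```python
def hit_rate_at_k(recs, labels, k):
    """recs: dict seed -> ranked list of item ids; labels: dict seed -> ground-truth item id."""
    hits = [labels[s] in recs[s][:k] for s in labels]
    return sum(hits) / len(hits)

def catalog_coverage(recs, catalog_size, k=10):
    recommended = {item for lst in recs.values() for item in lst[:k]}
    return len(recommended) / catalog_size
```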
Following the described problem setting, in the main evaluation protocol the desired category of the recommendations is given. To showcase the robustness of the methods, we also experiment in a setting where the recommendations can come from any category. We recall that our model requires a target category as an input. To create a recommendation list that considers all relevant categories, we go over all complementary categories of the seed item in a round-robin fashion and iteratively select the next recommendation.
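A possible sketch of this round-robin construction, where `per_category` maps each complementary category of the seed item to its ranked recommendation list:

```python
def round_robin(per_category, k=10):
    """Interleave the per-category ranked lists, skipping duplicates."""
    merged, seen = [], set()
    lists = [list(lst) for lst in per_category.values()]
    pos = [0] * len(lists)
    while len(merged) < k and any(p < len(lst) for p, lst in zip(pos, lists)):
        for i, lst in enumerate(lists):
            while pos[i] < len(lst) and lst[pos[i]] in seen:
                pos[i] += 1                    # skip items already selected
            if pos[i] < len(lst):
                merged.append(lst[pos[i]])
                seen.add(lst[pos[i]])
                pos[i] += 1
            if len(merged) == k:
                break
    return merged
```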
_Baselines._ We compare our model against these baselines.
* **Popularity** is a simple yet effective baseline, which utilizes the labeled set to count the popularity of each item. For each item \(i\) it records the number of seed items for which \(i\) is labeled as their complementary item in the labeled set.
* **DCF**(Cheng et al., 2017) is a recent neural model optimized for cold items. Since its code was not released, we implemented it based on the original paper and make it publicly available.
* **DCF-Hard** is a variant of _DCF_ that we propose here. It leverages category information to apply _hard negative mining_. Specifically, negative samples are not drawn from the entire catalog, but only from the items in the category of the labeled target item.
* **P-Companion**(Das, 2017) is another recent neural model. Since its code is not publicly available, we implemented it and publish the code in our repository as well. To make it compatible with our problem setting we made two modifications. First, we control the target categories and second, we omit collaborative information to support the cold start problem.
We carefully tuned the hyperparameters both of our model and of the baselines for each of the datasets in a manual process. The final hyperparameters can be found in the code repository.
### Results
#### 6.2.1. Main Results
Table 2 reports the main results of our experiments, where the category is given to the recommenders. We name our proposed model **ALCIR**, which stands for _Adversarial Learning for Complementary Item Recommendation_. To analyze the contribution of the individual elements of our architecture, we report results for (a) _ALCIR-Sup_, which only consists of the supervised learning component from Section 4 and (b) _ALCIR_, which corresponds to the _full_ model as described in Section 5.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Statistic & Clothing & Toys & Home \\ \hline \#items & 14,591 & 20,510 & 29,258 \\ \#item pairs & 53,375 & 112,964 & 162,497 \\ \#categories & 126 & 124 & 286 \\ \#category pairs & 4,621 & 5,734 & 18,999 \\ Avg. items per category & 115.8 & 165.4 & 102.3 \\ Max items per category & 808 & 2,558 & 796 \\ Min items per category & 12 & 16 & 11 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Dataset statistics
The results in Table 2 show that _ALCIR_ consistently outperforms all baselines in terms of the accuracy measures on all three datasets, usually with a large margin8. We additionally observe that already the supervised component of our model (_ALCIR-Supervised_) performs better than the baselines. The adversarial component is then successful in even further increasing these already strong results.
Footnote 8: We report results at additional cut-off thresholds in the code repository.
Considering the ranking of the other models, we notice that the popularity-based approach represents a baseline that can be difficult to beat. The _DCF_ model and even the improved _DCF-Hard_ model never reach the accuracy levels of the popularity-based method. _P-Companion_ works better, but still does not reach the Hit Rate values of popular-item recommendations. Only in terms of the NDCG, _P-Companion_ reaches a similar performance level. The strong performance of the popularity-based method is not surprising, though. An inspection of the datasets revealed that the ten most popular items cover between a fifth and a quarter of the labeled complementary items. Moreover, also the evaluation of a "Co-Purchase" method in (Kumar et al., 2017) showed that _P-Companion_ did not outperform such a popularity-based approach in a user-centric study9.
Footnote 9: Popularity-based recommendations were not examined in (Ball et al., 2017).
Our proposed model, in contrast, outperforms the popularity-based model consistently. We also observe that _ALCIR_ leads to high _coverage_ values, ranging from 82% to 94%. Only the _P-Companion_ method reaches an even slightly higher catalog coverage. The popularity-based method by design only recommends very popular items, leading to the lowest coverage. Interestingly, also the _DCF_ method has very low catalog coverage. The improved DCF version (_DCF-Hard_) helps to address the coverage problem to a good extent.
Table 3 finally shows the performance results when we consider all relevant categories for complementary item recommendations as described above. Naturally, across all models, the performance results for this experiment are lower compared to a case where the category is known. Nevertheless, we observe that the proposed model ALCIR is superior also in this problem setting.
#### 6.2.2. Additional Analyses.
_Impact of Data Paucity._ One main assumption when we introduced the unsupervised learning component to our model was that it would be particularly beneficial for rare categories. To validate this assumption, we conducted the following analysis to assess the relative contribution of the full _ALCIR_ method when we vary the amount of labeled data for each pair of categories. To this end, we count the number of labeled instances for each pair of complementary categories and discretize these counts to ten equally-sized bins. We order the bins by the amount of labeled data for each pair of categories in ascending order. The first bins therefore represent pairs of complementary categories for which little supervision is available in the training set.
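This equal-frequency binning can be reproduced, for instance, with pandas (a sketch; `pair_counts` is a hypothetical Series of per-category-pair counts):

```python
import pandas as pd

def bin_category_pairs(pair_counts, n_bins=10):
    """pair_counts: Series indexed by (seed category, target category) holding the number
    of labeled training pairs; returns an integer bin id (0 = rarest) for each pair."""
    return pd.qcut(pair_counts, q=n_bins, labels=False, duplicates="drop")
```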
Figure 2 shows the performance of each model for the ten bins. We note that performance measures between different bins cannot be trivially compared, because each bin holds different target categories, with a different number of items from which the recommender should select. Therefore, only the performance of the different models within the same bin and dataset can be compared, since they refer to the exact same test items. Considering this aspect, we can observe the same trend in all datasets, where the relative advantage of _ALCIR_ is higher for the rare category pairs, i.e., in
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & NDCG & HR@1 & HR@5 & HR@10 & Cov. (\%) \\ \hline \multicolumn{6}{l}{**Clothing Dataset**} \\ Popularity & 0.238 & 0.051 & 0.153 & 0.234 & 6.99 \\ DCF & 0.211 & 0.011 & 0.059 & 0.111 & 7.24 \\ DCF-Hard & 0.226 & 0.019 & 0.083 & 0.147 & 40.49 \\ P-Companion & 0.238 & 0.030 & 0.102 & 0.169 & 84.59 \\ ALCIR-Sup & 0.298 & 0.074 & 0.203 & 0.302 & **89.92** \\ ALCIR & **0.316** & **0.092** & **0.233** & **0.332** & 88.5 \\ \hline \multicolumn{6}{l}{**Toys Dataset**} \\ Popularity & 0.231 & 0.038 & 0.126 & 0.200 & 5.13 \\ DCF & 0.193 & 0.009 & 0.042 & 0.081 & 5.96 \\ DCF-Hard & 0.208 & 0.017 & 0.064 & 0.111 & 36.72 \\ P-Companion & 0.235 & 0.028 & 0.104 & 0.172 & **93.78** \\ ALCIR-Sup & 0.297 & 0.073 & 0.205 & 0.303 & 90.46 \\ ALCIR & **0.308** & **0.078** & **0.224** & **0.330** & 82.62 \\ \hline \multicolumn{6}{l}{**Home Dataset**} \\ Popularity & 0.251 & 0.048 & 0.160 & 0.255 & 8.2 \\ DCF & 0.211 & 0.011 & 0.051 & 0.102 & 8.32 \\ DCF-Hard & 0.221 & 0.014 & 0.046 & 0.077 & 12.82 \\ P-Companion & 0.245 & 0.032 & 0.111 & 0.187 & **94.45** \\ ALCIR-Sup & 0.296 & 0.068 & 0.197 & 0.293 & 93.44 \\ ALCIR & **0.304** & **0.077** & **0.210** & **0.312** & 94.38 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Category-aware recommendation performance
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & NDCG & HR@1 & HR@5 & HR@10 & Cov. (\%) \\ \hline \multicolumn{6}{l}{**Clothing Dataset**} \\ Popularity & 0.099 & 0.001 & 0.005 & 0.007 & 0.06 \\ DCF & 0.111 & 0.000 & 0.001 & 0.002 & 0.09 \\ DCF-Hard & 0.118 & 0.001 & 0.003 & 0.007 & 1.82 \\ P-Companion & 0.111 & 0.000 & 0.002 & 0.003 & 0.62 \\ ALCIR-Sup & 0.150 & 0.009 & 0.036 & 0.060 & 59.36 \\ ALCIR & **0.170** & **0.012** & **0.060** & **0.098** & **60.55** \\ \hline \multicolumn{6}{l}{**Toys Dataset**} \\ Popularity & 0.102 & 0.001 & 0.003 & 0.005 & 0.04 \\ DCF & 0.124 & 0.000 & 0.001 & 0.002 & 0.06 \\ DCF-Hard & 0.148 & 0.003 & 0.010 & 0.016 & 5.67 \\ P-Companion & 0.123 & 0.001 & 0.001 & 0.002 & 5.12 \\ ALCIR-Sup & 0.172 & **0.010** & 0.044 & 0.021 & **64.56** \\ ALCIR & **0.184** & 0.009 & **0.046** & **0.071** & 56.03 \\ \hline \hline \multicolumn{6}{l}{**Home Dataset**} \\ Popularity & 0.093 & 0.000 & 0.001 & 0.002 & 0.03 \\ DCF & 0.117 & 0.000 & 0.001 & 0.002 & 0.05 \\ DCF-Hard & 0.117 & 0.000 & 0.002 & 0.003 & 0.13 \\ P-Companion & 0.117 & 0.000 & 0.001 & 0.001 & 1.61 \\ ALCIR-Sup & 0.143 & 0.005 & 0.021 & 0.033 & 53.42 \\ ALCIR & **0.150** & **0.008** & **0.030** & **0.049** & **55.53** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Recommendation performance w/o target category
the left parts of the figures. In contrast, the advantage of the full model becomes smaller for the very popular category pairs, and this effect is particularly pronounced for the Toys and Home datasets. Overall, the analysis confirms our assumption regarding the usefulness of the full model in particular for rare categories.
_Ablation Study._ Besides consisting of a supervised and an unsupervised component, one central feature of our architecture is that it makes use of a number of specific loss functions (triplet loss, cycle consistency loss and classifier loss), all of which are aimed to increase the performance of the model. To validate that these architecture elements contribute to the overall model performance, we ran an ablation study to assess the performance when only some of the components are utilized. Table 4 shows the outcomes of this analysis in absolute numbers and relative to the performance of the complete model (shown in parentheses). The results provide evidence that indeed all components contribute to the performance of the model. Most noteworthy is that a model that does not utilize labeled data, but only the unsupervised losses of the classifier and the cycle consistency performs the worst. This, however, comes as no surprise, given the well-known importance of labeled data.
## 7. Future Work
In this work we have proposed a novel semi-supervised approach for the highly relevant problem of complementary item recommendation, and an in-depth empirical evaluation clearly demonstrates the benefits of the approach. Our insights point to a number of future research directions. Our work focused on complementary item recommendations for cold items, and we assume that existing data (e.g., about co-purchases) can be used for the warm items. In future work, we plan to extend our model to also support warm items, using the same framework. That way, the input features for the item encoder would also include collaborative data, e.g., by applying the method suggested in (Beng et al., 2017). In addition to such extensions and continuing related research in (Zhu et al., 2018; Wang et al., 2019), other promising approaches to further improve the effectiveness of the model could lie in the _personalization_ of the complementary item recommendations and in considering aspects of item _quality_. Finally, our problem setting can be thought of as a special case of domain adaptation, where we transfer representations from one category to another. An interesting future work would be to extend our method to other recommendation scenarios, like context-aware recommendations, where we generate item representations in different contexts.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Method & NDCG & HR@1 & HR@5 & HR@10 \\ \hline \multicolumn{4}{l}{**Clothing Dataset**} \\ Classifier+Cycle & 0.21 & 0.015 & 0.059 & 0.114 \\ & (-33.5\%) & (-83.9\%) & (-74.7\%) & (-65.7\%) \\ ALCIR-Sup & 0.298 & 0.074 & 0.203 & 0.302 \\ & (-5.7\%) & (-19.4\%) & (-13.1\%) & (-8.9\%) \\ Triplet+Cycle & 0.311 & 0.087 & 0.226 & 0.319 \\ & (-1.8\%) & (-5.6\%) & (-3.1\%) & (-3.8\%) \\ Triplet+Classifier & 0.305 & 0.082 & 0.213 & 0.306 \\ & (-3.5\%) & (-11.3\%) & (-8.8\%) & (-7.5\%) \\ ALCIR & 0.316 & 0.092 & 0.233 & 0.332 \\ \hline \multicolumn{4}{l}{**Toys Dataset**} \\ Classifier+Cycle & 0.207 & 0.016 & 0.07 & 0.124 \\ & (-32.9\%) & (-79.5\%) & (-68.7\%) & (-62.5\%) \\ ALCIR-Sup & 0.297 & 0.073 & 0.205 & 0.303 \\ & (-3.7\%) & (-6.4\%) & (-8.7\%) & (-8.2\%) \\ Triplet+Cycle & 0.294 & 0.072 & 0.205 & 0.303 \\ & (-4.4\%) & (-6.8\%) & (-8.3\%) & (-8.4\%) \\ Triplet+Classifier & 0.298 & 0.073 & 0.209 & 0.309 \\ & (-3.3\%) & (-6.6\%) & (-6.6\%) & (-6.5\%) \\ ALCIR & 0.308 & 0.078 & 0.224 & 0.330 \\ \hline \multicolumn{4}{l}{**Home Dataset**} \\ Classifier+Cycle & 0.218 & 0.014 & 0.068 & 0.127 \\ & (-28.1\%) & (-81.2\%) & (-67.5\%) & (-59.3\%) \\ ALCIR-Sup & 0.296 & 0.068 & 0.197 & 0.293 \\ & (-2.6\%) & (-11.4\%) & (-6.1\%) & (-6.1\%) \\ Triplet+Cycle & 0.298 & 0.069 & 0.201 & 0.303 \\ & (-2\%) & (-9.9\%) & (-4.1\%) & (-2.9\%) \\ Triplet+Classifier & 0.297 & 0.069 & 0.199 & 0.297 \\ & (-2.1\%) & (-10.7\%) & (-4.9\%) & (-4.7\%) \\ ALCIR & 0.304 & 0.077 & 0.210 & 0.312 \\ \hline \end{tabular}
\end{table}
Table 4. Ablation study for category aware recommendation
Figure 2. Performance with respect to the amount of labeled data |
2305.08060 | Two is Better Than One: Digital Siblings to Improve Autonomous Driving
Testing | Simulation-based testing represents an important step to ensure the
reliability of autonomous driving software. In practice, when companies rely on
third-party general-purpose simulators, either for in-house or outsourced
testing, the generalizability of testing results to real autonomous vehicles is
at stake. In this paper, we enhance simulation-based testing by introducing the
notion of digital siblings, a multi-simulator approach that tests a given
autonomous vehicle on multiple general-purpose simulators built with different
technologies, that operate collectively as an ensemble in the testing process.
We exemplify our approach on a case study focused on testing the lane-keeping
component of an autonomous vehicle. We use two open-source simulators as
digital siblings, and we empirically compare such a multi-simulator approach
against a digital twin of a physical scaled autonomous vehicle on a large set
of test cases. Our approach requires generating and running test cases for each
individual simulator, in the form of sequences of road points. Then, test cases
are migrated between simulators, using feature maps to characterize the
exercised driving conditions. Finally, the joint predicted failure probability
is computed, and a failure is reported only in cases of agreement among the
siblings.
Our empirical evaluation shows that the ensemble failure predictor by the
digital siblings is superior to each individual simulator at predicting the
failures of the digital twin. We discuss the findings of our case study and
detail how our approach can help researchers interested in automated testing of
autonomous driving software. | Matteo Biagiola, Andrea Stocco, Vincenzo Riccio, Paolo Tonella | 2023-05-14T04:10:56Z | http://arxiv.org/abs/2305.08060v3 | # Two is Better Than One: Digital Siblings to Improve Autonomous Driving Testing
###### Abstract
Simulation-based testing represents an important step to ensure the reliability of autonomous driving software. In practice, when companies rely on third-party general-purpose simulators, either for in-house or outsourced testing, the generalizability of testing results to real autonomous vehicles is at stake.
In this paper, we strengthen simulation-based testing by introducing the notion of _digital siblings_, a novel framework in which the AV is tested on multiple general-purpose simulators, built with different technologies. First, test cases are automatically generated for each individual simulator. Then, tests are migrated between simulators, using feature maps to characterize the exercised driving conditions. Finally, the joint predicted failure probability is computed, and a failure is reported only in cases of agreement among the siblings.
We implemented our framework using two open-source simulators and we empirically compared it against a digital twin of a physical scaled autonomous vehicle on a large set of test cases. Our study shows that the ensemble failure predictor by the digital siblings is superior to each individual simulator at predicting the failures of the digital twin. We discuss several ways in which our framework can help researchers interested in automated testing of autonomous driving software.
Keywords:AI Testing; Self-Driving Cars; Simulation-Based Testing; Digital Twins; Deep Neural Networks; Autonomous Vehicles.
## 1 Introduction
The development of autonomous vehicles (AVs) has received great attention in the last decade. As of 2020, more than $150 billion has been invested in AVs, a sum that is expected to double in the near future [13].
AVs typically integrate multiple advanced driver-assistance systems (e.g., for adaptive cruise control, parking assistance, and lane-keeping) into a unified control unit, using a perception-plan-execution strategy [62]. Advanced driver-assistance systems based on Deep Neural Networks (DNNs) are trained on labeled input-output samples of real-world driving data provided by the vehicle's sensors to learn human-like driving actions [22].
Before deployment on public roads, AVs are thoroughly tested in the field, on private test tracks [8; 10; 14; 44]. While essential for fully assessing the dependability of AVs on the road, field testing has known limitations in terms of cost, safety and adequacy [44]. To overcome these limitations, driving simulators are used to generate several real-life edge case scenarios that are unlikely to be experienced during field testing, or that are dangerous to reproduce for human operators [10; 30]. Simulation-based testing represents a consolidated testing practice, being more affordable than field testing, yet capable of exposing many bugs before deployment [8; 10; 14; 44].
In this paper, we distinguish two main categories of driving simulators, namely digital twins (DT) and general-purpose simulators (GPS).
DT provide a software replica of _specific_ real vehicles, that are digitally recreated in terms of appearance, aerodynamics, and physical interactions with the environment [10]. In the context of mixed-reality testing approaches [48; 52], such as Hardware-in-the-Loop and Vehicle-in-the-Loop, the digital twin is connected to physical AV components to further increase the degree of fidelity. In this paper, we consider simulation-based testing where the digital twin is a software replica of a specific real vehicle. Developing a DT is prohibitively expensive [31; 55] and can take up to five years [60]. Hence, it remains an exclusive prerogative of big companies such as Uber (Waabi World [58]), Waymo (Simulation City [59]) or Wayve (Infinity Simulator [60]).
GPS are generally designed without the need to faithfully reproduce a specific vehicle or testing scenario, as they rather offer generic APIs to run one or more AVs on virtual road tracks. GPS such as Siemens PreScan [42] or ESI Pro-SiVIC [23] offer a more affordable alternative to the expensive DT development, and are widely used for outsourcing testing tasks to third-party companies [32], for which access to, or customizations of the original DT are not feasible for each individual vehicle.
Despite their affordability, GPS can be affected by a _fidelity_ and _reality_ gap, i.e., the simulated experience may not successfully transfer from the GPS to the reference DT and eventually to the real AV. These discrepancies can lead to distrust in simulation-based testing, as reported by recent industrial surveys [1; 21].
While comparative works of GPS exist in the literature [28; 39], cross-simulator testing for AVs is a relatively unexplored avenue for research. Only a recent study [10] investigates the use of multiple GPS for testing a pedestrian vision detection system. The study compares a large set of test scenarios on both PreScan [42] and Pro-SiVIC [23] and reports inconsistent results in terms of safety violations and behaviors across these simulators. Consequently, using a single
simulator approach for AV testing might be unreliable, as the testing results could be highly dependent on the chosen GPS.
In this paper, we target the fidelity gap between GPS and DT by proposing a multi-simulator approach for AV testing called _digital siblings_ (DSS). Our framework leverages automated test generation and proposes a novel cross-simulator feature map analysis that combines the outcome of several simulator-specific test generators into a unified view. We use DSS as a surrogate model of the behavior of a DT. Our intuition is that agreement among multiple GPS will increase the confidence in observing the same behavior in the DT. On the other hand, in the presence of disagreements, DSS can mitigate or even eliminate the risk of choosing the worst GPS, which would give poor simulation testing results.
In detail, our multi-simulator approach consists of generating test cases (i.e., driving scenarios) with an automated test generation tool and of using feature maps to group failures by similarity, so that the same failures are not reported multiple times. To account for the specificities of each GPS, we execute test generation separately for each sibling. Then, we migrate the tests generated for one sibling to the other sibling. Finally, we merge failing and non-failing executions based on similarity of features and estimate an overall joint failure probability.
In our study we use DSS to test a state-of-the-art DNN lane-keeping model--Nvidia DAVE-2 [9]. We consider as siblings two open-source simulators, namely Udacity [50] and BeamNG [6], widely used in previous studies to test lane-keeping software [20; 26; 38; 47; 67]. As DT, we adopt an open-source framework [49] used in previous research [44; 56; 57; 64] featuring a virtual replica of a 1:16 scale electric AV. We evaluate DSS with both _offline_ and _online_ testing [25], i.e., DAVE-2 is tested both w.r.t. the accuracy of its predictions on labeled individual inputs, and at the system-level for its capability to control the vehicle on several hundreds automatically-generated roads.
Our study shows that, at the model-level, the distribution of prediction errors of DSS is statistically indistinguishable from that of the DT. At the system-level, the failure probability of DSS highly correlates with the true failure probability of the DT. More notably, the quality of driving measured in the DSS can predict the true failure probability of the DT, which suggests that we can use the digital sibling framework to possibly anticipate the failures of the real-world AV more reliably than with a single GPS. A practical implication of our findings for software engineers is the usage of digital siblings when adopting AV testing techniques, to increase the level of fidelity of the observed behaviors and failures. The same recommendation holds for AV testing researchers.
Our paper makes the following contributions:
* **Digital Siblings.** A novel approach to AV testing that combines the outcome of general-purpose driving simulators to approximate a digital twin. This is the first solution that leverages a multi-simulator approach to overcome the simulation fidelity gap.
* **Evaluation.** An empirical study showing that the digital siblings are effective at predicting the failures of a digital twin for a physical scaled vehicle in the lane-keeping task.
## 2 Approach
The goal of our approach is to use digital siblings to test the driving component of an AV. The key intuition is that multiple GPS can better approximate the driving behavior of the AV run in a DT, as opposed to a single-simulator approach. Figure 1 (top) shows an overview of our approach in which two digital siblings, namely DS\({}_{1}\) and DS\({}_{2}\), are used to test the behavior of a driving model under test \(M\) (e.g., an end-to-end DNN for lane-keeping).
In the first phase, \(M\) is either trained or fine-tuned (step 1) to run on both DS\({}_{1}\) and DS\({}_{2}\), as well as on the target platform (i.e., DT). A test generation phase (step 2) is executed for each digital sibling, generating two _feature maps_ \(FM_{DS_{1}}\) and \(FM_{DS_{2}}\). Feature maps group together test cases with similar feature combination values to reduce redundancy and summarize the AV behavior for unique feature combinations [67; 66]. The value in a feature map cell (displayed in a colored heat scale) represents the average test case outcome, i.e., the behavioral information about the execution of \(M\) in each test scenario (e.g., the failure probability). For each simulator, the test generation algorithm produces test scenarios that are executed by \(M\) to assess its driving behavior under many different circumstances. Hence, the output of test generation is simulator and model dependent, and the feature maps of DS\({}_{1}\) (\(FM_{DS_{1}}\)) and DS\({}_{2}\) (\(FM_{DS_{2}}\)) can be different.
The next step of our approach (step 3) requires migrating the test cases across simulators. In detail, the test cases in \(FM_{DS_{1}}\) are executed on DS\({}_{2}\), resulting in the feature map \(\overline{FM}_{DS_{1}}\). Similarly, the test cases in \(FM_{DS_{2}}\) are executed on DS\({}_{1}\), resulting in the feature map \(\overline{FM}_{DS_{2}}\). Then, for both DS\({}_{1}\) and DS\({}_{2}\), we compute the _union_ of the two feature maps, obtaining \(FM_{U_{1}}\) for DS\({}_{1}\) and \(FM_{U_{2}}\) for DS\({}_{2}\). Both maps contain the same set of test cases, although executed on two different simulators. The final output of the digital siblings (step 4) is obtained by _merging_ \(FM_{U_{1}}\) and \(FM_{U_{2}}\) into the final feature map \(FM_{DSS}\).

Figure 1: Overview of our approach and its usage.
Step 5 assesses the correlation of the \(FM_{DSS}\) map with the \(FM_{DT}\) map, to evaluate the predictive capability of the digital siblings framework. Figure 1 (bottom) shows an overview of the empirical evaluation of our approach (detailed later, in Section 3). All the test cases in the final feature map \(FM_{DSS}\) are executed (i.e., migrated) on DT, to obtain the ground truth feature map \(FM_{DT}\).
### Test Scenarios
#### 2.1.1 Representation
We adopted an abstract representation of the road in each driving simulator so that only a sequence of road control points is needed when creating a new road in the driving scene. We follow the representation given by Riccio and Tonella [38] who defined a two-lane road using a series of _control points_ (displayed as red stars in Figure 2). The control points are interpolated using _Catmull-Rom_ splines [5], giving the road its final shape (yellow solid line).
Figure 2 shows the visualization of a test scenario generated at step 2. Specifically, the road is defined using nine control points whereas the Catmull-Rom spline only goes through seven of them. This is because a spline segment (e.g., \(P_{2}-P_{3}\)) is always defined by four control points (e.g., \(P_{1}\), \(P_{2}\), \(P_{3}\), \(P_{4}\)). Since two of them are on either side of the endpoints of the spline segment (e.g., \(P_{1}\) and \(P_{4}\)), the spline cannot traverse the extreme endpoints (e.g., \(P_{1}\) and \(P_{9}\)). Hence, \(P_{2}\) defines the start point of the road (depicted as a black triangle) whereas \(P_{8}\) defines the end point (depicted as a black square).
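To make the road representation concrete, the sketch below interpolates a sequence of control points with uniform Catmull-Rom splines into a dense road polyline. The function names, the uniform parameterization, and the number of samples per segment are illustrative choices, not part of the original implementation.

```python
import numpy as np

def catmull_rom_segment(p0, p1, p2, p3, samples=20):
    """Interpolate one uniform Catmull-Rom segment between p1 and p2."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def interpolate_road(control_points, samples=20):
    """Turn the control points into a road polyline; the spline only spans
    the segments between the 2nd and the (n-1)-th control point."""
    pts = np.asarray(control_points, dtype=float)
    segments = [catmull_rom_segment(pts[i], pts[i + 1], pts[i + 2], pts[i + 3], samples)
                for i in range(len(pts) - 3)]
    return np.vstack(segments)
```

With nine control points, this yields six drivable segments between \(P_{2}\) and \(P_{8}\), consistent with the example in Figure 2.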
#### 2.1.2 Implementation
The default initial state of each test case involves positioning the vehicle in the first drivable control point (i.e., \(P_{2}\) in Figure 2), at the center of the right lane following the road orientation.
We harmonized the 3D rendering of each simulator such that the driving scenarios have the same look and feel: a two-lane asphalt road, delimited by two solid white lines on each side, with the two driving lanes separated by a single solid yellow line. The road is placed on top of a green plane representing grass. Harmonization of the driving scenarios across simulators ensures that geometrical features are preserved for the collected driving images and that any color transformation applied to them during training preprocessing remains applicable [9].

Figure 2: Example of test scenario for a lane-keeping autonomous driving system.
#### 2.1.3 Validity and Oracle
After interpolation, a road is deemed _valid_ if it respects the following constraints: (1) the start and end points are different; (2) the road is contained within a squared bounding box of a predefined size (specifically 250 \(\times\) 250); and, (3) there are no intersections.
A test case is deemed _successful_ when the vehicle drives within the right lane until the last road control point (e.g., \(P_{8}\) in Figure 2). On the contrary, a test case _failure_ occurs when the vehicle drives _out of bound_ (OOB).
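A minimal sketch of the validity check and of the pass/fail oracle is shown below. It assumes the interpolated road is available as a 2D polyline and uses Shapely's `is_simple` for the no-intersection constraint; the function names are illustrative, while the bounding-box size follows the value stated above.

```python
from shapely.geometry import LineString

BBOX_SIDE = 250  # side of the squared bounding box, as stated above

def is_valid_road(road_points):
    """Validity: distinct endpoints, fits the bounding box, no self-intersections."""
    if tuple(road_points[0]) == tuple(road_points[-1]):
        return False
    xs, ys = zip(*road_points)
    if max(xs) - min(xs) > BBOX_SIDE or max(ys) - min(ys) > BBOX_SIDE:
        return False
    return LineString(road_points).is_simple

def is_failure(lateral_distances):
    """Oracle: the test fails if the vehicle ever drives out of bound,
    i.e., the recorded lateral distance becomes negative."""
    return min(lateral_distances) < 0
```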
### Creating/Fine-Tuning the Driving Model
For the creation or fine-tuning of a self-driving model (step ), a labeled dataset of driving scenes is needed.
#### 2.2.1 Data Collection
We automate labeled data collection by resorting to _autopilots_ that have _global knowledge_ of the driving scenario, such as the detailed road geometry and the precise vehicle position. In particular, in each simulator, at each step of the simulation, the steering angle of the autopilot is computed by a Proportional-Integral-Derivative (PID) controller [18]. The _PID_ controller computes the error between a reference value of a certain variable and its current measured value. Then, it adjusts the controlled system to reach the reference value using three terms, namely _Proportional_, _Integral_ and _Derivative_. In the context of self-driving, and in particular of lane-keeping, the error to minimize is the _lateral position_ (LP), which measures the distance between the center of the vehicle and the center of the lane [45] (in particular, the lateral position is zero when the vehicle drives at the center of the lane). Given the LP value, the PID controls the steering of the vehicle with the following formula:
\[\textit{steering}=K_{P}\cdot\mathrm{LP}+K_{D}\cdot\textit{diff}_{\mathrm{LP} }+K_{I}\cdot\textit{total}_{\mathrm{LP}} \tag{1}\]
Equation 1 states that the proportional constant \(K_{P}\) acts on the raw error while the derivative constant \(K_{D}\) controls the difference between two consecutive errors and the integral constant \(K_{I}\) considers the total sum of the errors during the whole simulation until the current timestep. Finally, the steering value is clipped in the interval \([-1,+1]\), where \(-1\) means steering all the way to the left and \(+1\) to the right (\(0\) means the vehicle goes straight as no steering is applied). The steering values are normalized in order to account for the different simulators that we use in our approach.
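A stateful transcription of Equation 1 could look as follows; the gain values are placeholders, since the constants used by the autopilots are not reported in the text.

```python
class LateralPID:
    """PID steering controller driven by the lateral position (LP), as in Eq. 1."""

    def __init__(self, kp=1.0, ki=0.01, kd=0.1):  # placeholder gains
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_lp = 0.0
        self.total_lp = 0.0

    def steering(self, lp):
        diff_lp = lp - self.prev_lp        # difference between consecutive errors
        self.total_lp += lp                # running sum of the errors
        self.prev_lp = lp
        raw = self.kp * lp + self.kd * diff_lp + self.ki * self.total_lp
        return max(-1.0, min(1.0, raw))    # clip to [-1, +1]
```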
The autopilot produces a steering angle label for each image which is used to train the driving model. We aligned the frame rates of the different simulators at 20 _fps_ such that each simulator autopilot collects a comparable number of labeled images. The speed of the vehicle, both for the autopilot and \(M\), is controlled by the throttle via a linear interpolation between the minimum speed and maximum speed so that the car decreases the speed when the steering angle increases (e.g., in a curve). The following formula computes the throttle based on the speed of the vehicle and the steering:
\[\textit{ throttle}=1-\textit{steering}^{2}-\left(\frac{\textit{speed}}{K} \right)^{2} \tag{2}\]
where \(K\) is set to a predefined low value \(L\) when the measured _speed_ is greater than a given maximum speed threshold, to enforce strong deceleration; viceversa, \(K\) is set to a high value \(H\) when the measured _speed_ is lower than or equal to the maximum speed threshold, to reduce the deceleration component. From Equation 2, we can see that the throttle is close to 1 (the highest possible value) when the vehicle does not steer (\(\textit{steering}=0\)) and the _speed_ is substantially lower than the maximum allowed speed (in this case, \(K=H\)); when one of the two conditions is false the throttle decreases, because of either deceleration component. Similarly to the steering angle values, we clip the throttle value in the interval \([0,1]\).
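Equation 2 can be sketched as below. The constants \(L\) and \(H\) are not reported in the text, so the values used here are purely illustrative; the maximum speed follows the experimental setting of 30 km/h.

```python
MAX_SPEED = 30.0             # km/h, maximum speed used in the experiments
K_LOW, K_HIGH = 10.0, 100.0  # illustrative stand-ins for the constants L and H

def throttle(steering, speed):
    """Throttle from Eq. 2: decelerate when steering hard or driving too fast."""
    k = K_LOW if speed > MAX_SPEED else K_HIGH
    value = 1.0 - steering ** 2 - (speed / k) ** 2
    return max(0.0, min(1.0, value))  # clip to [0, 1]
```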
#### 2.2.2 Model Fine-Tuning via Hybrid Training
The next step involves training the model \(M\) using all simulators and the data collected in step. Alternatively, if an existing trained model \(M\) is available for the target DT, our approach requires _fine-tuning_ it for all digital siblings. In both scenarios, we use _hybrid_ training based on gradient descent [12].
Hybrid training requires combining the datasets collected for different simulators/platforms into a unified dataset, making sure that each dataset is equally represented (i.e., the unified dataset contains the same number of samples from each simulator/platform-specific dataset). Then, the unified dataset is split into training and validation sets (e.g., using the standard 80/20 ratio). The training pipeline is designed in such a way that each image, of dimensions 320\(\times\)160, is processed according to the simulator/platform it was taken from. For example, images may be cropped differently. Depending on the vehicle size, the front part of the car may or may not be visible in the frame captured by the camera. Another example of simulator-specific adaptation is the cropping of the above-horizon portion of the image, unnecessary for the lane-keeping task. After cropping, each image is resized to the size required for training, i.e., 320\(\times\)160.
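The per-simulator preprocessing can be sketched as follows; the crop margins and dictionary keys are hypothetical, since the exact values depend on each simulator's camera placement.

```python
import cv2

# hypothetical crop margins (pixels removed from the top and bottom of the frame)
CROP = {"beamng": (60, 0), "udacity": (60, 25), "donkeycar": (40, 0)}

def preprocess(image, simulator):
    """Simulator-specific cropping followed by resizing to the training resolution."""
    top, bottom = CROP[simulator]
    height = image.shape[0]
    cropped = image[top:height - bottom] if bottom else image[top:]
    return cv2.resize(cropped, (320, 160))  # width x height expected for training
```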
The training pipeline can be further configured to use plain synthetic virtual images from the driving simulators, or pseudo-real images resembling real-world driving images. The first configuration represents the standard practice in AV testing. In the second configuration, the reality gap due to low photo-realism is reduced by an _image-to-image_ transformation that translates the driving images of each simulator into images similar to those captured by the real-world AV during on-road driving. This practice was proposed in the literature [44] and in industry [7] to increase the transferability of the driving model tested in simulation to the real world.
More specifically, this second configuration requires training a CycleGAN model for each driving simulator [65]. CycleGAN entails two _generators_, one that learns how to translate images from _simulated_ to _real_ world (sim2real) and the other that learns the opposite transformation (real2sim). During training of the model, we use the sim2real generator trained for the respective simulator to translate the corresponding training set images. During testing, the sim2real generator translates images on the fly, during the execution of the simulation. We refer to the translated images as _pseudo_-real, since they are the output of a generative process designed to resemble real images.
Figure 3 shows an example of image translation with a CycleGAN trained for each simulator. The corresponding networks translate an image of a road curve taken in the simulated domain (left) to an image belonging to the real domain (right)--the test track of a small scale physical AV. During training and testing of the driving model in a given simulator, we use the generator of the CycleGAN trained for such simulator.
In our evaluation (Section 3), we consider both configurations of our approach, i.e., training using either simulator or pseudo-real images. We refer to the model trained on simulator images as \(M_{S}\), and the model trained on pseudo-real images as \(M_{R}\).
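In the pseudo-real configuration, each inference step of \(M_{R}\) conceptually performs two operations per frame, as sketched below; the model objects and their `predict` interfaces are assumptions made for illustration.

```python
def pseudo_real_step(sim_frame, sim2real_generator, driving_model):
    """Translate one simulator frame to the pseudo-real domain, then predict steering."""
    pseudo_real = sim2real_generator.predict(sim_frame[None])[0]   # sim2real translation
    steering = float(driving_model.predict(pseudo_real[None])[0])  # lane-keeping prediction
    return steering
```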
### Test Generation
While our approach is compatible with any test generation algorithm, in this paper we adopt the _MapElites_[34] algorithm implemented in DeepHyperion [67], because the output of DeepHyperion is projected to a feature map that characterizes each generated test scenario according to its features. In other words, test cases having equivalent features (e.g., 3 turns and maximum curvature of 0.2) are grouped into the same _cell_ of the feature map.
Figure 4 shows an example of feature map generated by DeepHyperion. The roads (i.e., the test cases) in the map are characterized by two structural features, i.e., the _number of turns_ in the road (\(x\) axis) and the _curvature_ of the road (\(y\) axis), the latter defined as the minimum radius of the circles going through each sequence of three consecutive road points [67]. Such features have been used in previous work and have been shown to be effective at characterizing the search space of road generators [67]. Characterizing a test case based on its structural features, i.e., only based on the properties of the road, allows us to identify unique failure scenarios, i.e., failure scenarios with distinctive road properties.
During test generation, the test cases are distributed in the map according to their features. The _value_ of each cell is influenced by the behavior of \(M\) when driving on the roads pertaining to a cell. The minimum _lateral distance_ recorded by the simulator is used by DeepHyperion as a _fitness_ of the generated test case. The lateral distance is the opposite of the lateral position, i.e., it is maximum when the vehicle drives at the center of the lane and it decreases as the vehicle approaches the road side. In particular, it is negative when the model misbehaves (i.e., the vehicle goes out of bound). In Figure 4 the two dashed-encircled cells point out two failure cells for \(M\) (i.e., cells containing roads with negative fitness).

Figure 3: Example of translation with the CycleGAN for the three simulators

Figure 4: Example of feature map generated by DeepHyperion. The two axes represent structural features of the roads.

Algorithm 1 shows the pseudocode of the DeepHyperion algorithm. It takes as input the driving model under test \(M\), the simulator instance \(S\) and two hyperparameters, i.e., the population size \(P_{s}\) and the number of iterations \(N\) the search is allowed to run, i.e., the budget of the algorithm. The algorithm starts by initializing an empty feature map and population (Lines 1-2). Then, the _while_ loop at Lines 4-9 fills the initial population by randomly generating an individual (Line 5) and executing it to collect its fitness value \(f\) (Line 6).
The assignment to the feature map (Line 7) is done by the procedure placeIndividualMap based on the feature values of the individual \(t_{c}\) (to determine the coordinates of the target cell) and its fitness value. If the target cell is empty, the individual is placed in the cell. If the cell is non-empty (i.e., another test case was already generated for that cell), a _local competition_ based on the value of the fitness takes place. If the fitness of the individual in the cell is greater than the fitness of the candidate individual, the individual in the cell gets replaced with the candidate individual. Otherwise, no replacement is carried out, which also holds if the individual in the cell already has a negative fitness. The selection function ensures that the search space of the features is explored at large, while the local competition on the individual cells keeps only the lowest performing individuals (i.e., potential misbehaviours) at the end of the generation in order to guide the search towards misbehaviors with unique feature values.
The _while_ loop at Lines 11-16 evolves the initial population of individuals. First, an individual is selected (Line 12) and mutated (Line 13), i.e., the control points of the road are changed in order to form a new individual \(\hat{t}_{c}\) with different features. Such individual is then executed (Line 14) and placed in the map (Line 15). The algorithm terminates after a number \(N\) of iterations (Line 16).
Algorithm 1 returns a feature map with a single individual for each cell, i.e., the one with the lowest fitness (Line 17). In order to further explore the search space, we run DeepHyperion multiple times for each digital sibling to generate multiple feature maps. Then, we combine such maps by considering the _bounds_ of each feature map axis in all the runs (i.e., minimum and maximum value) and placing each generated individual in the combined map, whose bounds are the lowest (resp. highest) bound values across maps. In this way, there are potentially multiple individuals in each cell and the value of a cell represents the metric of interest averaged over all individuals in that cell (see \(FM_{DS_{1}}\) and \(FM_{DS_{2}}\) in Figure 1). For instance, considering the failure probability, the value of a cell represents the number of times the model under test failed over the number of all individuals in the cell (a failure occurs when the fitness of an individual is negative).
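The local competition performed by placeIndividualMap and the per-cell failure probability of the combined maps can be sketched as follows; the cell coordinates, attribute names, and rounding of the curvature feature are illustrative.

```python
def place_individual(feature_map, individual):
    """Keep, per cell, the individual with the lowest fitness (potential misbehavior)."""
    cell = (individual["num_turns"], round(individual["curvature"], 2))
    incumbent = feature_map.get(cell)
    if incumbent is None or individual["fitness"] < incumbent["fitness"]:
        feature_map[cell] = individual

def cell_failure_probability(individuals):
    """Failure probability of a combined-map cell: failing tests over all tests."""
    failures = sum(1 for ind in individuals if ind["fitness"] < 0)
    return failures / len(individuals)
```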
### Migration and Union
The test generation step produces two feature maps \(FM_{DS_{1}}\) and \(FM_{DS_{2}}\), for DS\({}_{1}\) and DS\({}_{2}\), respectively. The next step of our approach (i.e., step 3, see Figure 1) consists of _migrating_ the test cases in \(FM_{DS_{1}}\) to DS\({}_{2}\) (producing \(\overline{FM}_{DS_{1}}\)) and vice versa (producing \(\overline{FM}_{DS_{2}}\)). This operation consists of instantiating the abstract (control-point-based) road representation of the test case being migrated, such that it respects the dimensionality constraints of the target simulator and can be supplied as input to it.
After migration, for both DS\({}_{1}\) and DS\({}_{2}\), we consider the _union_ of their maps. We consider the bounds of each feature in the two maps and we place the respective
test cases in a new unified map according to their coordinates, producing the map \(FM_{U_{1}}\) for DS\({}_{1}\) (i.e., \(FM_{DS_{1}}+\overline{FM}_{DS_{2}}\)) and the map \(FM_{U_{2}}\) for DS\({}_{2}\) (i.e., \(FM_{DS_{2}}+\overline{FM}_{DS_{1}}\)). Hence, the two maps contain the same tests, which fill the same cells at the same coordinates.
The value of each cell in the union maps \(FM_{U_{1}}\), \(FM_{U_{2}}\) is recomputed from the individuals assigned to them. For the failure probability, if a given cell in \(FM_{DS_{1}}\) has \(n_{1}/N_{1}\) failing individuals, while the corresponding cell in \(\overline{FM}_{DS_{2}}\) has \(n_{2}/N_{2}\) failing individuals, the failure probability value of the cell in the union map \(FM_{U_{1}}\) will be \((n_{1}+n_{2})/(N_{1}+N_{2})\). When a quality of driving metric is computed, instead of a failure probability, the union map will contain the average of the respective quality of driving metrics: \(qm=(qm_{1}+qm_{2})/2\), where \(qm_{1}\), \(qm_{2}\) are the quality of driving metrics found in the same cell in the two feature maps being united (\(FM_{DS_{1}},\overline{FM}_{DS_{2}}\), or \(FM_{DS_{2}},\overline{FM}_{DS_{1}}\)), while \(qm\) is the resulting quality of driving metric, in the union map (\(FM_{U_{1}}\) or \(FM_{U_{2}}\)).
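The union of two corresponding cells is a direct transcription of the formulas above; the function names are illustrative.

```python
def union_failure_probability(n1, N1, n2, N2):
    """Union cell: pooled failing tests over pooled executed tests."""
    return (n1 + n2) / (N1 + N2)

def union_quality(qm1, qm2):
    """Union cell for a quality-of-driving metric: average of the two values."""
    return (qm1 + qm2) / 2.0
```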
### Merge
The final step of the approach (i.e., step 4 in Figure 1) requires merging the two union maps \(FM_{U_{1}}\) and \(FM_{U_{2}}\) into \(FM_{DSS}\). The objective of the merge operation is to combine the testing output of the two digital siblings. Since we aim to use the digital siblings to approximate the behavior of \(M\) on DT and predict its failures, the merge operator privileges _agreements_ between the maps of the two digital siblings, i.e., only cells that have a hot color (e.g., a high failure probability) in both maps will produce a hot color in the merged cell. Indeed, such tests are likely to represent simulator-independent misbehaviors of the model under test, which are critical for the safety of the system. Specifically, if the failure probability of \(FM_{U_{1}}\) is \(fp_{1}=n_{1}/N_{1}\) and that of \(FM_{U_{2}}\) is \(fp_{2}=n_{2}/N_{2}\), in the merged map the failure probability will be the product, \(fp=fp_{1}\times fp_{2}\). When a quality of driving (resp. lack of quality of driving) metric is computed, instead of a failure probability, the merged map will conservatively contain the maximum (resp. minimum) of the respective quality of driving metrics: \(qm=\max\{qm_{1},qm_{2}\}\) (resp. \(qm=\min\{qm_{1},qm_{2}\}\)), where \(qm_{1}\), \(qm_{2}\) are the quality of driving metrics found in the same cell in \(FM_{U_{1}}\), \(FM_{U_{2}}\), respectively, while \(qm\) is the resulting quality of driving metric, in the merged map. By giving priority to failures (resp. quality of driving degradations) that occur in both siblings and are hence very likely to be relevant for the target platform, this choice better accommodates the limited testing budget available for production/field testing [8; 10; 14; 32; 44].
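The conservative merge operators translate directly into the following sketch; only the function names are our own.

```python
def merge_failure_probability(fp1, fp2):
    """A failure is reported only when both siblings agree (product of probabilities)."""
    return fp1 * fp2

def merge_lack_of_quality(q1, q2):
    """Lack-of-quality metrics (e.g., max lateral position) are merged with the minimum."""
    return min(q1, q2)
```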
### Evaluation Scenario
While our approach is useful when no DT is available, to evaluate whether the DSS can approximate the behavior of \(M\) and predict its failures when executed on DT, we migrate all the tests in the digital siblings feature map (i.e., \(FM_{DSS}\)) to an actual DT, which is used to obtain the ground truth map \(FM_{DT}\) (see "Evaluation Scenario" in Figure 1 (bottom)). The two maps being compared contain the same tests in the same cells, but the values of the cells might differ, depending on the behavior of \(M\) in the different simulators. Thus, we analyze and compare the two
feature maps \(FM_{DSS}\) and \(FM_{DT}\) to assess the capability of DSS at predicting the failures of the model when executed on the DT.
## 3 Empirical Study
The goal of the empirical study is to evaluate whether two digital siblings (DSS) can approximate the _behavior_ of a driving model and predict its failures on a digital twin (DT) better than using only one general-purpose simulator (GPS). To this aim, we consider the following research questions:
**RQ1 (Offline Evaluation).**_How do the offline prediction errors by the DSS compare to those of the DT?_
We first test our hypothesis at the model-level. For all simulators, we compute the errors between the model predictions and each autopilot ground truth labels on a stationary driving images dataset. We compare the error distributions of each individual simulator with the DT, as well as their combination as digital siblings.
With RQ\({}_{1}\) we aim to assess whether a correlation between the offline predictions exists at the model-level, which can be useful for developers to gain trust about their DNN model prediction accuracy, prior to running system-level tests.
**RQ2 (Failure Probability).**_How does the failure probability of the DSS compare to that of the DT?_
In RQ\({}_{2}\) we test the model at the system-level, specifically the hypothesis that combining the failure probabilities of the two digital siblings provides a better predictor of the ground truth failure probability of the model executed on the DT. A positive answer to RQ\({}_{2}\) would support our digital siblings framework to predict, and possibly anticipate, the failures on the DT, which are expected to be accurate proxies of real-world failures.
**RQ3 (Quality of Driving).**_How does the quality of driving of the DSS compare to the failure probability of the DT?_
By considering only the failure probability, we might overlook the correlation between real failures on the DT and near-failures on the DSS--test cases in which the model exhibits a degraded driving quality without necessarily going off road. Thus, with RQ\({}_{3}\), we also assess whether finer-grained driving quality metrics can predict the ground truth failure probability of the model on the DT.
### Test Object and Simulators
#### 3.1.1 Study Object
We test a popular DNN-based AV agent: Nvidia DAVE-2 [9], a robust lane-keeping model used as an object of study in several DNN testing works [26; 43; 44; 45; 46; 47; 67]. Moreover, its open-source nature makes it adequate to be trained and evaluated on the simulators considered in this work. Architecturally, DAVE-2 consists of three convolutional layers, followed by five fully-connected layers [9].
#### 3.1.2 Digital Siblings (DSS)
We implemented and investigated the effectiveness of DSS using the simulators BeamNG [6] and Udacity [53]. We chose them as digital siblings because: (1) they
support training and testing of a DNN that performs lane-keeping, including DAVE-2; (2) they are often used as simulator platforms for AV testing; (3) they are potentially complementary because they are developed with different technologies/game engines and they are characterized by different physics implementations (e.g., rigid vs soft-body dynamics); (4) they are publicly available under open-source or academic-oriented licenses, hence customizable.
BeamNG [6] is a framework specialized in autonomous driving developed by BeamNG GmbH. The framework is released under an academic-oriented license and it has been downloaded \(5.5k\) times as of January 2023. From a technical standpoint, BeamNG features a _soft-body dynamics_ simulation based on a spring-mass model. Such a model is composed of nodes (mass points) that are connected by beams (springs), i.e., weightless elements that allow accurate vehicle deformation and other aerodynamic properties [19].
Udacity [53] is developed with Unity 3D [54], a popular cross-platform game engine. The project has been publicly released in 2016 by the for-profit educational organization Udacity, to allow people from all over the world to access some of their technology and to contribute to an open-source self-driving car project. As of January 2023, the simulator has \(3.7k\) stars on GitHub. From a technical standpoint, Udacity is based on the Nvidia PhysX engine [35], featuring discrete and continuous collision detection, ray-casting, and _rigid-body dynamics_ simulation.
#### 3.1.3 Digital Twin (DT)
We use the Donkey Car simulator as DT, i.e., the virtual replica of a 1:16 scale electric AV provided by the open-source framework adopted in previous research [44; 56; 57; 64].

### 3.2 Procedure

#### 3.2.1 CycleGAN Training

For each simulator, we trained a CycleGAN and selected, among the models obtained during training, the one that achieved the best neural translations (in terms of visual quality) using a test set of \(\approx\)8\(k\) simulated images for each simulator, representing a test road driven from beginning to the end [44]. While a quantitative assessment of the output of CycleGAN is still a major challenge [11] and out of the scope of this paper, the driving capability of the lane-keeping model, as the experimental evaluation shows, represents an implicit validation of the CycleGAN model's ability to retain all essential features needed for an accurate steering angle prediction.
#### 3.2.2 Driving Models
**Data Collection.** For all simulators (i.e., DS\({}_{1}\), DS\({}_{2}\) and DT), we collected a training set by running the autopilots on a set of randomly generated roads (this set is different from the one used to train the CycleGAN). To ensure non-trivial driving scenarios and appropriate labels for challenging curves, the maximum angle of a curve was set to be less than or equal to 270\({}^{\circ}\). In particular, for our training set, we generated 25 roads with 8 control points [67]. To collect a balanced dataset where left and right curves are equally represented, each road was driven by the autopilot in both directions, i.e., from the start point to the end point and from the end point to the start point. The autopilot successfully drove all the roads on all simulators; our training set comprises \(\approx\)70\(k\) images, equally distributed across the simulators.
**Training.** We trained two DAVE-2 models, one by using the plain simulated images (\(M_{S}\)) and another one by translating the images of each simulator into _pseudo_-real images (\(M_{R}\)) using the respective CycleGAN generator. We followed the guidelines by Bojarski et al. [9] to train AV autopilots. For both \(M_{S}\) and \(M_{R}\) we trained the model for 50 epochs, with an early stopping patience of 10 epochs if no improvements of the validation loss were observed during fitting. We used the Adam optimizer [29] to minimize the mean squared error (MSE) between the predicted steering angles and the ground truth value. Moreover, we set a learning rate of 10\({}^{-4}\) and a batch size of 128. The best MSE on the validation set for \(M_{S}\) was 0.003, reached after 48 epochs, whereas the best MSE on the validation set for \(M_{R}\) was 0.02, reached after 25 epochs.
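A minimal Keras-style training setup matching these hyper-parameters might look as follows; the model-building function is an assumption, while the optimizer, loss, epochs, patience, and batch size mirror the values reported above.

```python
import tensorflow as tf

def train_dave2(build_dave2, x_train, y_train, x_val, y_val):
    """Train a DAVE-2 model with the hyper-parameters reported above (sketch)."""
    model = build_dave2()  # assumed builder returning the DAVE-2 architecture
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                                  restore_best_weights=True)
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=50, batch_size=128, callbacks=[early_stop])
    return model
```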
#### 3.2.3 Offline Evaluation
We collected a labeled dataset for offline evaluation by generating 20 roads (i.e., 10 roads driven in both directions) with the same parameters as the training set (i.e., 8 control points per road and a maximum angle of 270\({}^{\circ}\)). The images collected for the _offline_ evaluation dataset amount to \(\approx\)26\(k\), considering all simulators.
#### 3.2.4 Test Generation
After training \(M_{S}\) and \(M_{R}\), we executed DeepHyperion _twice_ to generate tests using the two digital siblings DS\({}_{1}\) and DS\({}_{2}\). We chose a population size of 20 individuals and a number of search iterations respectively equal to 150 for \(M_{S}\) and 100 for \(M_{R}\), as we observed from preliminary experiments that this choice of hyper-parameters allows an extensive coverage of the feature maps. For both \(M_{S}\) and \(M_{R}\) and each digital sibling, we repeated test generation five times to diversify
the exploration of the search space and to collect multiple test cases for each cell in the feature maps. Overall, across all runs and driving models, DeepHyperion generated 1,455 tests for both siblings.
Concerning the simulations, for all simulators, we set the maximum speed for the vehicle to 30 km/h [67]. When testing \(M_{R}\) in a given simulator, we engineered the testing pipeline to load the appropriate sim2real CycleGAN generator to translate the simulated image generated by BeamNG/Udacity into pseudo-real images _in real-time during driving_. For each executed test case, we collected the lateral position of the vehicle for each simulation step as well as its lateral distance. The former determines the quality of driving of the model [26], while the latter is the fitness of the test case.
#### 3.2.5 Migration and Union
For the initial (\(FM_{DS_{1}}\), \(FM_{DS_{2}}\)) and for the union (\(FM_{U_{1}}\), \(FM_{U_{2}}\)) feature maps, we compute the failure probability as the number of tests with a negative fitness divided by the total number of tests in the respective cell. To evaluate the quality of driving, we adopted the maximum lateral position experienced during the test case execution. Previous work showed that such metric is effective at characterizing the degradation in the quality of autonomous driving [26] since the lower the value of such metric, the higher is the quality of driving (thus, it actually measures _lack_ of quality of driving). When considering the quality of driving, the value of each cell in a feature map represents the average of the maximum lateral positions of each test case in that cell. Furthermore, we normalized the maximum lateral position values in the interval \([0,1]\) before taking the union.
#### 3.2.6 Merge
Merging the maps of the two digital siblings requires a different treatment for failure probability and quality of driving. Regarding the failure probability, the merge operator that ensures a conservative aggregation of two values is the _product_. Regarding the lack of quality of driving, the conservative merge operator is the _minimum_, since the quantities to merge are not probabilities. In fact, by taking the minimum we get a high lack of driving quality only when both simulators exhibit high values for such a metric.
### Metrics
#### 3.3.1 RQ\({}_{1}\) (Offline Evaluation)
We computed the prediction errors given by the difference between the predictions of the model (\(M_{R}\)) on images of the offline evaluation dataset (see Section 3.2) and the corresponding ground truth labels given by the autopilot. We binned the prediction errors of the model on each simulator and built the respective _probability density_ (i.e., the number of errors in each bin is divided by the total number of prediction errors) such that different distributions could be compared.
Then, we computed the _distance_ between each digital sibling distribution, as well as their combination, and the DT using the _Wasserstein_ distance [3] (also
known as the _earth mover's distance_). Given two one-dimensional distributions \(A\) and \(B\), the Wasserstein distance \(W(A,B)\) is defined by the following formula [36]:
\[W(A,B)=\int_{\mathbb{R}}|CDF_{A}(x)-CDF_{B}(x)|dx \tag{3}\]
where \(CDF\) is the _cumulative distribution function_ of a distribution. In other words, the Wasserstein distance between two distributions is defined as the difference between the area formed by their cumulative distribution functions.
We assess whether the difference between two distributions is statistically significant using the Wilcoxon test [15] applied to the density functions of the two error distributions to compute the \(p\)-value (with threshold \(\alpha\leq 0.05\)). We also perform power analysis (with statistical power \(\beta\geq 0.8\)) on the prediction errors to check whether a non-significant \(p\)-value is due to a low data sample size or to the difference being statistically insignificant.
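With SciPy, the distance and the significance test can be computed as sketched below; the number of bins used to build the densities is an assumption, as the exact binning is not reported.

```python
import numpy as np
from scipy.stats import wasserstein_distance, wilcoxon

def compare_error_distributions(errors_sibling, errors_dt, n_bins=25):
    """Wasserstein distance between raw errors plus a Wilcoxon test on their densities."""
    distance = wasserstein_distance(errors_sibling, errors_dt)
    bins = np.histogram_bin_edges(np.concatenate([errors_sibling, errors_dt]), bins=n_bins)
    dens_sibling, _ = np.histogram(errors_sibling, bins=bins, density=True)
    dens_dt, _ = np.histogram(errors_dt, bins=bins, density=True)
    _, p_value = wilcoxon(dens_sibling, dens_dt)
    return distance, p_value
```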
#### 3.3.2 RQ\({}_{2}\) (Failure Probability) and RQ\({}_{3}\) (Quality of Driving)
For RQ\({}_{2}\), we computed the pairwise _Pearson correlation_ between maps along with the corresponding \(p\)-value. In particular, correlations are computed between each union feature map of each digital sibling (\(FM_{U_{1}}\), \(FM_{U_{2}}\)) and the feature map of the DT (\(FM_{DT}\)), and between \(FM_{DSS}\) and \(FM_{DT}\). For RQ\({}_{3}\), the setting is equivalent to that of the failure probability but considering quality of driving maps, again comparing DS\({}_{1}\), DS\({}_{2}\) and DSS against the ground truth DT.
To evaluate the capabilities of the digital siblings (individually or jointly) to predict failures on the DT, we computed the area under the curve Precision-Recall (AUC-PRC) at increasing thresholds, for both RQ\({}_{2}\) and RQ\({}_{3}\). This requires the discretization of failure probabilities into binary values (failure vs non-failure) for the ground truth (i.e., DT): we consider a cell in the DT feature map to be a failure cell if the associated failure probability is \(>0.0\). AUC-PRC is more informative than the AUC-ROC metric (i.e., the area under of the curve of the Receiver Operating Characteristics) when dealing with imbalanced [40] datasets, which is the case of our study (the number of failures in the feature maps is lower than the number of non-failures with an average 10 to 20% ratio).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{4}{c}{Offline Evaluation} \\ \cline{2-5} & \multicolumn{2}{c}{\(M_{S}\)} & \multicolumn{2}{c}{\(M_{R}\)} \\ \cline{2-5} & distance & \(p\)-value & distance & \(p\)-value \\ \hline DS\({}_{1}\) vs DT & 0.04669 & 0.101\({}^{\dagger}\) & 0.03250 & 0.011 \\ DS\({}_{2}\) vs DT & 0.02648 & 0.020 & 0.02187 & 0.078\({}^{\dagger}\) \\ DSS vs DT & **0.03776** & 0.053\({}^{\dagger}\) & **0.00951** & 0.088\({}^{\dagger}\) \\ \hline \hline \multicolumn{5}{l}{\({}^{\dagger}\)_power \(>\) 0.8_} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for RQ\({}_{1}\). Bold-faced values indicate the best approach.
### Results
#### 3.4.1 Offline Evaluation (RQ\({}_{1}\))
Table 1 reports the results for our first research question. The first column shows the simulators being compared. Columns 2-5 report the Wasserstein distance between the prediction error densities of the corresponding simulators, and the \(p\)-value concerning the statistical significance of the differences between the two densities, for \(M_{S}\) and \(M_{R}\).
For \(M_{S}\) (Columns 2-3), our results show that the distance between the steering angle errors obtained for the combined digital siblings DSS and the errors obtained for the DT is lower than the distance of DS\({}_{1}\) (0.03776 vs 0.046) and higher than the distance of DS\({}_{2}\) (0.02648). The distribution of the steering angle errors of DS\({}_{2}\) is statistically different from the errors of the DT (i.e., \(p\)-value \(0.02<0.05\)), while the distribution of the steering angle errors of DSS is statistically indistinguishable from the errors of the DT (i.e., \(p\)-value \(0.053>0.05\) and power \(>0.8\)).
Regarding \(M_{R}\) (Columns 4-5), our results show that the distance between the steering angle errors obtained for the combined digital siblings DSS and the errors obtained for the DT is _2.8 times lower_ than the distance of each simulator taken individually (as a percentage, the distance of DSS is respectively 70% and 56% smaller than the distance of the two individual siblings, DS\({}_{1}\), DS\({}_{2}\)). The statistical test confirms that the error distributions of DSS and DT are statistically indistinguishable (\(p\)-value \(>0.05\) and power \(>0.8\)), which is not the case for the error distributions of DS\({}_{1}\) (\(p\)-value \(<\) 0.05).
Figure 5 offers a visual explanation of these scores. The subplots compare the steering angle error distributions, respectively, of DS\({}_{1}\), DS\({}_{2}\) and DSS (shown in light red) with that of DT (shown in light blue). The \(x\)-axis of each subplot represents the magnitude of the prediction errors of the model \(M_{R}\) w.r.t. the predictions of the autopilot, while the \(y\)-axis indicates their percentage for each bin.
From the plots we can see that, overall, at the model-level, \(M_{R}\) makes prediction errors with small magnitudes on DS\({}_{1}\), DS\({}_{2}\) and DSS (i.e., most of the errors are between 0.0 and 0.3). On the digital sibling DS\({}_{1}\) (i.e., BeamNG), \(M_{R}\) has a high agreement with the autopilot, as most errors have a low magnitude. It has a large number of small errors (\(<0.2\)), while it has only a negligible portion of the distribution being above 0.2. The agreement with the DT is low as \(M_{R}\)_under-approximates_ the true error distribution on the DT: \(M_{R}\) on the DT has less errors with low magnitude and has a longer tail of errors greater than 0.2 (even greater than 0.3 in some cases). Differently, on the digital sibling DS\({}_{2}\) (i.e., Udacity), the error distribution has a longer tail than that on the DT. Indeed, \(M_{R}\) executed on DS\({}_{2}\)_over-approximates_ the errors it would have on the DT, as the errors observed on DS\({}_{2}\) have higher magnitude than those observed on the DT.
The error distribution of the model on DSS shows why it is appropriate to combine the outcome of two simulators. At the model-level, DSS better approximates the true error distribution of the model on the DT, by providing an intermediate error between DS\({}_{1}\) and DS\({}_{2}\) for both \(M_{S}\) and \(M_{R}\).
#### 3.4.2 Failure Probability (RQ\({}_{2}\))
Table 2 shows the Pearson correlation (\(r\)), the \(p\)-value, and the AUC-PRC for the comparison between DS\({}_{1}\), DS\({}_{2}\), DSS and DT, respectively. The analysis is reported separately for \(M_{S}\) (Columns 2-4) and \(M_{R}\) (Columns 5-7).
Concerning \(M_{S}\)--i.e., the model driving with simulated driving scenes-- the failure probabilities have a high positive correlation with the true failure probability of the DT (Column 2). All such correlations are statistically significant for our DSS framework, as well as for each individual sibling DS\({}_{1}\) and DS\({}_{2}\) (\(p\)-values \(<\) 0.05, see Column 3). However, the correlation of the DSS is 9% higher than the best individual correlation (i.e., DS\({}_{1}\)) and 21% higher than the worst individual correlation (i.e., DS\({}_{2}\)). In terms of failure prediction, the DSS have the highest AUC-PRC value, 4% higher than DS\({}_{1}\) and 33% higher than DS\({}_{2}\).
Figure 6 shows the feature maps related to \(M_{S}\). The first three feature maps represent the failure probability of DS\({}_{1}\), DS\({}_{2}\) and DSS, respectively. The last feature map represents the ground truth failure probability of DT. The color of each cell ranges from green (i.e., non-failure, or failure probability = 0) to red (i.e., failure probability = 1). Let us analyze a _false positive_ case. The test cases at coordinates (3, 0.25), whose corresponding cells are highlighted with a dashed line, represent road tracks having three curves and a maximum curvature of 0.25. In the DT, this cell is green, i.e., all test cases for \(M_{S}\) driving on the DT succeed. On the other hand, \(M_{S}\) has contrasting behaviors when the same test cases are executed on DS\({}_{1}\) or DS\({}_{2}\). These test cases did not exhibit any failure in DS\({}_{1}\), whereas they did trigger failures in DS\({}_{2}\). This disagreement is canceled out when combining the two digital siblings with the product operator and the cell is green in the DSS map. As such, digital siblings are conservative w.r.t. failures, as a failure is reported only when both digital siblings are in agreement. This can be noticed for test cases at coordinates (1, 0.23), which represent road tracks having one curve with a maximum curvature of 0.23--an instance of a _true positive_ case (the corresponding cells in each map are highlighted with a solid line). Both DS\({}_{1}\) and DS\({}_{2}\) have a failure probability of 1 and, as a consequence, the DSS map also does. On the DT, \(M_{S}\) also has a high failure probability (0.5), which confirms the high effectiveness of the DSS framework at approximating the true failure probability of DT.
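To make the combination step concrete, the following Python sketch combines the per-cell failure probabilities of two siblings with the product operator and compares the result against the ground-truth failures of the DT; the arrays are illustrative placeholders (one value per feature-map cell), and `average_precision_score` is used here only as a stand-in for the AUC-PRC computation.

```python
# Sketch: combine the per-cell failure probabilities of the two siblings and
# compare the result with the digital twin. All arrays are illustrative, with
# one entry per feature-map cell.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import average_precision_score

p_fail_ds1 = np.array([0.0, 0.2, 1.0, 0.5, 0.0, 0.8])   # sibling 1 (e.g., BeamNG)
p_fail_ds2 = np.array([0.1, 0.9, 1.0, 0.4, 0.0, 0.7])   # sibling 2 (e.g., Udacity)
p_fail_dt  = np.array([0.0, 0.0, 0.5, 0.5, 0.0, 1.0])   # ground truth on the DT

# Conservative combination: a failure is predicted only when both siblings
# agree, which the elementwise product enforces.
p_fail_dss = p_fail_ds1 * p_fail_ds2

r, p_value = pearsonr(p_fail_dss, p_fail_dt)

# Failure prediction: DT cells with any failure are positives; the combined
# probability is the ranking score (stand-in for AUC-PRC).
auc_prc = average_precision_score((p_fail_dt > 0).astype(int), p_fail_dss)
print(f"r = {r:.3f}, p-value = {p_value:.3f}, AUC-PRC = {auc_prc:.3f}")
```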
Concerning the failure probability for \(M_{R}\)--i.e., the model driving with pseudo-real driving scenes--the correlation of the DSS is comparable with the best individual correlation (i.e., 0.193 for DSS vs 0.194 for DS\({}_{1}\)). Similarly, the corresponding AUC-PRC values are equivalent (the AUC-PRC of DSS is 1% better than that of DS\({}_{1}\)).
Figure 6: Feature maps representing the failure probability of \(M_{S}\) on the two digital siblings, DS\({}_{1}\) and DS\({}_{2}\), their combination (DSS) and on the DT. Solid line cells represent a true failure predicted by DSS while dashed line cells represent a false positive of DS\({}_{2}\). Best viewed in color.
None of these correlations is statistically significant; in particular, the correlation of DS\({}_{2}\) with the DT is especially low (i.e., 0.077), which is also reflected in the low AUC-PRC score (i.e., 0.328). In this case, the usage of the DSS framework mitigates (actually, eliminates) the risk of choosing a poorly performing simulator such as DS\({}_{2}\), which exhibits low correlation of the failure probability with the ground truth and low failure prediction power.
**RQ2**: At the system-level, the failure probability of the digital siblings predicts the true failure probability of the DT better than each individual sibling (for \(M_{S}\)), or the same as the best sibling (\(M_{R}\)). In both settings, failures obtained on the DSS are a better predictor of the ground truth failures experienced on the DT.
#### 3.4.3 Quality of Driving (RQ\({}_{3}\))
Table 3 shows the Pearson correlation (_r_), the \(p\)-value, and the AUC-PRC for the comparison between DS\({}_{1}\), DS\({}_{2}\), DSS and DT, respectively. The comparison considers the correlation between the quality of driving metric experienced in DS\({}_{1}\), DS\({}_{2}\), DSS and the failure probability of the model on the DT, as well as the prediction of failures from the quality of driving metric. The analysis is reported separately for both \(M_{S}\) (Columns 2-4) and \(M_{R}\) (Columns 5-7) models.
For \(M_{S}\), the correlation between DSS and DT is lower than the best individual correlation (0.553 of DSS vs 0.621 of DS\({}_{1}\)). The DSS correlation is 22% higher than the worst individual correlation (0.553 of DSS vs 0.429 of DS\({}_{2}\)). For AUC-PRC, DSS and DS\({}_{1}\) have the same predictive power (i.e., 0.659), while DSS is 25% better than DS\({}_{2}\) (i.e., 0.659 vs 0.496). Thus, using the DSS framework mitigates the risk of relying on the testing results of a low-quality GPS (i.e., DS\({}_{2}\)).
Concerning \(M_{R}\), we observed a similar correlation and a similar AUC-PRC as with \(M_{S}\), the main difference being the slightly higher AUC-PRC for DSS (i.e., 0.500 vs 0.490) w.r.t. the best individual sibling (i.e., DS\({}_{1}\)) and a more pronounced difference w.r.t. the worst individual sibling (i.e., DS\({}_{2}\), 0.659 vs 0.496 in \(M_{S}\), a 33% increase, and 0.500 vs 0.336 in \(M_{R}\), a 49% increase).
Figure 7 shows the four feature maps related to the quality of driving of the \(M_{R}\) model on the two digital siblings and the failure probability of \(M_{R}\) on the DT. We can observe that the feature map of DS\({}_{1}\) and the feature map of the DSS are similar. As a consequence, the two correlations are similar. On the other hand, the feature map of DS\({}_{2}\) is quite different from the failure probability map of the DT, which causes the correlation to be low. We can observe that a failure case of the DT at coordinates (1, 0.14) is not caught by the quality metric values of any sibling (neither DS\({}_{1}\), DS\({}_{2}\), nor DSS, see the corresponding cells highlighted with a solid line). On the other hand, all siblings are able to capture the failure of the DT at coordinates (3, 0.23) (see the corresponding cells highlighted with a dashed line).
## 4 Discussion
When combining the two siblings using our framework, the worst case occurs when the two siblings disagree and the over-approximating sibling (e.g., predicting a failure) is not compensated by the under-approximating sibling (see Figure 6). In practical cases, we empirically observed that by predicting a failure only when there is agreement, the digital siblings framework is equivalent to the best of the two siblings (see RQ\({}_{3}\)).
Figure 7: Feature maps representing the quality of driving of \(M_{R}\) (i.e., the maximum lateral position) on the two digital siblings, DS\({}_{1}\) and DS\({}_{2}\), their combination (DSS) and the failure probability on the DT. Solid line cells represent a true failure missed by all siblings, while dashed line cells represent a true failure predicted by DSS. Best viewed in color.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{Quality of Driving} \\ \cline{2-7} & \multicolumn{3}{c}{\(M_{S}\)} & \multicolumn{3}{c}{\(M_{R}\)} \\ \cline{2-7} & \(r\) & \(p\)-value & AUC-PRC & \(r\) & \(p\)-value & AUC-PRC \\ \hline DS\({}_{1}\) vs DT & 0.621 & 0.000 & **0.659** & 0.211 & 0.062 & 0.490 \\ DS\({}_{2}\) vs DT & 0.429 & 0.000 & 0.496 & 0.056 & 0.626 & 0.336 \\ DSS vs DT & 0.553 & 0.000 & **0.659** & 0.193 & 0.088 & **0.500** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for RQ\({}_{3}\). Bold-faced values indicate the best approach.
We experimented with both simulated (\(M_{S}\)) and real-world models (\(M_{R}\)) as this setting is representative of the current industrial testing practices described by the NHTSA [52]. From the feature maps in Figure 6 and Figure 7, we can observe that the driving quality of \(M_{S}\) is superior to that of \(M_{R}\), presumably because it is easier for a DNN to process plain artificial images from a simulator than images collected by a real-world camera during driving (i.e., the sim2real gap). Our results show that, in both settings, using the digital siblings better approximates the behavior of the model on the DT, regardless of the different driving capabilities.
### Threats to Validity
#### 4.1.1 Internal validity
We compared all simulators under identical parameter settings. One threat to internal validity concerns our custom implementation of DeepHyperion within the simulators. We mitigated this threat by faithfully replicating the code available in the replication package of the paper [16]. Another threat may be due to our own data collection phase and training of DAVE-2, which may exhibit a large number of misbehaviors if trained inadequately. We mitigated this threat by training and fine-tuning a model which was able to drive on the training set roads consistently on all simulators.
#### 4.1.2 External validity
We considered only a limited number of DNN models and simulators, which poses a threat in terms of the generalizability of our results. We tried to mitigate this threat by choosing a popular real-world DNN model, which achieved competitive scores in the Udacity challenge. We considered two open-source GPS and we chose DonkeyCar as DT, as it was used as a proxy for full size self-driving cars also in previous studies [44; 56; 57; 64]. Generalizability to other GPS or DT would require further studies.
## 5 Related Work
### Digital Twins for AV Testing
Digital twins are used by researchers to reproduce real-world conditions within a simulation environment for testing purposes [4; 61; 27; 41; 2].
Yun et al. [61] test an object recognition system using the GTA videogame. Barosan et al. [4] describe a digital twin for testing an autonomous truck. No testing was performed using the digital twin to assess the faithfulness of the simulator at reproducing real-world failures.
Differently, in our paper we investigate testing transferability between digital siblings, i.e., a framework composed of multiple general-purpose simulators, and a digital twin, considering both simulated and pseudo-real images as input to the DNN.
### Empirical Studies
Recent work has confirmed the need for real-world testing of cyber-physical systems, as simulation platforms are often decoupled from real-world complexities [1]. Our work is the first to propose the usage of a multi-simulator approach, called digital siblings, to mitigate the fidelity gap in the field of autonomous driving testing.
Concerning comparative studies across simulators, to the best of our knowledge, the only study that empirically compares the same AV on different simulation platforms is by Borg et al. [10]. The authors investigate the use of multiple GPS for testing a pedestrian vision detection system. The study compares a large set of test scenarios on both PreScan [42] and Pro-SiVIC [23] and reports low agreement between testing results across the two simulation platforms. No assessment is performed of their correlation with a digital twin or a physical vehicle. In our paper, we take a step ahead and we show how the (dis)agreements can be leveraged to mitigate the fidelity gap: by combining the predictions of two general-purpose simulators we successfully covered the gap with a digital twin for a scaled physical vehicle.
Other studies compare model-level vs system-level testing metrics within a simulation environment [25]. In our empirical work, we focused on the difference between general-purpose and digital twin driving simulators. We use offline and online testing to measure the gap between single- and multi-simulator approaches at approximating a digital twin, a previously unexplored topic.
### AV Testing Approaches
Most approaches use _model-level testing_ (i.e., offline testing of single image predictions) to test DNN autopilots under corrupted images [51] or GAN-generated driving scenarios [63], without however testing the self-driving software in its operational domain. In our work, we assess the effectiveness of our digital siblings with model-level testing in terms of prediction error distributions, but we also consider online testing at the system-level.
Concerning _system-level testing_ for AVs, researchers proposed techniques to generate scenarios that cause AVs to misbehave [47; 20; 46; 43; 33; 63]. Among the existing test generators, in this work we adopted DeepHyperion by Zohdinasab et al. [67], a tool that uses illumination search to extensively cover a map of structural input features, which allowed us to easily group identical or equivalent failure conditions occurring in the same feature map cell. Ul Haq et al. [24] use ML regressors as surrogate models to mimic the simulator's outcome.
These works only consider single-simulator approaches to testing. Their generalizability to a multi-simulator approach, such as the digital siblings proposed in this paper, or to cross-simulator testing, is overlooked in the existing literature.
## 6 Conclusions and Future Work
In this paper, we propose the digital siblings framework to improve the testing of autonomous driving software. In our approach, we test the autonomous driving
software using two general-purpose simulators in order to better approximate the behavior of the driving model on a digital twin. We combine the testing outputs of the model on the two simulators in a conservative way, giving priority to the agreements on possible failures, where it is more likely to observe the same failing behavior on the digital twin.
At the model level, our results show that, by combining two general-purpose simulators, we can approximate the model predictions on the digital twin better than done by each individual simulator. At the system-level, the digital siblings are able to predict the failures of the model on the digital twin better than each single simulator.
In our future work we plan to extend our framework to more than two general-purpose simulators and to study different ways to combine them based on the characteristics of each simulator and those of the digital twin.
## 7 Declarations
### Funding and/or Conflicts of interests/Competing interests
This work was partially supported by the H2020 project PRECRIME, funded under the ERC Advanced Grant 2017 Program (ERC Grant Agreement n. 787703). We thank BeamNG GmbH for providing us the license for the driving simulator. The authors declared that they have no conflict of interest.
### Data Availability
The software artifacts and our results are publicly available [37].
|
2304.08349 | Deep Explainable Relational Reinforcement Learning: A Neuro-Symbolic
Approach | Despite numerous successes in Deep Reinforcement Learning (DRL), the learned
policies are not interpretable. Moreover, since DRL does not exploit symbolic
relational representations, it has difficulties in coping with structural
changes in its environment (such as increasing the number of objects).
Relational Reinforcement Learning, on the other hand, inherits the relational
representations from symbolic planning to learn reusable policies. However, it
has so far been unable to scale up and exploit the power of deep neural
networks. We propose Deep Explainable Relational Reinforcement Learning
(DERRL), a framework that exploits the best of both -- neural and symbolic
worlds. By resorting to a neuro-symbolic approach, DERRL combines relational
representations and constraints from symbolic planning with deep learning to
extract interpretable policies. These policies are in the form of logical rules
that explain how each decision (or action) is arrived at. Through several
experiments, in setups like the Countdown Game, Blocks World, Gridworld, and
Traffic, we show that the policies learned by DERRL can be applied to different
configurations and contexts, hence generalizing to environmental modifications. | Rishi Hazra, Luc De Raedt | 2023-04-17T15:11:40Z | http://arxiv.org/abs/2304.08349v2 | # Deep Explainable Relational Reinforcement Learning: A Neuro-Symbolic Approach
###### Abstract
Despite numerous successes in Deep Reinforcement Learning (DRL), the learned policies are not interpretable. Moreover, since DRL does not exploit symbolic relational representations, it has difficulties in coping with structural changes in its environment (such as increasing the number of objects). Relational Reinforcement Learning, on the other hand, inherits the relational representations from symbolic planning to learn reusable policies. However, it has so far been unable to scale up and exploit the power of deep neural networks. We propose Deep Explainable Relational Reinforcement Learning (DERRL), a framework that exploits the best of both - _neural_ and _symbolic_ worlds. By resorting to a neuro-symbolic approach, DERRL combines relational representations and constraints from symbolic planning with deep learning to extract interpretable policies. These policies are in the form of logical rules that explain how each decision (or action) is arrived at. Through several experiments, in setups like the Countdown Game, Blocks World, Gridworld, and Traffic, we show that the policies learned by DERRL can be applied to different configurations and contexts, hence generalizing to environmental modifications.
Keywords:Neuro-Symbolic AI Relational Reinforcement Learning Deep Reinforcement Learning, Explainability.
## 1 Introduction
Deep Reinforcement Learning (DRL) [2] has gained great success in many domains. However, so far, it has had limited success in relational domains, which are typically used in symbolic planning [40]. In the prototypical blocks world game (Figure 1), one goal is to place block \(a\) on block \(b\). An obvious plan for achieving this is to _unstack_ the blocks until both blocks \(a\) and \(b\) are at the top, upon which block \(a\) can be moved atop block \(b\). Standard DRL approaches struggle to adapt to out-of-domain data, such as placing block \(c\) on \(d\), or applying the learned strategies to changes in the stack size or the number of stacks, thus failing to learn generalized policies. Furthermore, the black-box nature of the learned policies makes it difficult to interpret action choices, especially in domains involving
transparency and safety [33, 47, 22]. Understanding a machine's decision-making is crucial for human operators to eliminate irrational reasoning [51, 17].
Relational Reinforcement Learning (RRL) [7, 49] combines symbolic planning and reinforcement learning and has origins in Statistical Relational AI and Inductive Logic Programming (ILP) [39, 5]. RRL uses logic programs to represent interpretable policies that are similar to symbolic planning languages [11, 16, 15]. These policies use relations and objects, rather than specific states and actions, allowing agents to reason about their actions at a higher level of abstraction, and applying the learned knowledge to different situations. Earlier RRL approaches were purely symbolic [7, 6, 26, 38], searching policy spaces guided by performance estimates, but did not exploit deep learning advancements and were not robust to noisy data. Recent approaches [56] use neural networks for scalability and improved internal representations, but learned policies are not human-readable.
We introduce **D**eep **E**xplainable **R**elational **R**einforcement **L**earning (DERRL: neural DRL + symbolic RRL), a neuro-symbolic RRL approach that combines the strengths of neural (differentiability and representational power) and symbolic methods (generalizability and interpretability) while addressing their respective shortcomings. More specifically, DERRL uses a neural network to search
Figure 1: Learned rules in all environments. [Row 1]left to right. Countdown Game: select operations (addition, subtraction, null) to make accumulated = target; Blocks World: place a specific block on another; DoorKey: unlock a door with a matching key to reach a goal. [Row 2] Traffic: minimize traffic at grid intersections. The figure shows a 5-agent grid with intersections 0 and 1 connected by lane \(a\); Gridworld: navigate a grid to reach the goal. Descriptions in Section 5.1.
the space of policies represented using First-Order Logic (FOL)-based rules3. Like other ILP methods, our framework provides interpretable solutions. However, instead of using search-based methods, we leverage the representational capacity of neural networks to generate interpretations of actions (called rules), while entirely bypassing the need to interpret the network itself. To be specific, we propose a parameterized rule generation framework where a neural network learns to generate a set of generalized _discriminative_ rules that are representations of the policy. For instance, as shown in the blocks world manipulation game in Figure 1, the two rules corresponding to \(move\) action are - \(move(X,Y)\gets top(X),on(X,Z),isFloor(Y)\), which triggers an _unstacking_ process, and \(move(X,Y)\gets top(X),top(Y),goal\_on(X,Y)\), which puts block \(a\) on \(b\) when both are top blocks in the stacks. Note, that the same rules are applicable for a new goal (say \(goal\_on(c,d)\)) or when blocks are increased to 10.
Footnote 3: DERRL uses a relational representation akin to Quinlanβs FOIL[42] with background knowledge comprising ground facts and non-recursive Datalog-formulated rules.
Additionally, we formulate a semantic loss [53] to guide the rule learning and restrict the hypothesis space of rules. This loss enforces semantic constraints on rule structure through a differentiable relaxation of semantic refinement [4] allowing users to encode background knowledge as axioms to mitigate rule redundancy. For example, in the rule \(r\gets less(X,Y),less(Y,Z),less(X,Z)\), due to the transitive relation \(less(X,Z)\gets less(X,Y),less(Y,Z)\), the term \(less(X,Z)\) is redundant. DERRL enables predefining such knowledge as axioms, penalizing models that violate them. We compare our framework with that of Neural Logic Reinforcement Learning (NLRL) [25] which uses FOL to represent reinforcement learning policies and is based on Differentiable Inductive Logic Programming (\(\partial\)ILP) [10]. Much like DERRL, NLRL uses policy gradients to train a differentiable ILP module by assigning learnable weights to rules. The authors demonstrate interpretability and generalizability of policies to different problem configurations. We demonstrate DERRL's advantages over NLRL in terms of computational efficiency, policy accuracy, and semantic constraint enforcement.
**Contributions.****(i)** A neuro-symbolic framework **DERRL** for learning interpretable RL policies in on-policy, model-free settings, demonstrating their adaptability to environmental changes; **(ii)** A differentiable relaxation of semantic refinement for guiding rule generation and constraining the hypothesis space.
## 2 Related Works
**Integrating Symbolic Planning and RL.** Recent research has sought to merge symbolic planning with deep reinforcement learning (DRL) to improve data efficiency and performance, as seen in works like PEORL [55], RePReL [30], and SDRL [36]. These approaches aim to integrate a high-level planner that suggests sub-goals to be executed by a low-level DRL model, thus relying on pre-defined environment dynamics, such as high-level action schemas with pre and postconditions. DERRL differs from planning-based methods as it is purely a RL approach, i.e. it does not have access to precise handcrafted action schemas or the
reward function, and instead learns suitable control policies through trial-and-error interactions with the environment. Therefore, we only compare DERRL with other RL baselines. **Explainable RL.** Previous studies on interpretable RL have utilized decision trees for their ease of interpretation. Standard decision trees consist of nested if-then rules, are non-differentiable, and cannot be trained using gradient descent methods. The online nature of RL problems, combined with the non-stationarity introduced by an improving policy, presents additional challenges for decision trees as the agent interacts with the environment. One straightforward but inefficient solution is to re-learn the decision trees from scratch [8]. More recently, researchers have explored the use of differentiable functions in decision trees [12]. Differentiable Decision Trees have also been adapted for the RL framework [44, 34], although their performance does not match that of deep neural networks. **Concurrent Works.** Our work on integrating differentiable logic programming into RL is concurrent with efforts such as NLRL [25] and dNL-ILP [41]. While dNL-ILP lacks goal generalization, we use the recent NLRL as our baseline. Recent research has also employed Graph Neural Networks [29] to capture relational representations [13, 14] with applications to DRL [24] demonstrating zero-shot generalization to varying problem sizes. DERRL additionally learns interpretable policies. **Adjusting Language Bias** The possible hypothesis space expands exponentially with input space, necessitating user adjustments to language biases based on domain knowledge. Relational learning systems use declarative bias via semantic refinement [4]. Differentiable rule learning methods like \(\partial\)ILP [10] and NLRL [25] use rule templates to limit rule body atoms to 2. However, these methods overlook background knowledge and face redundancies. DERRL mitigates redundancies and shrinks the search space through a differentiable relaxation of semantic refinement.
## 3 Preliminaries
### Logic Programming:
Logic Programming [35] rules are written as **clauses** of the form \(\alpha\leftarrow\alpha_{1},\ldots,\alpha_{m}\) composed of a **head** atom \(\alpha\) and a **body**\(\alpha_{1},\ldots,\alpha_{m}\). These clauses are defined using the standard _if-then_ rules, wherein, if the body is satisfied, the head is true. Each **atom** is a tuple \(p(v_{1},\ldots,v_{n})\) where \(p\) is a n-ary **predicate** and \(v_{1},\ldots,v_{n}\) are either **variables** or **constants**. A **ground** atom is one that contains only constants. A predicate can either be **extensional** when it is defined by a set of ground atoms, or **target** (intensional) when it is defined by a set of clauses.
An **alphabet**\(\mathcal{L}\) is defined by the tuple \(\mathcal{L}:=(\mathrm{P}_{\mathrm{tar}},\mathrm{P}_{\mathrm{ext}},arity,C,V)\) where, \(\mathrm{P}_{\mathrm{tar}}\) is a set of target predicates, \(\mathrm{P}_{\mathrm{ext}}\) is a set of extensional predicates, \(arity:\mathrm{P}_{\mathrm{ext}}\cup\mathrm{P}_{\mathrm{tar}}\mapsto\mathbb{N}\) is the number of arguments (variables or constants) that the predicate can take, \(C\) is a set of constants and \(V\) is a set of variables allowed in the clause. For the blocks world game in Figure 1, \(\mathrm{P}_{\mathrm{tar}}=\{move/2\}\), \(\mathrm{P}_{\mathrm{ext}}=\{top/1,on/2,\,goal\_on/2,isFloor/1\},V=\{X,Y,Z\}\), \(C=\{a,b,c\}\).
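As a purely illustrative aside (not part of the original formalism), the alphabet of the blocks world example and a clause such as \(move(X,Y)\gets top(X),on(X,Z),isFloor(Y)\) can be written down as plain Python data structures; the representation below is an assumption made for illustration, not DERRL's internal encoding.

```python
# Illustrative encoding of an alphabet and a clause as plain Python data.
# An atom is (predicate, args); a clause is (head_atom, [body_atoms]).
alphabet = {
    "target_predicates": {"move": 2},
    "extensional_predicates": {"top": 1, "on": 2, "goal_on": 2, "isFloor": 1},
    "variables": {"X", "Y", "Z"},
    "constants": {"a", "b", "c", "floor"},
}

# move(X, Y) <- top(X), on(X, Z), isFloor(Y)
clause = (
    ("move", ("X", "Y")),
    [("top", ("X",)), ("on", ("X", "Z")), ("isFloor", ("Y",))],
)

def fmt(atom):
    pred, args = atom
    return f"{pred}({', '.join(args)})"

head, body = clause
print(fmt(head), "<-", ", ".join(fmt(a) for a in body))
# move(X, Y) <- top(X), on(X, Z), isFloor(Y)
```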
### Relational Markov Decision Process:
We model our problem as a Relational MDP (RMDP) given by the tuple \(\mathcal{E}:=(S,\mathcal{B},A,\delta,r,\gamma)\) which is just like a regular MDP, but with relational states and actions. Here, \(S\) is a set of states, where each state is represented as a set of ground atoms consisting of predicates in \(\mathrm{P}_{\mathrm{ext}}\) and constants in \(C\); \(\mathcal{B}\) is the background knowledge also represented in form of ground atoms consisting of predicates and constants, but unlike the state, it remains fixed over the episode; \(A\) is a set of actions consisting of ground atoms from predicates in \(\mathrm{P}_{\mathrm{tar}}\) and constants in \(C\); \(\delta:S\times A\mapsto S\) is an **unknown** transition function; \(r:S\times A\mapsto R\) is an **unknown** real-valued reward function; \(\gamma\) is the discount factor. In the blocks world game (Figure 1), for the tuple \(((a,b,c))\)4, the initial state \(s_{0}\) and \(\mathcal{B}\) are \(\{top(c),on(c,b),on(b,a),on(a,floor)\}\) and \(\{isFloor(floor),goal\_on(a,b)\}\), respectively. The actions are \(move(X,Y)\) where variables \(X\) and \(Y\) can be substituted with constants in \(C\). Although underlying models often use logical transition and reward rules [27], our approach is model-free, so we ignore them here.
Footnote 4: Here, the outer tuple denotes stacks and the inner tuples denote the blocks in the stack. For e.g., \(((a,b),(c,d))\) has two stacks: \((a,b)\) is stack \(1\) and \((c,d)\) is stack \(2\).
### Problem Statement:
**Given**, a tuple \((\mathcal{L},\mathcal{E})\) where \(\mathcal{L}\) is an alphabet, and \(\mathcal{E}\) is an RMDP;
**Find** an optimal policy \(\pi_{\theta}:S\cup\mathcal{B}\mapsto A\) as a set of clauses (also called **rules**) that maximizes the expected reward \(\mathbb{E}_{\tau\sim\pi_{\theta}}\big{[}R_{\tau}\big{]}\), where \(R_{\tau}:=\sum_{k=t+1}^{T-1}\gamma^{k-t-1}R_{k}\). Here, an episode trajectory is denoted by \(\tau\).
More formally, the rules are selected from the hypothesis space, which is the set of all possible clauses. The head atom of each such rule is an action and the body is the definition of the action. As shown in Figure 1, a rule for \(move(X,Y)\) in the blocks world environment is given as \(move(X,Y)\gets top(X),on(X,Z)\), \(isFloor(Y)\), which states that \(move(X,Y)\) is triggered when the rule definition (i.e., the body) is satisfied. Thus, if the policy selects the action \(move(c,floor)\), one can quickly inspect the body to find out **how** that action was taken. The rules are _discriminative_ (i.e., they help select the correct action by distinguishing it from alternative actions) and together provide an interpretation of the policy. A set of rules is learned for each action. Once trained, the rules for actions do not change, and the rule body decides which action should be triggered at each time-step. In what follows, we provide a detailed explanation of rule generation and inference for each time-step of an episode. For simplicity, we drop the time-step notation (e.g., the state at the \(t^{th}\) time-step, \(s_{t}\), is now written \(s\)).
## 4 Proposed Approach
Consider an alphabet \(\mathcal{L}\) where \(\mathrm{P}_{\mathrm{tar}}=\{r/0,s/0\},\mathrm{P}_{\mathrm{ext}}=\{p/1,q/2\},V= \{X,Y\}\), \(C=\{a,b\}\). The set of all ground atoms \(G\) formed from the predicates in \(\mathrm{P}_{\mathrm{ext}}\) and
constants in \(C\) is \(\{p(a),p(b),q(a,a),q(a,b),q(b,a),q(b,b)\}\). We represent each ground atom \(g_{j}\in G\) along with its index \(j\) in Table 1.
Recall that both the state \(s\) and the background knowledge \(\mathcal{B}\) are represented using ground atoms. Given a state \(s=\{p(a),q(a,a),q(a,b)\}\) at each time-step, and an empty background knowledge \(\mathcal{B}\), we encode it to a state vector \(\vec{v}\), such that each element \(v_{j}=1\) if \(g_{j}\in\{s,\mathcal{B}\}\) (i.e., if the current state \(s\) or the background knowledge \(\mathcal{B}\) contains the ground atom \(g_{j}\)), else \(0\). Let us now consider the set of all atoms \(K\) formed from the predicates in \(\mathrm{P}_{\mathrm{ext}}\) and variables in \(V\) (instead of the constants in \(C\)). Table 2 lists the atoms \(k_{j}\in K\) and the corresponding \(\vec{v}\).
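The following minimal Python sketch reproduces this encoding step for the toy example above; the only assumption is the enumeration order of the ground atoms, which here matches the indices of Table 1.

```python
# Sketch: enumerate all ground atoms G from P_ext and C, then encode the
# current state (plus background knowledge) as a binary vector v.
from itertools import product

P_ext = {"p": 1, "q": 2}      # predicate -> arity
C = ["a", "b"]                # constants

# Fixed enumeration order of ground atoms; this order defines the indices j.
G = [(pred, args) for pred, arity in P_ext.items()
     for args in product(C, repeat=arity)]
# G == [('p',('a',)), ('p',('b',)), ('q',('a','a')), ('q',('a','b')),
#       ('q',('b','a')), ('q',('b','b'))]

state = {("p", ("a",)), ("q", ("a", "a")), ("q", ("a", "b"))}
background = set()            # empty background knowledge B in this toy example

v = [1 if g in state | background else 0 for g in G]
print(v)                      # [1, 0, 1, 1, 0, 0], matching Table 1
```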
We represent rules using rule vectors. As shown in Table 2, the rule vector for each action \(i\in A\) is given as \(\vec{b}^{i}\in\{0,1\}^{m}\), where \(m=\mid K\mid\) (i.e. the cardinality of the set of all atoms formed from the predicates in \(\mathrm{P}_{\mathrm{ext}}\) and variables in \(V\)). Here, \(b^{i}_{j}=1\), if the \(j^{th}\) atom is in the body of the \(i^{th}\) rule. From Table 2, \(\vec{b}^{r}=[0,1,0,0,1,0]^{\top}\) corresponds to the rule \(r\leftarrow\texttt{p}(Y),\texttt{q}(Y,X)\).
We impose the Object Identity (OI) assumption [28] which states that during grounding and unification, distinct variables must be mapped to distinct constants. For instance, ground rules for \(r\gets p(Y),q(Y,X)\) are \(r\gets p(b),q(b,a)\) and \(r\gets p(a),q(a,b)\) under substitutions \(\phi_{0}=\{a/X,b/Y\}\) and \(\phi_{1}=\{b/X,a/Y\}\), respectively, but a substitution \(\phi_{2}=\{a/X,a/Y\}\) is not allowed. Without loss of generality, one can model nullary predicates and negated atoms, by simply including additional dimensions (corresponding to atoms in \(K\)) in the vector \(\vec{b}^{i}\).
The DERRL framework learns a rule vector \(\vec{b}^{i}\) for each action \(i\in A\) by associating it with a trainable weight vector \(\vec{w}^{i}\). Each element \(w^{i}_{j}\in\vec{w}^{i}\) indicates the membership of the corresponding atom in the rule definition (i.e., if the weight of the atom is high, it is more likely to belong to the rule definition). Given the state vector \(\vec{v}\), action probabilities \(\pi_{\theta}(i\mid s,\mathcal{B})\) are calculated by performing a fuzzy conjunction on the rules (Section 4.2). The whole framework is trained end-to-end using the REINFORCE algorithm [52], with the loss function given as
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \(j\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & & \\ \(g_{j}\) & \(\mathbf{p}(\mathbf{a})\) & \(p(b)\) & \(\mathbf{q}(\mathbf{a},\mathbf{a})\) & \(\mathbf{q}(\mathbf{a},\mathbf{b})\) & \(q(b,a)\) & \(q(b,b)\) & \(s=\{p(a),q(a,a),q(a,b)\}\) \\ \(v_{j}\) & \(\mathbf{1}\) & \(0\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(0\) & \(0\) & \(\vec{v}=[1,0,1,1,0,0]\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table of all ground atoms \(G\) and their indices \(j\).
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \(j\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & & \\ \(k_{j}\) & \(p(X)\) & \(p(Y)\) & \(q(X,X)\) & \(q(X,Y)\) & \(q(Y,X)\) & \(q(Y,Y)\) & \\ \hline \(b^{r}_{j}\) & \(0\) & \(1\) & \(0\) & \(0\) & \(1\) & \(0\) & \(r\gets p(Y),q(Y,X)\) \\ \(P^{r}_{j}\) & \(0.1\) & \(0.8\) & \(0.3\) & \(0.4\) & \(0.7\) & \(0.2\) & \(\vec{w}^{r}=[0.8,0.7]^{\top}\) \\ \hline \(b^{s}_{j}\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(s\gets p(X),q(Y,Y)\) \\ \(P^{s}_{j}\) & \(0.6\) & \(0.3\) & \(0.4\) & \(0.2\) & \(0.1\) & \(0.9\) & \(\vec{w}^{s}=[0.6,0.9]^{\top}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: All atoms \(K\), their indices \(j\), the generated rule vectors \(\vec{b}^{r},\vec{b}^{s}\), and corresponding probability vectors \(\vec{P}^{r},\vec{P}^{s}\) for target actions \(\mathrm{P}_{\mathrm{tar}}=r/0,s/0\).
\(J(\pi_{\theta})=-\mathbb{E}_{\tau\sim\pi_{\theta}}\big{[}R_{\tau}\big{]}\). Here, \(R_{\tau}\) is the discounted sum of rewards over trajectory \(\tau\), and \(\theta\) is the set of trainable parameters.
Algorithm 1 summarizes the DERRL framework. The two main components of DERRL are **(i)** the Rule Generator (Section 4.1), which at every time-step \(t\) and for each action \(i\in A\) generates a rule vector \(\mathbf{b}^{i}\) and a weight vector \(\mathbf{w}^{i}\); **(ii)** the Forward-chaining Inference (Section 4.2), which takes the generated rule vectors for all actions \(\{\mathbf{b}^{i}\}_{i=1}^{|A|}\), the corresponding weight vectors \(\{\mathbf{w}^{i}\}_{i=1}^{|A|}\), and the state valuation vector \(\mathbf{v}\) for the \(t^{th}\) time-step, and returns the action probabilities \(\pi_{\theta}(.\mid s,\mathcal{B})\). Note that the rule generator parameters \(\theta\) are trained by calculating the gradients of the loss function with respect to the weight vectors \(\mathbf{w}^{i}\).
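For readers who prefer code, the following PyTorch sketch shows the REINFORCE update that drives training, assuming the per-step log-probabilities of the chosen actions and the episode rewards have already been collected; the rule generation and inference steps of Sections 4.1 and 4.2 are abstracted behind a dummy two-action policy, so this is only a sketch of the optimization, not of the full framework.

```python
# Sketch of the REINFORCE update on one collected episode. In DERRL the
# log-probabilities would come from the differentiable inference step; here a
# dummy 2-action policy stands in for the rule generator + inference.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: list of scalar tensors log pi(a_t | s_t, B); rewards: list of floats."""
    returns, g = [], 0.0
    for r in reversed(rewards):               # discounted return-to-go R_tau
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # J(pi_theta) = -E[R_tau]; its policy-gradient surrogate:
    return -(torch.stack(log_probs) * returns).sum()

theta = torch.nn.Parameter(torch.zeros(2))     # stands in for the generator parameters
optimizer = torch.optim.Adam([theta], lr=1e-2)

log_probs, rewards = [], []
for _ in range(5):                             # one 5-step episode
    dist = torch.distributions.Categorical(logits=theta)
    a = dist.sample()
    log_probs.append(dist.log_prob(a))
    rewards.append(1.0 if a.item() == 0 else -0.02)

loss = reinforce_loss(log_probs, rewards)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```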
### Rule Generation
The rule generator \(\mathcal{R}_{\theta}:i\mapsto\mathbf{b}^{i},\mathbf{w}^{i}\) is a parameterized network that maps each action (index) \(i\) to a rule vector \(\mathbf{b}^{i}\) and a weight vector \(\mathbf{w}^{i}\). Here, \(b^{i}_{j}=1\) indicates that the \(j^{th}\) atom is in the rule body. The rule generator outputs a probability vector \(\mathbf{P}^{i}\) where \(P^{i}_{j}\) represents the probability of the \(j^{th}\) atom in \(K\) belonging to the rule body. We use the Gumbel-max trick [23] on \(\mathbf{P}^{i}\) to sample the binary vector \(\mathbf{b}^{i}\).
\[b^{i}_{j}=\arg\max(\log(P^{i}_{j})+u_{0},\log(1-P^{i}_{j})+u_{1})\text{ where }u\sim\text{Gumbel}(0,1)\]
Here, \(\text{Gumbel}(0,1)\) is the standard Gumbel distribution given by the probability density function \(f(x)=e^{-(x+e^{-x})}\). During evaluation, we use \(\arg\max(.)\) operation without sampling. From \(\mathbf{P}^{i}\), we also obtain the weight vector \(\mathbf{w}^{i}\in\mathbb{R}^{\|\mathbf{b}^{i}\|_{1}}\) comprising the probabilities of only those atoms which have \(b^{i}_{j}=1\). From Table 2, the generated rule vector \(\mathbf{b}^{r}=[0,1,0,0,1,0]^{\top}\), the probability vector \(\mathbf{P}^{r}=[0.1,0.8,0.3,0.4,0.7,0.2]^{\top}\), and the corresponding weight vector \(\mathbf{w}^{r}=[0.8,0.7]^{\top}\).
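A minimal numpy sketch of this sampling step is given below, using the toy probability vector \(\mathbf{P}^{r}\) from Table 2; the choice of returning 1 when the "atom in the body" option wins is the only convention assumed here.

```python
# Sketch: sample a binary rule vector b from the membership probabilities P via
# the Gumbel-max trick; at evaluation time the argmax is taken without noise.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([0.1, 0.8, 0.3, 0.4, 0.7, 0.2])   # P^r from Table 2

def sample_rule_vector(P, rng, hard_argmax=False):
    eps = 1e-9
    # Row 0: "atom in body" option, row 1: "atom not in body" option.
    logits = np.stack([np.log(P + eps), np.log(1.0 - P + eps)])
    if not hard_argmax:
        logits = logits + rng.gumbel(size=logits.shape)   # u ~ Gumbel(0, 1)
    return (logits[0] > logits[1]).astype(int)            # b_j = 1 if "in body" wins

b = sample_rule_vector(P, rng)
w = P[b == 1]        # weight vector: probabilities of the atoms kept in the body
print(b, w)
# With hard_argmax=True this yields [0, 1, 0, 0, 1, 0], i.e. r <- p(Y), q(Y,X).
```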
### Inference
\begin{tabular}{c c c c c c c} \hline \hline \(i\) & \(\mathbf{X}^{i}\) & \(\mathbf{Y}^{i}\) & \(\mathbf{w}^{i}\) & \(\mathbf{z}^{i}\) & \(\mathcal{F}^{i}\) & \(\pi_{\theta}(i\mid s,\mathcal{B})\) \\ \hline \(r\gets p(Y),q(Y,X)\) & \(\begin{bmatrix}0,3\\ 1,4\end{bmatrix}\) & \(\begin{bmatrix}1,1\\ 0,0\end{bmatrix}\) & \(\begin{bmatrix}0.8\\ 0.7\end{bmatrix}\) & \(\begin{bmatrix}0.5\\ 0\end{bmatrix}\) & 0.5 & 0.62 \\ \hline \(s\gets p(X),q(Y,Y)\) & \(\begin{bmatrix}0,5\\ 1,2\end{bmatrix}\) & \(\begin{bmatrix}1,0\\ 0,1\end{bmatrix}\) & \(\begin{bmatrix}0.6\\ 0.9\end{bmatrix}\) & \(\begin{bmatrix}0\\ 0\end{bmatrix}\) & 0 & 0.38 \\ \hline \hline \end{tabular}
Consider the following rules generated by the rule generator. Each rule is passed through a substitution \(\phi:V\mapsto C\) to produce ground rules. Let, \(\mathbf{X}^{i}\in\mathbb{Z}_{\geq 0}^{N(\phi)\times\|\mathbf{b}^{i}\|_{1}}\) be the matrix representation of the ground rules. Here, \(\mathbb{Z}_{\geq 0}\) and \(N(\tilde{\phi})\) are the set of non-negative integers and the number of possible substitutions, respectively. Each row in \(\mathbf{X}^{i}\) is a vector of ground atom indices that belong to the ground rules. Using a substitution \(\phi=\{b/X,a/Y\}\), we obtain the ground rule \(r\gets p(a),q(a,b)\). From Table 1, this rule definition can be written as the vector \([0,3]\) (i.e., indices of \(p(a)\) and \(q(a,b)\) are 0, 3, respectively). Similarly, the substitution \(\phi=\{a/X,b/Y\}\) gives us \(r\gets p(b),q(b,a)\), and the vector \([1,4]\).
Next, the value(.) operation takes each element \(\mathbf{X}_{j}^{i}\) and returns its state value \(v_{j}\). From Table 1, \(\mathbf{Y}^{r}=\text{value}(\mathbf{X}^{r})=\text{value}(\begin{bmatrix}0,3\\ 1,4\end{bmatrix})=\begin{bmatrix}v_{0},v_{3}\\ v_{1},v_{4}\end{bmatrix}=\begin{bmatrix}1,1\\ 0,0\end{bmatrix}\). The row vectors of \(\mathbf{Y}^{i}=[\boldsymbol{y}_{1}^{i},\ldots,\boldsymbol{y}_{N(\phi)}^{i}] \in\mathbb{R}^{N(\phi)\times\|\boldsymbol{b}^{i}\|_{1}}\) can be regarded as truth values of the grounded rule (i.e., for each substitution \(\phi\)), based on \(\{s,\mathcal{B}\}\). If the rule definition is not satisfied, the corresponding row vectors will have sparse entries. To ensure differentiability, we use fuzzy norms for our rules.
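In code, the value(.) operation is a simple gather over the state vector; the sketch below reproduces \(\mathbf{Y}^{r}\) for the two OI substitutions of the running example (the index matrix is taken from the table above).

```python
# Sketch: value(.) replaces every ground-atom index in X with its truth value
# taken from the state vector v (Table 1).
import numpy as np

v = np.array([1, 0, 1, 1, 0, 0])      # state vector from Table 1

# Ground rules of r <- p(Y), q(Y,X) under the two OI substitutions:
#   {b/X, a/Y} -> r <- p(a), q(a,b)  -> atom indices [0, 3]
#   {a/X, b/Y} -> r <- p(b), q(b,a)  -> atom indices [1, 4]
X_r = np.array([[0, 3],
                [1, 4]])

Y_r = v[X_r]                           # value(X^r)
print(Y_r)                             # [[1 1]
                                       #  [0 0]]
```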
#### Fuzzy Conjunction Operators.
Fuzzy norms integrate logic reasoning with deep learning by approximating the truth values of predicates [10, 37]. Fuzzy conjunction operators \(*:[0,1]^{\|\boldsymbol{b}^{i}\|_{1}}\mapsto[0,1]\) come in various forms, such as the Gödel t-norm and the Product t-norm (refer Section 11). We use the Lukasiewicz t-norm (\(\top_{\text{Luk}}(a,b):=\max\{0,a+b-1\}\)) to compute the action values for each rule5. To encourage the rule generator to generate more precise rules with higher probability, we calculate a valuation vector by weighting each row \(\boldsymbol{y}_{k}^{i}\in\mathbb{R}^{\|\boldsymbol{b}^{i}\|_{1}}\) with the weight vector \(\boldsymbol{w}^{i}\in\mathbb{R}^{\|\boldsymbol{b}^{i}\|_{1}}\), and using the Lukasiewicz operator as \(z_{k}^{i}=\max(0,\langle\boldsymbol{y}_{k}^{i},\boldsymbol{w}^{i}\rangle-\mid\boldsymbol{w}^{i}\mid+1)\). Intuitively, the inner product \(\langle\boldsymbol{y}_{k}^{i},\boldsymbol{w}^{i}\rangle\) is a weighted sum over all atoms in the rule body that are true in \(s\cup\mathcal{B}\). This is akin to performing \((a+b)\) in the t-norm operator. For \(\boldsymbol{y}_{0}^{r}=[1,1]^{\top},\boldsymbol{w}^{r}=[0.8,0.7]\):
Footnote 5: More generally, given a vector \(\boldsymbol{y}\in[0,1]^{n}\), Lukasiewicz t-norm \(\top_{\text{Luk}}\boldsymbol{y}:=\max(0,\langle\boldsymbol{y},\boldsymbol{1} \rangle-n+1)\). For the proof, refer Section 11.
\[z_{0}^{r}=\max(0,\langle\begin{bmatrix}1\\ 1\end{bmatrix},\begin{bmatrix}0.8\\ 0.7\end{bmatrix}\rangle-1)=0.5\]
With multiple substitutions (or groundings) for a generated rule, we find the maximum valuation as \(\mathcal{F}^{i}=\max(\boldsymbol{z}^{i})\). The final action probability is calculated as \(\pi_{\theta}(i\mid s,\mathcal{B})=\text{softmax}(\mathcal{F}^{i})\). Note, that if the generated rule is not satisfied for any substitution (i.e., has a sparse row vector in the matrix \(\mathbf{Y}^{i}\)), the valuation of the generated rule is lower (for e.g., from the above table \(\mathcal{F}^{s}<\mathcal{F}^{r}\)).
#### 3.2.3 Multiple rules for a single action.
We generalize DERRL to learn policies with multiple rules for each action, allowing it to switch between rules based on input (for e.g. in the blocks world game, the "move" action uses two rules executed at different steps depending on the goal blocks' position). We allow multiple rule networks per action, adjusting the final computation step to determine action probabilities based on the best-satisfied rule. Given \(\mathcal{F}_{1}^{i}\) and \(\mathcal{F}_{2}^{i}\) for two different rules for the same action, we first compute \(\tilde{\mathcal{F}}^{i}=\max(\mathcal{F}_{1}^{i},\mathcal{F}_{2}^{i})\) to determine which rule is more appropriate at a given time-step. Consider two different rules generated for action \(r\) with arbitrary \(\mathbf{Y}^{i}\):
```
Input: Alphabet \(\mathcal{L}\), RMDP \(\mathcal{E}\)
Output: (set of) rules that encode the policy \(\pi_{\theta}\)
Initialize rule generator parameters \(\theta\)
for each episode do
    for \(t=0\) to \(T-1\) do
        \(\mathbf{v}=\text{encode}(s,\mathcal{B})\)                                        \(\triangleright\) state vector
        for each action \(i\) do
            \(\mathbf{b}^{i},\mathbf{w}^{i}\sim\mathcal{R}_{\theta}(i)\)                   \(\triangleright\) Rule Generation (Section 4.1)
        end for
        \(\pi_{\theta}(.\mid s,\mathcal{B})=Inference(\mathbf{v},\{\mathbf{b}^{i}\}_{i=1}^{|A|},\{\mathbf{w}^{i}\}_{i=1}^{|A|})\)   \(\triangleright\) Inference (Section 4.2)
        \(a\sim\pi_{\theta}(.\mid s,\mathcal{B})\)
        \(s^{\prime}\leftarrow\delta(s,a)\); \(R_{t}\gets r(s,a)\)
    end for
    \(R_{\tau}\leftarrow\sum_{k=t+1}^{T-1}\gamma^{k-t-1}R_{k}\)
    \(\theta\leftarrow\theta-\eta\nabla_{\theta}\mathbb{E}_{\tau\sim\pi_{\theta}}[R_{\tau}]\)
end for
```
**Algorithm 1** Deep Explainable Relational Reinforcement Learning (DERRL)
Here, \(\tilde{\mathcal{F}}^{r}=\max(0.5,0)=0.5\) is the valuation for rule \(r\). Intuitively, depending on the current state \(s\) and background knowledge \(\mathcal{B}\), one of the rules will be more appropriate (i.e., lower sparsity in rows of \(\mathbf{Y}^{i}\)) than the others, prompting the policy to switch to that rule for decision making6.
Footnote 6: This assumes a specified upper bound on the number of rules for each action, similar to selecting the number of clusters in a clustering algorithm [54].
### Semantic Constraints
The set of possible rules to consider grows exponentially with the number of predicates and their arity. While traditional relational learning systems have used declarative bias in form of semantic refinement [4], prior works in differentiable rule learning [10, 25] employ rule templates to restrict the hypothesis space (e.g. rules of size 2). However, these methods frequently encounter redundancies. For example, rules \(r\gets less(X,Y),less(Y,Z),less(X,Z)\) and \(s\gets equal(X,Y),equal(Y,X)\) exhibit transitive and symmetric relations, respectively, making some atoms redundant. A rule \(r\) is redundant w.r.t. a constraint \(h\gets b_{1},...,b_{n}\) if the rule \(false\gets h,b_{1},...,b_{n}\) subsumes the rule \(r\). To avoid redundancies, generated rule vectors with \(b_{j}^{i}=1\) for both atoms
\(equal(X,Y)\) and \(equal(Y,X)\) should be penalized. To this end, we propose a differentiable relaxation of semantic refinement by applying a supervised loss on probability vectors \(\{\mathbf{P}^{i}\}_{i=1}^{|A|}\). We declare semantic constraints \(\mathcal{S}_{c}\) as axioms which can either be a relation (symmetric or transitive), or some background fact (like \(false\gets on(X,Y),on(Y,X)\)). Then we calculate the semantic loss as \(\mathcal{L}_{sem}=\sum_{x\in\mathcal{S}_{c}}\sum_{i\in A}\prod_{j\in x}P_{j}^ {i}\).
Here, the outer summation is over each semantic constraint \(x\in\mathcal{S}_{c}\), and the inner summation is over each generated rule \(i\in A\). The product is over the probability of each atom (with index \(j\)) in the body of axiom \(x\). For instance, given a single axiom \(false\gets p(Y),q(Y,X)\), the term contributed by the generated rule \(r\gets p(Y),q(Y,X)\) (from Table 2) is \(P_{1}^{r}\times P_{4}^{r}=0.56\). Here, the penalty is high because, according to the given constraint, \(p(Y)\) and \(q(Y,X)\) should not appear together in the body of the rule. Intuitively, the loss is highest if the membership probabilities of both atoms are high, warranting a penalization. \(\mathcal{L}_{sem}\) is summed over the entire episode and the final loss is given as \(\bar{J}(\pi_{\theta})=J(\pi_{\theta})+\lambda_{sem}\mathcal{L}_{sem}\). Here, \(\lambda_{sem}\) is a regularization term. See Appendix 8 for constraints in all environments.
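A minimal numpy sketch of this penalty is shown below, assuming each constraint is stored as the list of indices (in \(K\)) of atoms that must not co-occur in a rule body; it reproduces the 0.56 term of the example, to which the second rule \(s\) adds a small further contribution.

```python
# Sketch: differentiable semantic penalty. Each constraint lists the indices
# (in K) of atoms that should not appear together in a rule body; the penalty
# is the product of their membership probabilities, summed over all rules.
import numpy as np

# Membership probabilities P^i for the two generated rules (Table 2).
P = {"r": np.array([0.1, 0.8, 0.3, 0.4, 0.7, 0.2]),
     "s": np.array([0.6, 0.3, 0.4, 0.2, 0.1, 0.9])}

# Single axiom false <- p(Y), q(Y,X): atoms with indices 1 and 4 must not
# co-occur in the body of any rule.
constraints = [[1, 4]]

def semantic_loss(P, constraints):
    return sum(np.prod(P_i[idx]) for idx in constraints for P_i in P.values())

print(semantic_loss(P, constraints))
# 0.8*0.7 (rule r, the 0.56 term from the text) + 0.3*0.1 (rule s) = 0.59
```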
## 5 Experiments
Through our experiments, we aim to answer the following questions: (1) Can the proposed approach learn interpretable policies while performing on par with neural-based baselines? (Section 6.1); (2) Are the learned rules agnostic to modifications in the environment? (Section 6.2); (3) How efficient and scalable is the proposed approach compared to the current state-of-the-art NLRL? (Section 6.3)
### Experimental Setup
**The Countdown Game.** The agent manipulates a stack of numbers and an initial accumulated value \(acc(X)\) to match a target number \(goal(X)\) by applying operations like addition (\(add\)), subtraction (\(sub\)), or no operation (\(null\)). The stack comprises of the top number (\(curr(X)\)), number below it (\(next(X,Y)\)), and bottom-most number (\(last(X)\)). From Figure 1, the state \(s_{t}\) includes the stack, accumulated number, and goal number. Operations are performed between the accumulated value and the top number of the stack7. The background knowledge \(\mathcal{B}\) comprises the target number8, and atoms of the form \(less(X,Y)\) which denote that number \(X\) is less than \(Y\). A reward of \(r=1\) is given when the target and accumulated values match at the end of the episode, otherwise \(r=-\frac{|goal-acc|}{N_{1}}\) where \(N_{1}\) is a normalizing constant. An initial range of numbers \([-4,6]\) and stack of length\(=2\) is used for training. The learned models are tested for generalization on the following tasks (i) dynamic stack lengths of \(\{3,4,5\}\); (ii) held-out target unseen during training; (iii) held-out initial stack sequences. We also train a stochastic game version with 10% probability of altering an action to null.
Footnote 7: \(add\): acc \(+=\) top, \(sub\): acc \(-=\) top, \(null\): acc
Footnote 8: \(goal(X)\) is provided as background since it does not change during the episode.
**Blocks World Manipulation.** Given an initial configuration of blocks, the goal is to put a specified block atop another specified block (Figure 1). Stacks are represented using predicates: \(top(X)\) means that block \(X\) is the top block, \(on(X,Y)\) means that block \(X\) is on top of block \(Y\). The actions are \(move(X,Y)\) with \(X=\{a,b,c\}\) and \(Y=\{a,b,c,floor\}\). A reward \(r=1\) is provided if the task is achieved. To enforce optimal planning, we impose a penalty of \(r=-0.02\) for every action. Training includes a fixed number of blocks \(=3\) and a fixed goal - to stack block \(a\) on block \(b\)\((goal\_on(a,b))\). We train it with initial configurations: \(((a,b,c));((c,a,b));((a,c),(b));((b,c),(a))\). Here, each tuple is a stack. For generalization, we use variations like (i) held-out configuration unseen during training like \(((a,b),(c));((b,c,a));((b,a,c))\); (ii) dynamic (number of) blocks \(\{4,5\}\); (iii) dynamic (number of) stacks \(\{2,3,4\}\); (iv) unseen goals like \(goal\_on(b,a)\) and \(goal\_on(a,c)\).
**Gridworld.** The agent navigates a grid with obstacles to reach the goal (Figure 1). The agent can move vertically (\(up/down\)) or sideways (\(left/right\)). The state information consists of the current position of the agent \(curr(X,Y)\) where \(X\) and \(Y\) are the coordinates, and the compass direction of the target (_North, South, East, West, Northeast, Northwest, Southeast, Southwest_). The background information consists of target coordinates (\(target(X,Y)\)), obstacle coordinates (\(obs(X,Y)\)), and successor information \(succ(X,Y)\) where \(Y=X+1\). The action space is \(\{up,down,left,right\}\). The agent receives a reward of \(r=1\) for reaching the target, otherwise \(r=-\frac{\|position_{goal}-position_{agent}\|_{2}}{N_{2}}\). Here \(N_{2}\) is a normalizing constant. During training, a fixed size grid of \(3\times 3\) and \(5\times 5\) is used with the number of obstacles being fixed \(=2\). For generalization, we use the following variations: (i) dynamic (number of) obstacles \(\{3,4\}\); (ii) held-out (agent-goal) configurations. Unlike graph search algorithms like A\({}^{*}\) that assume access to the dynamics model, DERRL learns actions through exploration.
**Traffic.** We used the Simulation of Urban MObility (SUMO) traffic simulator [31] to simulate traffic flow, where intersections (3-way and 4-way) function as agents denoted by \(intersection(Y)\), and are connected by a network of 2-way lanes represented as \(between(X,Y,Z)\), indicating a connection between intersections \(Y\) and \(Z\) by lane \(X\). The goal is to minimize the traffic at the intersections, hence reward is the negative queue length at each intersection. Each agent is provided with the lane that has the highest traffic, labeled as \(highest(X)\) for lane \(X\), and is responsible for controlling the traffic lights for that lane, enabling them to turn the lights green (\(green(X)\)) for a specific lane \(X\). Therefore, 3 and 4-way intersections have an action space of size \(3,4\) respectively. Although only two models (one for all 3-way intersections and another for all 4-way intersections) suffice, we train each intersection independently to demonstrate the scalability of DERRL to multi-agent setups, and also for future developments in cooperative multi-agent setups [57, 21]. We train on a grid comprising 5 agents and transfer the learned rules to an 8-agent grid. We show the mean rewards of all agents in Table 3 - note, that the best possible reward \(\approx 0\).
**DoorKey Minigrid.** The agent task is to unlock a locked door (\(locked(Y)\),
\(door(Y)\)) and reach a goal (\(goal(Z)\)). Various colored keys (\(key(X)\)) are scattered
throughout the room, and the agent must select the key that matches the color of the door (\(samecolor(X,Y)\)) to unlock it. We use high-level actions. The agent is only allowed to carry one key at a time and can navigate to and pick up a key \(X\) using the \(pick(X)\) action if it is not carrying any keys (\(notcarrying\)), otherwise, it drops the key before picking the new one. The \(open(X)\) action enables it to unlock a door \(X\) if it carries the key that matches the door's color. The \(goto(X)\) action enables the agent to navigate to a specific object \(X\). The reward \(=1\) for successfully reaching the goal, else \(0\). The learned model is tested for generalization with additional doors and keys of varying colors.
We evaluate DERRL against the Neural Logic Reinforcement Learning (**NLRL**) baseline. We also compare with model-free DRL approaches that vary the deep learning module: (i) a Graph Convolution Network (**GCN**) [29], which performs well at relational learning [32] and is invariant to the number of nodes in the graph; (ii) a Multilayer Perceptron (**MLP**). Finally, we compare with a Random (**Random**) baseline in which the weights of the MLP are randomized, to set a lower bound on performance. We use a single-layer neural network for our rule generator (\(2m\) parameters). For the GCN and MLP baselines, we experimented with 2-layer networks (O(\(m^{3}\)) parameters).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Setup** & & **DERRL** & **NLRL** & **GCN** & **MLP** & **Random** \\ \hline \multirow{4}{*}{Countdown Game} & training & 0.98 & 0.95 & 0.98 & **1.00** & 0.30 \\ \cline{2-7} & dynamic stack & **0.98** & 0.95 & 0.95 & 0.54 & 0.38 \\ \cline{2-7} & held-out target & **0.98** & 0.85 & 0.95 & 0.35 & 0.33 \\ \cline{2-7} & held-out initial & 0.98 & 0.55 & **1.00** & 0.35 & 0.18 \\ \hline Countdown Game (stochastic) & training & 0.98 & 0.95 & 0.98 & **1.00** & 0.15 \\ \hline \multirow{5}{*}{Blocks World} & training & **0.97** & **0.97** & **0.97** & **0.97** & -0.18 \\ \cline{2-7} & held-out config. & **0.97** & 0.70 & 0.55 & 0.45 & -0.18 \\ \cline{2-7} & dynamic blocks & **0.92** & 0.51 & -0.20 & -0.21 & -0.22 \\ \cline{2-7} & dynamic stacks & **0.96** & 0.90 & 0.90 & 0.85 & -0.18 \\ \cline{2-7} & unseen goal & **0.96** & 0.45 & -0.18 & -0.18 & -0.18 \\ \hline \multirow{2}{*}{DoorKey Minigrid} & training & 0.80 & 0.45 & 0.75 & **0.90** & 0.10 \\ \cline{2-7} & dynamic keys/doors & **0.78** & 0.25 & 0.35 & 0.20 & 0.05 \\ \hline \multirow{2}{*}{Traffic} & training (5-agents) & **-0.76** & -0.91 & -0.90 & -0.95 & -1.54 \\ \cline{2-7} & 8-agents & **-1.02** & -1.28 & -1.45 & -1.75 & -2.17 \\ \hline \multirow{3}{*}{Gridworld Game} & training & 0.75 & 0.72 & 0.70 & **0.81** & 0.03 \\ \cline{2-7} & dynamic obstacles & **0.70** & 0.55 & 0.46 & 0.51 & -0.15 \\ \cline{2-7} & held-out config. & **0.81** & 0.70 & 0.17 & -0.61 & -0.70 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Generalization Scores (average rewards over 50 episodes across 3 runs) compare DERRL to other baselines. DERRL outperforms baselines, including the state-of-the-art NLRL. In Traffic, the mean reward for all agents is reported.
## 6 Results
### Interpretation of policies (Q1)
In this section, we provide interpretations of the learned rules in each training environment, as shown in Figure 1. **The Countdown Game.** The policy selects \(add\) action when \(acc(X)<goal(Y)\), \(sub\) action when \(acc(X)>goal(Y)\), and null action when both are equal. **Blocks World Manipulation.** The policy learns two rules for \(move(X,Y)\). Given a goal to put block \(a\) atop block \(b\), the first rule is applicable when at least one of the blocks is not the top block. Hence, the policy learns to unstack the blocks - the top block \(X\) of the stack is moved to the floor \(Y\). The second rule is applied when both \(a\) and \(b\) are at the top. **Traffic.** The general rule for each intersection is \(green(X)\gets highest(X),between(X,Y,Z),intersection(Y),intersection(Z)\). Intuitively, the lights corresponding to the lane with the highest traffic \(X\), connecting intersections \(Y\) and \(Z\), are turned green. **DoorKey Minigrid.** The learned rule for action \(pickup(X)\) tells the agent to pickup the key \(X\) that matches the color of the door \(Y\), provided that it is not carrying any other items (\(notcarrying\)). Similarly, for \(open(X)\), the learned rule states that the agent can unlock a locked door \(X\) using the key \(Y\) only if the colors of the door and the key match. The \(goto(X)\) action directs the agent to navigate to the goal object \(X\) when the door is \(unlocked\). **Gridworld Game.** In this setting, the policy learns two rules for each action. The first is used for navigation to the target, such as moving \(up\) if the target is to the north (or northeast and northwest) of the grid. The second helps navigate around obstacles, e.g. move \(up\) if the obstacle (given by \((Z,Y)\)) is to the immediate right of the agent (given by \((X,Y)\)). However, the policy may not follow the shortest path9 or have consistent traversal strategies, resulting in varied rules for different instances without performance loss.
Footnote 9: When the target is to the southeast, and the agent encounters an obstacle to its right, it will travel north (\(up\)) rather than south (\(down\)).
DERRL also bears similarities with Program Synthesis [48, 3, 20], which involves finding a program that meets user specifications and constraints. Learned policies can be rewritten as programs (see Appendix 10). For example, a program to solve the countdown game involves operating on current and accumulated values (using \(add,sub,null\) operations) until accumulated value = goal value.
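To make the correspondence with program synthesis concrete, the sketch below renders the countdown-game policy described above as a short program. This is an illustrative Python rendering, not the authors' implementation; the environment update assumed here (adding or subtracting the current value from the accumulated value) is an assumption made only for the example.

```python
def countdown_policy(acc, goal):
    """Action selection following the learned rules quoted above:
    add while acc < goal, sub while acc > goal, null once they match."""
    if acc < goal:
        return "add"
    if acc > goal:
        return "sub"
    return "null"


def run_episode(current_values, goal, acc=0):
    """Apply the policy to a sequence of values until acc == goal.
    The +/- update is an assumed stand-in for the environment dynamics."""
    trace = []
    for value in current_values:
        action = countdown_policy(acc, goal)
        if action == "add":
            acc += value
        elif action == "sub":
            acc -= value
        trace.append((value, action, acc))
        if acc == goal:
            break
    return trace


print(run_episode(current_values=[3, 5, 2, 4], goal=6))
```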
### Generalization performance (Q2)
From Table 3, we observe that DERRL learns general rules, generalizing to environment modifications, and outperforms baselines in generalization tasks. Unlike symbolic planning, DERRL's performance remains unaffected by noisy training, such as stochastic Countdown. Secondly, although GCNs perform well in relational learning, their generalization is marginally better than MLP, potentially failing to capture task agnostic relational patterns. However, GCN surpasses MLP and NLRL in the countdown game. Lastly, DERRL's training performance
is comparable to MLP, but it outperforms MLP in generalization tasks, where MLP is similar to the Random baseline. Additionally, DERRL's convergence speed is on par with MLP, as shown in Figure 2.
### Comparison with NLRL (Q3)
**Computational Complexity.** NLRL assigns trainable weights to all possible rules with a body size of 2, while DERRL allocates weights to each atom in the rule body. Given \(m\) atoms from \(\mathrm{P}_{\mathrm{ext}}\) and \(V\), NLRL has C(m,2) learnable weights, while DERRL has 2m. Therefore, the training reduces from learning the best set of rules (in NLRL) to learning the best membership of the rules (in DERRL), leading to a lower computation time in DERRL. The computation time per episode is reduced by a factor of \(\approx 10\) (Figure 3).
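As a quick sanity check on this argument, the snippet below tabulates the two parameter counts for a few example sizes of the atom set; the values of m are arbitrary illustrations, not numbers taken from the experiments.

```python
from math import comb

for m in (10, 20, 50, 100):
    nlrl_weights = comb(m, 2)   # one weight per candidate 2-atom rule body
    derrl_weights = 2 * m       # one membership weight per atom, per body slot
    print(f"m = {m:>3}: NLRL = {nlrl_weights:>4}, DERRL = {derrl_weights:>3}, "
          f"ratio = {nlrl_weights / derrl_weights:.1f}")
```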
**Comparing Learned Rules.** NLRL learned rules in blocksworld are:
\[move(X,Y)\gets top(X),pred(X,Y);\quad move(X,Y)\gets top(X),goal\_on(X,Y)\]
\[pred(X,Y)\gets isFloor(Y),pred2(X);\quad pred2(X)\gets on(X,Y),on(Y,Z)\]
With invented predicates \(pred\) and \(pred2\), this plan differs from DERRL in that the second rule doesn't verify if both X and Y are movable, failing to solve configurations where block b is below block a. **Size of hypothesis space.** DERRL restricts the hypothesis space through the use of semantic constraints in the optimization problem, whereas the large hypothesis space in NLRL limits its convergence in the DoorKey and Traffic domains. The convergence in DERRL is slower without semantic constraints (see Appendix 9 for ablations on DERRL with and without semantic constraints). **Expressiveness.** NLRL can learn recursive rules by using templates as in meta-interpretive learning and predicate invention. While this is expressive, it can be hard to master. In contrast, DERRL learns non-recursive Datalog rules, as in Quinlan's FOIL [42], but combines them with constraints that can be recursive to rule out redundancies.
## 7 Conclusion
We proposed a neuro-symbolic approach to learn interpretable policies that are also generalizable. The representations that DERRL and RRL use are very similar to those used in the planning community. We also significantly improve upon the scalability of the existing state-of-the-art NLRL. Upgrading the approach to enable automatic learning of the required number of rules is a potential research direction. Also, as part of future work, it will be interesting to explore ways in which the proposed approach can be scaled to real-life applications that require processing raw sensory inputs.

Figure 2: [Best viewed in color] Comparison of training rewards at convergence for different baselines, plotted by averaging the rewards over 3 independent runs. [Left to right] Countdown Game, Blocks World, and Gridworld.
|
2307.09507 | Hooks & Bends in the Radial Acceleration Relation: Discriminatory Tests
for Dark Matter and MOND | The Radial Acceleration Relation (RAR) connects the total gravitational
acceleration of a galaxy at a given radius, $a_{\rm tot}(r)$, with that
accounted for by baryons at the same radius, $a_{\rm bar}(r)$. The shape and
tightness of the RAR for rotationally-supported galaxies have characteristics
in line with MOdified Newtonian Dynamics (MOND) and can also arise within the
Cosmological Constant + Cold Dark Matter ($\Lambda$CDM) paradigm. We use zoom
simulations of 20 galaxies with stellar masses of $M_{\star} \, \simeq
10^{7-11} \, M_{\odot}$ to study the RAR in the \texttt{FIRE-2} simulations. We
highlight the existence of simulated galaxies with non-monotonic RAR tracks
that ``hook'' down from the average relation. These hooks are challenging to
explain in Modified Inertia theories of MOND, but naturally arise in all of our
$\Lambda$CDM-simulated galaxies that are dark-matter dominated at small radii and have
feedback-induced cores in their dark matter haloes. We show, analytically and
numerically, that downward hooks are expected in such cored haloes because they
have non-monotonic acceleration profiles. We also extend the relation to
accelerations below those traced by disc galaxy rotation curves. In this
regime, our simulations exhibit ``bends'' off of the MOND-inspired
extrapolation of the RAR, which, at large radii, approach $a_{\rm tot} \,
\approx \, a_{\rm bar} \, /f_{\rm b}$, where $f_{\rm b}$ is the cosmic baryon
fraction. Future efforts to search for these hooks and bends in real galaxies
will provide interesting tests for MOND and $\Lambda$CDM. | Francisco J. Mercado, James S. Bullock, Jorge Moreno, Michael Boylan-Kolchin, Philip F. Hopkins, Andrew Wetzel, Claude-André Faucher-Giguère, Jenna Samuel | 2023-07-18T18:00:02Z | http://arxiv.org/abs/2307.09507v2 | # Hooks & Bends in the Radial Acceleration Relation: Tests for Dark Matter and Challenges for MOND
###### Abstract
The Radial Acceleration Relation (RAR) connects the total gravitational acceleration of a galaxy at a given radius, \(a_{\rm tot}(r)\), with that accounted for by baryons at the same radius, \(a_{\rm bar}(r)\). The shape and tightness of the RAR for rotationally-supported galaxies have characteristics in line with MOdified Newtonian Dynamics (MOND) and can also arise within the Cosmological Constant + Cold Dark Matter (\(\Lambda\)CDM) paradigm. We use zoom simulations of 20 galaxies with stellar masses of \(M_{\star}\simeq 10^{7-11}\)\(M_{\odot}\) to demonstrate that the observed average and scatter about the RAR is reproduced in FIRE-2 simulations. We highlight the existence of many observed galaxies with non-monotonic RAR tracks that "hook" down from the average relation. These hooks are challenging to explain in MOND, but we see them in all of our simulated galaxies that are dark-matter dominated and have feedback-induced cores in their dark matter haloes. We show analytically that downward hooks are expected in such cored haloes because they have non-monotonic acceleration profiles. We also make RAR predictions for the outer reaches of our simulated galactic haloes, extending the relation to accelerations below those traced by disc galaxy rotation curves. In this regime, our simulations predict "bends" off of the MOND-inspired extrapolation of the RAR, which, at large radii, approach \(a_{\rm tot}\approx a_{\rm bar}/f_{\rm b}\), where \(f_{\rm b}\) is the cosmic baryon fraction. Future efforts to search for these bends at low accelerations around real galaxies will provide tests for MOND and \(\Lambda\)CDM.
keywords: galaxies: formation - cosmology: theory
## 1 Introduction
The cosmological constant + cold dark matter (\(\Lambda\)CDM) model proposes the existence of non-luminous, collisionless (dark) matter that governs galactic dynamics and is essential for structure formation in the Universe (see Salucci, 2019, for a review). An alternative to \(\Lambda\)CDM for explaining the dynamics of galaxies is MOdified Newtonian Dynamics (MOND; Milgrom, 1983a,b,c), which replaces the need for dark matter with a modification to Newton's laws of motion at accelerations below a characteristic scale \(a_{0}\sim 10^{-10}\,{\rm m\,s^{-2}}\). Whilst we focus on MOND for the duration of this paper, we note that it is not the only attempt to describe the Universe without dark matter (see Shankaranarayanan & Johnson (2022) for a review of modified theories of gravity). Several empirical "mass-to-light" scaling relations have been introduced and discussed in the literature within the context of both \(\Lambda\)CDM and MOND (Faber & Jackson, 1976; Tully & Fisher, 1977; McGaugh et al., 2000; McGaugh, 2015; McGaugh et al., 2016). Of particular note is the Radial Acceleration Relation (RAR; McGaugh et al., 2016).
In the original RAR paper, McGaugh et al. (2016) showed that galaxies in the Spitzer Photometry and Accurate Rotation Curve (SPARC) database (Lelli et al., 2016) scatter tightly around a one-to-one relationship between the centripetal acceleration profile, \(a_{\rm tot}\) (inferred by rotation curves), and the Newtonian acceleration due to the baryonic matter alone, \(a_{\rm bar}\). These quantities can be expressed in terms of either the circular velocity at a given radius, \(v_{\rm rot}(r)\)1 and \(v_{\rm bar}(r)\), or the total mass within that radius, \(M_{\rm tot}(r)\) and \(M_{\rm bar}(r)\), as follows:
Footnote 1: Here \(v_{\rm rot}=\sqrt{GM/r}\) and is often inferred from galaxy rotation curves.
\[a_{\rm tot}(r)=\frac{v_{\rm rot}^{2}(r)}{r}=\frac{GM_{\rm tot}(r)}{r^{2}}, \tag{1}\]
\[a_{\rm bar}(r)=\frac{v_{\rm bar}^{2}(r)}{r}=\frac{GM_{\rm bar}(r)}{r^{2}}. \tag{2}\]
McGaugh et al. (2016) provide a fit to the empirical RAR with asymptotic behaviour that tracks the MONDian expectation:
\[a_{\rm tot}(r)=\frac{a_{\rm bar}(r)}{1-e^{-\sqrt{a_{\rm bar}(r)/a_{0}}}}\,, \tag{3}\]
where \(a_{0}=1.20\pm 0.26\times 10^{-10}\,{\rm m\ s^{-2}}\). For large accelerations, \(a_{\rm bar}\gg a_{0}\), we have \(a_{\rm tot}\propto a_{\rm bar}\). At small accelerations, \(a_{\rm bar}\ll a_{0}\), the relation approaches the low-acceleration MOND prediction \(a_{\rm tot}\propto a_{\rm bar}^{1/2}\). Given the characteristically low scatter of the observed RAR, it is important to consider the emergence of the RAR within the context of \(\Lambda\)CDM.
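For readers who want to evaluate the fit numerically, a minimal implementation of Equation 3 is sketched below in Python/NumPy; the two printed checks simply restate the asymptotic limits discussed above, and the specific test values are arbitrary.

```python
import numpy as np

A0 = 1.20e-10  # m s^-2, best-fitting acceleration scale quoted above

def rar_fit(a_bar, a0=A0):
    """Total acceleration predicted from the baryonic acceleration (Eq. 3)."""
    a_bar = np.asarray(a_bar, dtype=float)
    return a_bar / (1.0 - np.exp(-np.sqrt(a_bar / a0)))

# High-acceleration limit: a_tot -> a_bar.
print(rar_fit(1e-8) / 1e-8)
# Low-acceleration limit: a_tot -> sqrt(a_bar * a0).
print(rar_fit(1e-13) / np.sqrt(1e-13 * A0))
```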
Within a dark-matter framework like \(\Lambda\)CDM there is no single parameter that defines a characteristic acceleration scale, but such a scale emerges as a consequence of dissipative galaxy formation (Kaplinghat & Turner, 2002). The observed RAR provides an even stricter test: how does a tight RAR with the observed normalisation _and_ shape arise within the context of \(\Lambda\)CDM? Several studies employ galaxy formation simulations to show that an RAR does arise without fine tuning in \(\Lambda\)CDM (Keller & Wadsley, 2017; Ludlow et al., 2017; Tenneti et al., 2018; Garadil et al., 2018; Dutton et al., 2019). Though different simulation groups rely on different implementations of star formation and feedback, they all produce fairly tight RARs, albeit with slightly different median trends and scatter than that of the observed relation. Wheeler et al. (2019) argue that the RAR is an algebraic consequence of the Baryonic Tully Fisher Relation (BTFR). Grudic et al. (2020) provide a picture in which a characteristic acceleration scale emerges from stellar feedback physics such that \(a_{0}\) can be expressed using fundamental constants. More recently, Paranjape & Sheth (2021) present a framework in which the RAR is a result of the interplay between baryonic feedback physics and the distribution of dark matter in galaxies (for accelerations \(10^{-12}\,{\rm m\ s^{-2}}\)\(\lesssim a_{\rm bar}\lesssim 10^{-10}\,{\rm m\ s^{-2}}\)). Other studies use halo abundance matching to build semi-empirical models that result in relations with similar normalisation and scatter to the observed RAR (Di Cintio & Lelli, 2016; Desmond, 2017; Navarro et al., 2017; Li et al., 2022). Notably, Li et al. (2022) predict upward-bending "hook" features in the tracks of low mass galaxies that deviate from the observed RAR (McGaugh et al., 2016). This systematic increase in the total acceleration at the inner regions of such galaxies is a consequence of their cuspy inner dark matter density profiles. Furthermore, Li et al. (2022) show that these deviations from the observed RAR are amplified when considering adiabatic contraction of the NFW halo due to baryonic compression and conclude that other effects, such as stellar feedback, would need to be considered to make more accurate predictions.
In this work, we compare the RAR for 20 FIRE-2 zoom simulations, which do include stellar feedback within the context of \(\Lambda\)CDM, against the empirical RAR for real galaxies. §2 describes the RAR as an analytic scaling relation and suggests that non-monotonic "hooks" should arise naturally in a dark-matter framework. In §3 we present examples of such hooked RAR profiles in observed galaxies from the SPARC sample. §4 introduces our simulations and §5 demonstrates that these reproduce the observed RAR in aggregate, and also include instances with hook features - which appear in connection to cored dark matter density profiles in the inner regions of low mass galaxies. In §6, we use our simulations to make predictions for "bends" in the RAR profiles of galaxies that appear at very low accelerations well beyond the regions probed by galaxy rotation curves. These bends are a consequence of total baryonic mass profiles reaching baryonic closure at large radii. In §7, we provide a discussion of how our results can serve as a basis to test models using the RAR. Finally, §8 summarises our results.
## 2 Analytic expectations
First we provide an analytic framework to guide expectations. It will be useful to characterise the total and baryonic mass profiles as local power-laws with slopes \(p(r)\) that vary slowly with radius: as \(M_{\rm tot}\propto r^{p_{\rm tot}}\) and \(M_{\rm bar}\propto r^{p_{\rm bar}}\). Equations 1 and 2 then imply that
\[a_{\rm tot}\propto r^{p_{\rm tot}-2}\,,\quad a_{\rm bar}\propto r^{p_{\rm bar}-2}. \tag{4}\]
Note that for radii large enough to contain the total mass, \(p(r)\to 0\), yielding the expected Keplerian scaling \(a\propto 1/r^{2}\) as \(r\to\infty\). Equation 4 allows us to write the scaling behaviour of the RAR as
\[a_{\rm tot}(r)\quad\propto\quad a_{\rm bar}(r)^{m}\,\,\,;\,\,\,m\equiv\frac{p_ {\rm tot}-2}{p_{\rm bar}-2}. \tag{5}\]
For many familiar mass profiles, the acceleration is monotonic with radius and always largest at small radii (\(p_{\rm bar}<2\) and \(p_{\rm tot}<2\)) with \(p(r)\) decreasing as \(r\) increases; in such cases, the relationship between \(a_{\rm bar}(r)\) and \(a_{\rm tot}(r)\) will also be _monotonic_. Note, however, that if the value of \(m(r)\) ever changes sign as a function of radius, the relationship between \(a_{\rm bar}(r)\) and \(a_{\rm tot}(r)\) will not be monotonic. The MOND-inspired RAR parameterization provided by McGaugh et al. (2016) is explicitly monotonic (see our Equation 3) and has \(m=1\) at large accelerations, \(a_{\rm bar}\gg a_{0}\), and \(m=1/2\) at small accelerations, \(a_{\rm bar}\ll a_{0}\).
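This bookkeeping is easy to apply to tabulated profiles. The short helper below estimates the running slopes \(p(r)={\rm d}\ln M/{\rm d}\ln r\) by finite differences and evaluates the local RAR exponent \(m(r)\) of Equation 5; the finite-difference estimator is a generic numerical choice, not something prescribed in the text.

```python
import numpy as np

def log_slope(r, M):
    """Running logarithmic slope d ln M / d ln r of an enclosed-mass profile."""
    return np.gradient(np.log(M), np.log(r))

def rar_exponent(r, M_tot, M_bar):
    """Local exponent m(r) = (p_tot - 2) / (p_bar - 2) from Equation 5."""
    p_tot = log_slope(r, M_tot)
    p_bar = log_slope(r, M_bar)
    return (p_tot - 2.0) / (p_bar - 2.0)

# Wherever m(r) changes sign (p_tot crossing 2 while p_bar stays below 2),
# the relationship between a_bar and a_tot becomes non-monotonic, i.e. a hook.
```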
We can understand the asymptotic scaling of the RAR for _massive_ galaxies as follows. At small radii and large accelerations, such galaxies are typically baryon dominated (Tollerud et al., 2011; Cappellari et al., 2013; Lovell et al., 2018). In this case \(m=1\) occurs naturally because \(M_{\rm tot}\simeq M_{\rm bar}\), \(p_{\rm tot}\simeq p_{\rm bar}\), and \(m\simeq 1\). At large radii and low accelerations, the baryonic acceleration must track the Keplerian expectation with \(p_{\rm bar}\simeq 0\). If, as is usually observed, the total rotation curve is flat out to the galaxy's edge, \(M_{\rm tot}\propto r\) and \(p_{\rm tot}=1\). With \(p_{\rm bar}=0\) and \(p_{\rm tot}=1\) we have \(m=1/2\) at large \(r\) and small \(a\).2
Footnote 2: Whilst this asymptotic behaviour makes sense, it is important to recognise that the observed existence of flat rotation curves below \(a_{0}\) is a key motivation for MOND in the first place. In the context of \(\Lambda\)CDM, the question is whether the flattening occurs as observed. As discussed in the introduction and shown in §5, many \(\Lambda\)CDM simulations produce galaxies with acceleration profiles that track the observed RAR from high to low accelerations across the \(a_{\rm bar}\simeq a_{0}\) transition remarkably well.
Now consider galaxies that are dark-matter dominated in their centres, as is often the case for low-mass galaxies (Carignan & Freeman, 1988; Martinbeau et al., 1994; de Blok & McGaugh, 1997). In this limit \(M_{\rm bar}(r)\ll M_{\rm tot}(r)\simeq M_{\rm dm}(r)\), where \(M_{\rm dm}(r)\) is the dark matter mass distribution. If the dark matter obeys a density profile of the form \(\rho_{\rm dm}\propto r^{-n}\) at small radii, then in this limit \(p_{\rm tot}\simeq p_{\rm dm}\simeq 3-n\). For an NFW-like "cuspy" profile (Navarro et al., 1997) we have \(n\to 1\) at radii smaller than the halo scale radius, which gives \(p_{\rm tot}\to 2\) at small radii. Interestingly, baryons arrayed in an exponential disc have \(p_{\rm bar}\to 2\) for radii much smaller than the galaxy scale radius. However, since galaxy scale radii are typically smaller than dark matter scale radii, we expect \(p_{\rm bar}\lesssim p_{\rm tot}\simeq 2\) such that \(m\) is close to, but less than, unity at the centres of dark-matter-dominated galaxies: \(1/2<m\lesssim 1\). We refer the reader to Navarro et al. (2017) for a more
thorough discussion of how the RAR scaling arises within cuspy dark matter haloes.
Whilst the above discussion may help us to explain _on average_ why \(m\sim 1/2\) at large \(r\) (low \(a\)) and \(m\sim 1\) at small \(r\) (high \(a\)) may arise in a \(\Lambda\)CDM context, the argument is much less robust for dark-matter dominated galaxies than for baryon-dominated galaxies where \(m=1\) is achieved by definition. Specifically, if at any point along the acceleration profiles of a galaxy, the value of the quantity \(m=(p_{\rm tot}-2)/(p_{\rm bar}-2)\) in Equation 5 changes sign from positive to negative as we approach the inner galaxy, then a "hook" in the RAR would emerge. Given that we expect both \(p_{\rm bar}\approx 2\) and \(p_{\rm tot}\approx 2\) to be reasonable values at small radii in dark-matter-dominated galaxies, it would be surprising if cases _never_ occurred where one of the slopes had \(p\gtrsim 2\) and the other had \(p\lesssim 2\) such that hooks appeared. For example, if we have a dark-matter dominated galaxy where the inner dark matter profile was core-like, with \(\rho_{\rm dm}\propto r^{-n}\) and \(n<1\), then this will give \(p_{\rm tot}>2\) and provide conditions where a non-monotonic, downward hook is likely.
Figure 1 provides two schematic examples of how the density distributions (left panels) of baryons (cyan) and total matter (magenta) translate into acceleration profiles (middle panels) and RAR relations (right panels). The upper panels correspond to a "standard RAR" whilst the lower panels display a "downward hook". In both cases we assume the same large-\(r\) behaviour for the baryons: the density falls off quickly with \(r\), such that the baryonic acceleration is Keplerian with \(a_{\rm bar}\propto r^{-2}\). 3 We also assume that the total density profile produces a flat rotation curve at large radii, with \(\rho_{\rm tot}\propto r^{-2}\) and \(a_{\rm tot}\propto r^{-1}\). These assumptions produce the familiar low-acceleration behaviour in the RAR: \(a_{\rm tot}\propto a_{\rm bar}^{1/2}\).
Footnote 3: The precise slope of the baryonic density profile at large r does not matter as long as it is steep enough (steeper than \(r^{-5}\)) to contain the majority of the baryonic mass within a finite radius, which will drive baryonic acceleration towards the Keplerian behaviour beyond that point.
In the upper panels, labeled "baryon-dominated inner profile," we assume a total density profile dominated by baryons at small radii, with a cuspy inner slope \(\rho_{\rm tot}\simeq\rho_{\rm bar}\propto r^{-1.3}\). The specific value of the cusp slope is not important, only that it is steeper than \(r^{-1}\), which produces a monotonic acceleration profile. With this specific choice we have \(a_{\rm tot}\simeq a_{\rm bar}\propto r^{-0.3}\) and \(a_{\rm tot}\propto a_{\rm bar}\) at large \(a_{\rm bar}\). In the upper right panel we see that the two asymptotic slopes match the fiducial RAR values.
In the lower panels, labeled "DM-dominated cored profile," we assume that the total density profile is dominated by a cored dark matter halo with \(\rho_{\rm tot}\simeq\rho_{\rm dm}\propto r^{0}\). This implies that the total acceleration profile obeys \(a_{\rm tot}\propto r^{1}\) at small \(r\), and immediately demands that the \(a_{\rm tot}(r)\) profile is non-monotonic with radius. If we assume that the baryonic profile is monotonic, following the same scaling assumed in the upper panel, then this produces a non-monotonic RAR with \(a_{\rm tot}\propto a_{\rm bar}\)\({}^{-3.3}\) at large \(a_{\rm bar}\) (corresponding to small radii). The shape
Figure 1: _Schematic examples: a standard RAR and a downward hook._ The upper and lower panels show simple examples of how spherically-symmetric 3D density profiles (left) of total mass distributions (magenta) and baryonic mass distributions (cyan) map to radial acceleration profiles (middle) and ultimately to the RAR (right). Each panel assumes a log-log axis scaling. The dashed grey arrow in the middle and right panels is pointed in the direction of decreasing radius. In the upper panels, we assume a baryon-dominated, inner cuspy profile, and this naturally produces a standard-type RAR relation (upper right). In the lower set of figures, we assume a dark-matter-dominated inner mass profile, with a cored density distribution. This assumption gives rise to an RAR profile with a downward hook, of the type shown for real galaxies in Figure 2 and simulated galaxies in Figure 3. See the end of §2 for a more detailed description. _Takeaway:_ Reasonable assumptions for the density makeup of baryon-dominated galaxies allow us to understand the observed average scaling of the RAR in a natural way (top); these expectations break down for dark-matter dominated galaxies with cored inner dark matter density profiles, which should often deviate from the average scaling (bottom).
this makes in the lower right panel is what we refer to as a downward "hook."
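The downward hook of the schematic can be generated directly. The toy profiles below are illustrative assumptions chosen only to have the quoted inner and outer slopes (a cored, dark-matter-dominated total profile and a \(\rho\propto r^{-1.3}\) baryonic cusp); they are not fits taken from the paper, and the normalisations are arbitrary.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

r = np.logspace(-2, 2, 400)                 # kpc
M_tot = 1e10 * r**3 / (1.0 + r)**2          # cored halo: rho ~ r^0 inside, r^-2 outside
M_bar = 1e9 * (r / (r + 0.5))**1.7          # cuspy baryons: rho ~ r^-1.3 inside, finite total

a_tot = G * M_tot / r**2                    # Equation 1, in (km/s)^2 / kpc
a_bar = G * M_bar / r**2                    # Equation 2

# a_tot(r) rises inward, peaks, then falls towards the centre, so the
# (a_bar, a_tot) track is double-valued at high a_bar: a downward hook.
i_peak = np.argmax(a_tot)
inner = slice(0, 20)                        # innermost radii
m_inner = np.polyfit(np.log(a_bar[inner]), np.log(a_tot[inner]), 1)[0]
print(f"a_tot peaks at r ~ {r[i_peak]:.1f} kpc; "
      f"inner RAR slope m ~ {m_inner:.1f} (negative: downward hook)")
```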
## 3 RAR hooks in real galaxies
Visual inspection of the RAR tracks of the 175 SPARC galaxies4 reveals downward hooks in \(\sim 15\%\) (\(N=26\)) of the observed sample, all of which are outside of the baryon-dominated regime (\(M_{\bullet}\leq 10^{10}\)\(M_{\odot}\)). Table 1 lists these galaxies, along with their baryonic masses. Figure 2 shows RAR tracks for three examples (DDO154, NGC0055 and UGC06667), chosen to represent the diversity of hook behaviour in the SPARC database, and the aggregate RAR for all SPARC galaxies (grey 2D histogram). The black dashed line is the best fitting curve to the data introduced by McGaugh et al. (2016) (our Equation 3), and the grey dotted line represents a 1-to-1 relationship. Though we do not discuss them at length in this paper, we also note that a small fraction of SPARC galaxies (\(\sim 5\%\), N = 8) exhibit _upward_ hooks off of the median RAR scaling towards smaller radii and higher accelerations. Table 1 includes these instances. One example (UGC02259) is plotted as the set of green points in Figure 2.
Footnote 4: We visually classify hooks using the individual frames of the RAR video provided on the SPARC website: [http://astroweb.cwru.edu/SPARC/](http://astroweb.cwru.edu/SPARC/).
Different behaviours for \(a_{\rm tot}(r)\) and \(a_{\rm bar}(r)\) lead to different _kinds_ of hooks. For example, DDO154 and NGC0055 both have non-monotonic \(a_{\rm tot}(r)\) profiles accompanied by monotonic \(a_{\rm bar}(r)\) profiles. As can be seen from inspecting Equation 5, such a situation can naturally produce downward hooks, where the value of \(m=\left(p_{\rm tot}-2\right)/\left(p_{\rm bar}-2\right)\) changes from positive to negative. As the radius decreases, \(a_{\rm tot}(r)\) peaks (\(p_{\rm tot}=2\) and \(m=0\)) and then begins to decline (\(p_{\rm tot}>2\), \(m<0\)) whilst \(a_{\rm bar}(r)\) continues to rise (\(p_{\rm bar}<2\)).
A slightly more complicated example of a downward hook is UGC06667 (yellow squares). This galaxy has double-valued acceleration profiles for both \(a_{\rm tot}(r)\)_and_\(a_{\rm bar}(r)\), but the turnover points occur at different radii. Specifically, \(a_{\rm tot}(r)\) peaks and begins to decline at a larger radius than \(a_{\rm bar}(r)\). This means that as we track the RAR profile from the outer part of UGC06667 inward (from low \(a_{\rm bar}\) to high \(a_{\rm bar}\)), the slope will transition from positive, \(m>0\), to negative, \(m<0\), as we cross the radius where \(a_{\rm tot}(r)\) peaks (where \(p_{\rm tot}\) first becomes \(>2\)). As can be seen in Equation 5, the slope of the RAR will remain negative (\(m<0\)) whilst \(p_{\rm bar}<2\) and \(p_{\rm tot}>2\), until we pass the radius where \(a_{\rm bar}(r)\) also peaks (such that now \(p_{\rm bar}>2\)). At this point the hook bends back on itself with \(m>0\) again.
Finally, UGC02259 (green triangles) exhibits upward hooks. This galaxy has a monotonic \(a_{\rm tot}(r)\) that always increases as \(r\) decreases (\(p_{\rm tot}<2\)), but an \(a_{\rm bar}(r)\) profile that is non-monotonic, peaking at a finite radius where \(p_{\rm bar}=2\). As we follow \(a_{\rm bar}(r)\) from the outside in, it approaches its peak, such that \(p_{\rm bar}\to 2\) (from below) whilst \(p_{\rm tot}<2\), which drives \(m=\left(p_{\rm tot}-2\right)/\left(p_{\rm bar}-2\right)\gg 1\). Such a steep positive slope means that its RAR peels steeply upward away from the average relation before hooking back towards \(m<0\) as the \(a_{\rm bar}(r)\) profile begins to decline (\(p_{\rm bar}>2\)).
The four galaxies discussed here provide examples of more general cases where we expect hooks - non-monotonic \(a_{\rm bar}(r)\) and/or \(a_{\rm tot}(r)\) profiles - in RAR space5. First, if \(a_{\rm tot}(r)\) peaks and \(a_{\rm bar}(r)\) does not, then the RAR hook will be downward: \(m\sim 1/2\to m<0\) as \(a_{\rm bar}\) increases. If \(a_{\rm bar}(r)\) peaks and \(a_{\rm tot}(r)\) does not, then the hook will
| Galaxy Name | Baryonic Mass \([\log(M_{\rm bar}/M_{\odot})]\) | Galaxy Name | Baryonic Mass \([\log(M_{\rm bar}/M_{\odot})]\) |
|---|---|---|---|
| **Downward Hooks** | | | |
| D564-8 | 7.74 | UGC00731 | 9.41 |
| D631-7 | 8.68 | UGC04278 | 9.33 |
| DDO154 | 8.59 | UGC05414 | 9.12 |
| DDO168 | 8.81 | UGC05764 | 8.41 |
| ESO116+6012 | 9.55 | UGC05986 | 9.77 |
| F574-1 | 9.90 | UGC06667 | 9.25 |
| IC2574 | 9.28 | UGC06917 | 9.79 |
| KK98-251 | 8.29 | UGC07089 | 9.53 |
| NGC0055 | 9.64 | UGC07151 | 9.29 |
| NGC0100 | 9.63 | UGC07399 | 9.20 |
| NGC2403 | 9.97 | UGC07603 | 8.73 |
| NGC3109 | 8.86 | UGC08837 | 8.83 |
| NGC4010 | 10.09 | UGC4442 | 8.62 |
| **Upward Hooks** | | | |
| DDO170 | 9.10 | NGC4100 | 10.53 |
| NGC0024 | 9.45 | NGC5585 | 9.57 |
| NGC0247 | 9.78 | UGC02259 | 9.18 |
| NGC3877 | 10.58 | UGC04325 | 9.28 |
Table 1: SPARC Galaxies that we visually identify as having non-monotonic downward hooks in RAR space (top group) and upward hooks in RAR space (bottom group). Examples of these categories are shown as the coloured points in Figure 2. Columns 1 & 3: galaxy names. Columns 2 & 4: SPARC-quoted baryonic mass.
Figure 2: _The observed radial acceleration relation._\(a_{\rm tot}\) versus \(a_{\rm bar}\) for all 175 SPARC galaxies is illustrated by the grey 2D histogram. The black dashed line is the fit to the data introduced by McGaugh et al. (2016). The grey dotted line represents a 1-to-1 relationship. The red, orange, and yellow sets of points are examples of downward βhooksβ in the RAR tracks of specific SPARC galaxies; the green points show an example of an upward hook. _Takeaway_: Not all galaxies demonstrate monotonic relationships between \(a_{\rm tot}\) and \(a_{\rm bar}\), as would be expected in MOND. Most of the non-monotonic tracks we find are downward hooks (see Table 1).
be upward: \(m\sim 1/2\to m\gg 1\) as \(a_{\rm bar}\) increases. If they both peak, we will have downward hooks if \(a_{\rm tot}(r)\) peaks at a larger radius than \(a_{\rm bar}(r)\). Conversely, we will have upward hooks if \(a_{\rm bar}(r)\) peaks at a larger radius than \(a_{\rm tot}(r)\).
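The classification above can be stated operationally. The helper below is a simplified heuristic sketch, not code from the paper: it labels a track from where the two acceleration profiles peak, assuming the radii are sorted in increasing order and that each profile declines at large radius. The toy arrays in the usage line are arbitrary illustrations.

```python
import numpy as np

def classify_rar_track(r, a_tot, a_bar):
    """Classify a track from the peak radii of a_tot(r) and a_bar(r),
    following the cases enumerated above. Arrays are ordered by increasing r."""
    tot_peaked = np.argmax(a_tot) > 0          # interior peak, not at the innermost radius
    bar_peaked = np.argmax(a_bar) > 0
    if not tot_peaked and not bar_peaked:
        return "monotonic"
    if tot_peaked and not bar_peaked:
        return "downward hook"
    if bar_peaked and not tot_peaked:
        return "upward hook"
    # Both peak: the profile that peaks at the larger radius sets the hook direction.
    return "downward hook" if r[np.argmax(a_tot)] > r[np.argmax(a_bar)] else "upward hook"

r = np.linspace(0.1, 20.0, 200)
print(classify_rar_track(r, a_tot=r * np.exp(-r / 5.0), a_bar=1.0 / (r + 0.5)))
```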
Note that the list of "hook" RAR galaxies in Table 1 includes systems that are unambiguously non-monotonic and leaves out galaxies that have tracks with more ambiguous shapes. In this sense our quoted fractions of SPARC galaxies that appear as downward (\(\sim 15\%\)) and upward (\(\sim 5\%\)) hooks are conservative estimates. Of course, there are uncertainties on these measurements, which rely heavily on stellar mass estimates and non-trivial rotation velocity determinations. More work will be needed to determine whether these identified hooks are robust to all relevant uncertainties. Nevertheless, it is important to point these instances out for follow-up work that probes the innermost regions of these galaxies.
If robust to observational uncertainties, the existence of hook features in the RAR tracks of observed galaxies presents a significant challenge to MOND, since it would mean that \(a_{\rm tot}\) and \(a_{\rm bar}\) do not always follow a monotonic relation. We note that recent studies have identified a number of SPARC galaxy RAR tracks (many of the same galaxies we have listed in Table 1) that deviate significantly from MOND predictions (Frandsen and Petersen, 2018; Petersen and Frandsen, 2020; Eriksen et al., 2021).
## 4 Simulations
We employ cosmological zoom simulations run with the multi-method gravity plus hydrodynamics code GIZMO (Hopkins, 2015) from the Feedback In Realistic Environments project 6. Our simulations are initialised following the method described in Onorbe et al. (2014) and run using the FIRE-2 feedback implementation (Hopkins et al., 2018), utilising the mesh-free Lagrangian Godunov (MFM) method. The MFM approach provides adaptive spatial resolution and maintains conservation of mass, energy, and momentum. The FIRE-2 model includes gas heating and cooling with a temperature range of \(\rm T=10-10^{10}\,K\). Gas cooling is due to molecular transitions and metal-line fine structure transitions at low temperatures whilst cooling at temperatures of \(\geq 10^{4}\,K\) is due to primordial and metal line cooling and free-free emission. The simulations include a uniform cosmic ionising background (Faucher-Giguere et al., 2009) and multiple channels of stellar feedback. The stellar feedback model includes Type II and Type Ia supernovae, stellar winds from OB stars and AGB mass loss, and radiative feedback (photoionisation, photoelectric heating, and radiation pressure). Relevant inputs are taken from stellar evolution models (Leitherer et al., 1999, STARBURST99). The simulations generate and track eleven separate chemical species (H, He, C, N, O, Ne, Mg, Si, S, Ca, and Fe) for both gas and stars. Star formation occurs for self-shielding, molecular gas that is above a threshold density of \(n_{\rm crit}\geq 1000\) cm\({}^{-3}\), self-gravitating, and Jeans unstable. After a star particle is formed, it is treated as a single stellar population with a Kroupa IMF (Kroupa, 2002) with mass and metallicity inherited from its progenitor gas particle.
Footnote 6: [https://fire.northwestern.edu/](https://fire.northwestern.edu/)
In this work, we define dark matter haloes to be spherical systems with virial radii, \(r_{\rm vir}\), inside which the average density is equal to \(\Delta_{\rm vir}(z)\rho_{\rm crit}(z)\). Here, the critical density, \(\rho_{\rm crit}\), is defined to be equal to \(3H^{2}(z)/8\pi G\) and \(\Delta_{\rm vir}(z)\) is the redshift-evolving virial overdensity defined by Bryan and Norman (1998). The dark matter halo virial mass, \(M_{\rm vir}\), is then defined as the dark matter mass within \(r_{\rm vir}\). Finally, we take the stellar mass (\(M_{\bullet}\)) and the baryonic mass (\(M_{\rm bar}\)) to be the sum of the stellar mass and baryonic mass within 10 percent of \(r_{\rm vir}\) respectively.
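As an illustration of this definition, the snippet below inverts it to obtain \(r_{\rm vir}\) from \(M_{\rm vir}\) at \(z=0\). The Hubble constant and the overdensity value used here are placeholder numbers chosen for the example, not parameters taken from the simulations.

```python
import numpy as np

G = 4.30091e-6                 # kpc (km/s)^2 / Msun
H0 = 70.0 / 1000.0             # km/s per kpc (i.e. 70 km/s/Mpc); placeholder value
rho_crit = 3.0 * H0**2 / (8.0 * np.pi * G)   # critical density in Msun / kpc^3

def virial_radius(M_vir, delta_vir=100.0):
    """Radius [kpc] enclosing a mean density of delta_vir * rho_crit."""
    return (3.0 * M_vir / (4.0 * np.pi * delta_vir * rho_crit)) ** (1.0 / 3.0)

print(f"r_vir ~ {virial_radius(1e12):.0f} kpc for a 1e12 Msun halo")
```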
Our analysis includes 20 simulated galaxies spanning a stellar mass range of \(M_{\bullet}\sim 10^{7-11}\)\(M_{\odot}\) and a halo virial mass range of \(M_{\rm vir}\sim 10^{10-12}\)\(M_{\odot}\) at \(z=0\). Six galaxies (m12*) are isolated MW-mass analogs and are part of the Latte suite (Wetzel et al., 2016; Garrison-Kimmel et al., 2017; Hopkins, 2017; Garrison-Kimmel et al., 2019; Samuel et al., 2020). Another six (Romeo and Juliet, Thelma and Louise, Romulus and Remus) are pairs from 3 simulations run as part of the ELVIS on FIRE project (Garrison-Kimmel et al., 2019, 2019). These galaxies are set in environments with configurations similar to the Local Group (LG) (just as in Garrison-Kimmel et al., 2014). Namely, each simulation contains MW and M31 analogues with similar relative separations and velocities to the real MW-M31 pair. The other eight galaxies in our sample (m11* & m10*) are isolated and less massive with stellar masses \(M_{\bullet}\simeq 10^{7.5-9.6}\)\(M_{\odot}\) and virial masses of \(M_{\rm vir}\simeq 10^{10.3-11.4}\)\(M_{\odot}\)(see; El-Badry et al., 2018; Graus et al., 2019). Table 2 indicates properties of our simulated galaxies and relevant references. For the public data release and more information on the core suite of FIRE-2 simulations (m11* & m12*s), please see Wetzel et al. (2023). Finally, we emphasize in the next section, that all of the m11* and m10* exhibit hook features in RAR space
| Simulation Name | Baryonic Mass \([\log(M_{\rm bar}/M_{\odot})]\) | Virial Mass \([\log(M_{\rm vir}/M_{\odot})]\) | Virial Radius [kpc] |
|---|---|---|---|
| **Isolated m12*s** | | | |
| m120\({}^{\rm(A)}\) | 11.07 | 12.04 | 335 |
| m126\({}^{\rm(A)}\) | 10.93 | 12.03 | 328 |
| m127\({}^{\rm(B)}\) | 11.10 | 12.10 | 355 |
| m121\({}^{\rm(C)}\) | 10.97 | 11.96 | 314 |
| m128\({}^{\rm(D)}\) | 11.19 | 12.06 | 342 |
| m129\({}^{\rm(E)}\) | 10.85 | 11.92 | 301 |
| **ELVIS Pairs** | | | |
| Romeo\({}^{\rm(A)}\) | 11.02 | 12.01 | 317 |
| Juliet\({}^{\rm(A)}\) | 10.81 | 11.93 | 302 |
| Thelma\({}^{\rm(A)}\) | 11.07 | 12.03 | 332 |
| Louise\({}^{\rm(A)}\) | 10.69 | 11.93 | 310 |
| Romulus\({}^{\rm(F)}\) | 11.19 | 12.18 | 375 |
| Remus\({}^{\rm(F)}\) | 10.87 | 11.99 | 320 |
| **m11*s** | | | |
| m114\({}^{\rm(G)}\) | 9.81 | 11.42 | 204 |
| m11e\({}^{\rm(G)}\) | 9.47 | 11.15 | 166 |
| m11h\({}^{\rm(G)}\) | 9.89 | 11.24 | 177 |
| m11i\({}^{\rm(G)}\) | 9.37 | 10.83 | 128 |
| **m10*s** | | | |
| m10xb\({}^{\rm(H)}\) | 8.46 | 10.35 | 66 |
| m10xc\({}^{\rm(H)}\) | 8.75 | 10.50 | 74 |
| m10xc\({}^{\rm(H)}\) | 8.19 | 10.59 | 79 |
| m10xc\({}^{\rm(H)}\) | 8.97 | 10.66 | 83 |

Table 2: Columns from left to right: (1) Simulation names. The superscript letter corresponds to the reference papers for each simulation. (2) Total baryonic mass within ten percent of the virial radius. (3) Halo virial mass. (4) Halo virial radius. Reference papers for simulations - A: Garrison-Kimmel et al. (2019); B: Garrison-Kimmel et al. (2017); C: Wetzel et al. (2016); D: Hopkins et al. (2018); E: Samuel et al. (2020); F: Garrison-Kimmel et al. (2019); G: El-Badry et al. (2018); H: Graus et al. (2019). Note that all of our simulations with baryonic masses less than \(10^{10}\)\(M_{\odot}\) (the m11*s and m10*s) have core-like inner dark matter profiles and appear as downward hooks in RAR space.
Figure 3: _The simulated & observed radial acceleration relation. Top Panel:_\(a_{\rm tot}\) versus \(a_{\rm bar}\) for our simulated galaxy sample (circles) colour-coded by the radius, in units of \(r_{\rm vir}\), at which the measurement was performed. The SPARC data used in McGaugh et al. (2016) are illustrated by the grey 2D histogram in the background of this panel. The grey dotted line represents a 1-to-1 relationship whilst the black line is the fit to the SPARC data introduced by McGaugh et al. (2016). The dashed portion of the black line represents the same fit extrapolated down to accelerations not probed by the SPARC data. _Inset:_ A histogram of the residuals about the black line for the observed and simulated data in grey and red, respectively. _Bottom Panel:_ The residuals relative to the McGaugh et al. (2016) fit (black line) as a function of \(a_{\rm bar}\) for the simulated and observed data. _Takeaway:_ As ensembles, the simulations and observations show strikingly similar RARs, both in normalisation and in scatter. In addition, several simulated tracks show downward "hooks," reminiscent of the downward hooks highlighted in Figure 2.
similar to those we discussed in §2. As discussed in Lazar et al. (2020), each of these "hook" galaxies also has a dark matter profile that is core-like at small radii, with \(\rho_{\rm dm}\propto r^{-n}\), \(n<1\).
## 5 The Simulated RAR and Hooks at Small Radii
Figure 3 presents a comparison between the simulated and the _observed_ RAR. The top panel shows the total centripetal acceleration, \(a_{\rm tot}(r)\), as a function of the baryonic centripetal acceleration, \(a_{\rm bar}(r)\), for our simulated sample (circles), colour-coded by the radius of measurement, \(r\), in units of \(r_{\rm vir}\). For each galaxy we calculated pairs of acceleration values at radii spanning \(0.01\,r_{\rm vir}\leq r\leq 0.1\,r_{\rm vir}\). We make this choice in order to provide a reasonable comparison to the radial rotation curve ranges in the SPARC data reported by McGaugh et al. (2016). Note that we compute \(a_{\rm tot}(r)\) and \(a_{\rm bar}(r)\) directly from the simulations using \(M_{\rm tot}(r)\) and \(M_{\rm bar}(r)\), respectively. On the other hand, the same quantities for the SPARC sample (illustrated by the grey 2D histogram in the background of this panel) are inferred by modeling observed galaxy rotation curves and surface brightness profiles. The grey dotted line shows a 1-to-1 relationship whilst the solid black curve is the MONDian fit (Equation 3). The black dashed portion of the curve represents the same fit extrapolated down to accelerations not probed by the SPARC data. The inset shows a histogram of the residuals about the black curve for the observed and simulated data in grey and red, respectively. Finally, in the bottom panel, we plot the residuals relative to the McGaugh et al. (2016) fit (black line) as a function of \(a_{\rm bar}\) for the simulated and observed data. It is clear that an RAR arises from the simulations that is similar in
Figure 4: _Understanding the hooks._ The RAR (_top left_), the total radial centripetal acceleration profiles (_top right_), the total radial density profiles (_bottom left_), and the baryonic radial centripetal acceleration (_bottom right_) of a subset of our simulated galaxies as well as three individual galaxies from the SPARC data: DDO154 (crimson circles), NGC0055 (orange upside-down triangles), and UGC03546 (magenta plus signs). The lines representing the simulated profiles are colour-coded by the log slope of the dark matter profile, \(\alpha_{\rm dm}={\rm d}\ln\rho_{\rm dm}/{\rm d}\ln r\), measured between 0.5 and 1 kpc, as shown by the colour bar on the right. _Takeaway:_ Simulated galaxies with cored central dark matter density profiles also exhibit double-valued total, and sometimes baryonic, radial acceleration profiles and appear as downward hooks in the RAR. Note that DDO154 and NGC0055 are just 2 of 26 galaxies that we visually determine to exhibit downward RAR hooks out of the 175 galaxies in the SPARC database (see §3).
normalisation and scatter to the observed relation. This is in agreement with past work that shows that an RAR can arise as a natural consequence of the \(\Lambda\)CDM cosmological model (Desmond, 2017; Ludlow et al., 2017; Navarro et al., 2017; Dutton et al., 2019; Wheeler et al., 2019; Grudic et al., 2020; Paranjape and Sheth, 2021).
We now draw attention to the hook features in the simulated data near \(a_{\rm bar}~{}\sim~{}10^{-11}\) m s\({}^{-2}\). These features appear well below the characteristic acceleration scale, \(a_{0}\), where \(a_{\rm tot}\) should be proportional to \(a_{\rm bar}^{m}\) with \(m\simeq 1/2\) according to MOND. These downward hooks, as predicted in §2, are manifestly different than the MONDian prediction, and therefore represent an important way to test simulation results like ours against that framework. Note that we see no upward hooks amongst the 20 simulated galaxies in our sample. If we take the \(\sim 5\%\) of SPARC galaxies with such hooks as an expectation (see §3), then we might have expected one in our sample, which is not grossly inconsistent with the zero we see.
Figure 4 explores the origin of the hook features. We plot the RAR (top left), the total radial centripetal acceleration profiles (top right), the total density profiles (bottom left), and the baryonic radial acceleration profiles (bottom right) of a subset of our simulated galaxies (selected to represent the range of simulated galaxy profiles), as well as three individual galaxies from the SPARC data. Namely, we focus on DDO154 (crimson circles), NGC0055 (orange upside-down triangles), and UGC03546 (magenta plus signs). The lines represent the simulated profiles and are colour-coded according to the log slope of the dark matter profile between 0.5 and 1 kpc. Yellow lines correspond to cuspy profiles and purple lines are more core-like. Note that both DDO154 and NGC0055 exhibit clear hook features in RAR space and have masses and acceleration profiles (total and baryonic) similar to simulated galaxies with cored inner central dark matter density profiles. On the other hand, UGC03546 is a more massive galaxy with a monotonic track in RAR space. Its properties are similar to those of simulated massive galaxies with cuspy central dark matter density profiles. We conclude that simulated galaxies with cored central dark matter density profiles exhibit double-valued total, and sometimes baryonic, radial acceleration profiles and appear as hooks in the RAR. This is consistent with the analytic expectations discussed in §2. Ultimately, the predicted hooks in the RAR are a consequence of stellar feedback that redistributes dark matter within the centre-most regions of low-mass galaxies (see Ogiya and Mori, 2011; Pontzen and Governato, 2012; Di Cintio et al., 2014; Onorbe et al., 2015; Chan et al., 2015; Lazar et al., 2020, and references therein).
## 6 Bends at low acceleration and large radii
We now extend our analysis out to very large galactocentric radii in order to probe the lowest acceleration scales (\(a_{\rm bar}~{}\lesssim~{}10^{-12}\) m s\({}^{-2}\)). Figure 5 shows the RAR for our simulated sample (circles) colour-coded by the radius, in units of \(r_{\rm vir}\), at which the measurement was performed. This time, we provide the accelerations for each galaxy out to five times the virial radius (\(5\,r_{\rm vir}\)); the points making up each galaxy track are colour coded by \(r/r_{\rm vir}\) as indicated by the colour bar. Note that the baryonic mass here includes stars and _all_ gas. This is important because the baryonic mass (and therefore acceleration) at large radii is dominated by diffuse circumgalactic gas (e.g. Li et al., 2018; Hafen et al., 2019). Halo gas is not as relevant at the smaller radii traced by galaxy rotation curves such as those in the SPARC sample.
The two dotted grey lines represent a 1-to-1 relationship (labeled as "1:1") and the line that tracks a 1-to-1 relation with a normalisation set by the cosmic baryon fraction, \(a_{\rm tot}\propto a_{\rm bar}/f_{\rm b}\). The dashed black
Figure 5: _The RAR out to large radii._ The RAR for our simulated sample (circles) colour-coded by the radius, in units of \(r_{\rm vir}\), at which the measurement was performed. The accelerations for each galaxy are provided out to five times the virial radius (\(5\,r_{\rm vir}\)). The two dotted grey lines represent a 1-to-1 relationship (labeled as "1:1") and a line that tracks the cosmic baryon fraction (labeled "\(f_{\rm b}=\Omega_{\rm b}/\Omega_{\rm m}\)") as \(a_{\rm tot}=a_{\rm bar}/f_{\rm b}\) with \(f_{\rm b}=0.165\). The dashed black line represents the relation provided by McGaugh et al. (2016). _Takeaway:_ The simulated galaxy tracks lie very close to the fit to the SPARC data at accelerations \(a_{\rm bar}~{}\gtrsim~{}10^{-12}\) m s\({}^{-2}\) but bend off at lower accelerations as a result of cosmological homogeneity and the necessity of baryonic closure at large radii.
Figure 6: _The ratio of \(a_{\rm tot}\) to \(a_{\rm bar}\) versus radius._ The ratio of the total radial acceleration profile to the baryonic radial acceleration profile as a function of radius, normalized to the virial radius (\(r_{\rm vir}\)) for the simulated galaxies in our sample colour-coded by the virial mass of each galaxy, \(M_{\rm vir}\). The position on the y-axis where the ratio equals unity and the inverse of the cosmic baryon fraction are represented by horizontal, dotted grey lines. _Takeaway:_ Regardless of stellar mass, all galaxies have total to baryonic acceleration ratios that asymptotically approach the inverse baryon fraction at large radius. Galaxies with lower stellar masses have tracks that become baryon deficient at intermediate radii but eventually bend back towards the limit set by cosmology.
line represents the relation fitted to the SPARC data by McGaugh et al. (2016), extrapolated down to low accelerations. This figure, similar to Figure 3, shows that the simulated galaxies in our sample follow the fit to the observed RAR fairly well at acceleration scales probed by the SPARC data (\(a_{\rm bar}\ \gtrsim\ 10^{-12}\,{\rm m\,s^{-2}}\)). However, at lower accelerations, the simulated galaxies bend off of the extrapolated analytic relation and eventually approach the dotted line set by the cosmic baryon fraction. These "bends" are driven by the fact that, at large radii, the fraction of mass in baryons begins to increase towards the cosmic baryon fraction \(f_{\rm b}\) set by cosmology. By inspecting equations 1 and 2, we eventually reach the limit where \(M_{\rm bar}=f_{\rm b}\,M_{\rm tot}\), which implies \(a_{\rm tot}\)= \(a_{\rm bar}/f_{\rm b}\). Searching for bends in the RAR traced to very large radii around galaxies will provide an interesting discriminatory test of dark matter and MOND.
In Figure 6, we attempt to better understand the bending behaviour by plotting the ratio of the total radial acceleration to the baryonic radial acceleration as a function of galactocentric radius, normalized by the virial radius. The curves are colour-coded by the virial mass. The horizontal, dotted grey lines mark the positions on the y-axis where the ratio equals unity (labeled as "1:1") and the inverse of the cosmic baryon fraction (\(f_{\rm b}^{-1}=\Omega_{\rm m}/\Omega_{\rm b}=6.06\)). Notice that all galaxies, regardless of mass, have acceleration profile ratios (or total mass to baryon mass ratios) that are near unity at small galactocentric radii (\(r\ll r_{\rm vir}\)) but approach the value set by the cosmic baryon fraction at very large galactocentric radii (\(r\gg r_{\rm vir}\)). More massive galaxies (yellow curves) reach baryonic closure by \(r\simeq r_{\rm vir}\), whilst their less-massive counterparts (purple curves) have acceleration ratios that stray further from the \(f_{\rm b}\) normalisation and only reach baryonic closure at very large radii. This behaviour is driven by the relative power of stellar feedback as a function of galaxy mass. The shallow potential wells of low mass galaxies make it possible for stellar feedback to blow baryons out beyond their virial radii. Additionally, the susceptibility of low mass galaxies to UV background radiation can also prevent the accretion of more baryons. As a result, the baryon fraction lies well below the cosmic value out to quite large radii (\(0.5\,r_{\rm vir}<r<3\,r_{\rm vir}\)) and is not recovered even at \(\sim 5\,r_{\rm vir}\) in some cases. On the other hand, the more massive MW-like galaxies have deep enough potential wells that feedback cannot deplete their baryon content as effectively. As a result, the curves of more massive galaxies reach the cosmic baryon fraction scaling at much smaller radii than their less-massive counterparts.
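A toy version of this behaviour is easy to write down: since \(a_{\rm tot}/a_{\rm bar}=M_{\rm tot}(r)/M_{\rm bar}(r)\) by Equations 1 and 2, any enclosed baryon fraction that starts near unity at small radii and relaxes to the cosmic value far outside the halo reproduces the trend in Figure 6. The interpolation used for the enclosed fraction below is an assumed toy form, not a profile measured from the simulations.

```python
import numpy as np

f_b = 0.165                                   # cosmic baryon fraction used in the text
x = np.logspace(-2, 1, 200)                   # r / r_vir

# Toy enclosed baryon fraction: ~1 near the centre, -> f_b far outside the halo.
f_enclosed = f_b + (1.0 - f_b) / (1.0 + (x / 0.05) ** 2)

ratio = 1.0 / f_enclosed                      # a_tot / a_bar = M_tot / M_bar
print(f"a_tot/a_bar ~ {ratio[0]:.2f} at r = 0.01 r_vir, "
      f"~ {ratio[-1]:.2f} at r = 10 r_vir (1/f_b = {1.0 / f_b:.2f})")
```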
## 7 Discussion and Implications
In this paper we focus on instances of non-monotonic relationships between \(a_{\rm bar}\) and \(a_{\rm tot}\) for real and simulated galaxies. Behaviour of this kind is not predicted by MOND and thus can be used as a tool to distinguish between what is allowed within a dark matter framework and in a MOND-inspired theory.
Figure 2 and Table 1 draw attention to galaxies in the SPARC sample (Lelli et al., 2016) that have distinctive downward "hook" features in their RAR tracks that trace small radii behaviour at accelerations below the MOND acceleration scale \(a_{0}\sim 10^{-10}\) m s\({}^{-2}\). Figure 3 shows that similar hook features exist in our \(\Lambda\)CDM-simulated galaxies, specifically low-mass galaxies with \(10^{7}\,M_{\odot}\lesssim M_{\star}\lesssim 10^{10}\,M_{\odot}\). Figures 1 and 4 illustrate how these hook features arise in galaxies with cored inner dark matter density profiles, which have double-valued total radial acceleration profiles. Cored dark matter profiles arise in our simulations as a result of star-formation feedback. Note, however, that in non-CDM models such as self-interacting dark matter (SIDM), cores can arise even without feedback affecting dark matter structure (e.g. Spergel & Steinhardt, 2000; Vogelsberger et al., 2012; Rocha et al., 2013; Kaplinghat et al., 2016; Tulin & Yu, 2018); and this could provide an alternative way to explain non-monotonic RAR tracks. The detailed shapes of observed tracks could even provide a way to distinguish between CDM and SIDM in cases where feedback is equally strong (Straight et al., in preparation).
We note that Li et al. (2022) show that upward hook-like features should arise in the RAR tracks of low-mass galaxies if they have cuspy inner profiles. In doing so they make a similar point to ours: the inner structure of galaxies and the processes that give rise to them could have important imprints on the RAR. We have provided examples of observed SPARC galaxies that exhibit both downward and upward-bending hooks, and these may provide important constraints on the processes at play in building galaxies. However, we stress the importance of confirming that these features are not artifacts resulting from observational error. We also advocate for a continued search for more examples of galaxies that appear as hooks in the RAR by probing the innermost regions of low-mass galaxies.
In Figure 5 we point to predicted galaxy tracks that bend off the MOND-inspired fit to the RAR at low accelerations and very large radii, well beyond the radii probed by galaxy rotation curves. This clear departure from what is expected by MOND serves as yet another tool to discriminate between the two models. Brouwer et al. (2021) attempt to extend the RAR to low accelerations by measuring the total acceleration, \(a_{\rm tot}\), using galaxy-galaxy lensing out to large radii (\(R\approx 100\,{\rm kpc}\)) and comparing it to the acceleration expected for the baryons, \(a_{\rm bar}\). If they include only the baryonic mass estimated from the HI gas and stellar mass, they find that the resulting RAR at large radii continues to follow the relation predicted by MOND (black points in their Figure 4). However, these data include no direct measurement of the hot, ionised gas. We expect that the baryon content at large radii of high-mass galaxies is dominated by hot (T \(>10^{6}\) K) gas. This means that their result represents a lower limit on \(a_{\rm bar}\) in the outer regions of their galaxies. Brouwer et al. (2021) also show that adding an extended ionised gaseous contribution to their \(a_{\rm bar}\) estimates results in a RAR that bends below the expected MONDian relation (orange points in their Figure 4) in a way that is quite similar to what we predict with our simulations. Detecting significant gaseous components for galaxies at large radii would move the observed RAR away from the MONDian prediction and likely strengthen the position of dark matter models.
## 8 Summary and Conclusions
In this paper we examine the radial acceleration relation (\(a_{\rm tot}\) vs. \(a_{\rm bar}\)) tracks of 20 FIRE-2 simulated galaxies and compare our results to SPARC-observed galaxies (Lelli et al., 2016). A summary of our results is as follows:
* After visual inspection of 175 individual RAR tracks from the SPARC galaxy sample (Lelli et al., 2016), we find that 15% of them exhibit non-monotonic downward hooks in their RAR tracks (Table 1 and Figure 2). Hooks of this kind are expected in dark-matter-dominated systems with inner cored density profiles (see §2), but are difficult to explain in a MONDian context. In addition, we find that 5% of galaxies in the SPARC sample have _upward_ hooks, with a steeper slope than the MONDian expectation (Milgrom, 1983a,b,c). Upward hooks of this kind can occur when the baryonic acceleration profile is non-monotonic and the total acceleration profile is not (see §3).
* When treated as an ensemble, our FIRE-2 galaxies closely follow the empirical RAR with similar normalisation and scatter
at acceleration scales probed by McGaugh et al. (2016), \(a_{\rm tot}\gtrsim 10^{-12}\,{\rm m\,s^{-2}}\). This supports the idea that the RAR can arise in \(\Lambda\)CDM based models of galaxy formation (Figure 3).
* Downward hook features appear in the RAR tracks of all eight of our simulated galaxies with stellar masses lower than \(10^{10}\)\(M_{\odot}\). Each has a cored inner dark matter density profile and the downward hooks are a consequence of them having non-monotonic total radial acceleration profiles (Figure 4).
* Extending the RAR to very large radii from galaxy centres, we predict relations that bend away from the low-acceleration extrapolation of the McGaugh et al. (2016) fit, which is equivalent to the scaling predicted by MOND (Figure 5). This behaviour in our simulations is driven by the fact that at large radii the total baryonic mass enclosed recovers the cosmic baryon fraction, \(f_{b}=\Omega_{\rm b}/\Omega_{\rm m}=0.165\), ultimately demanding \(a_{\rm tot}=a_{\rm bar}/f_{\rm b}\) at \(r\gg r_{\rm vir}\).
Downward hooks (at high acceleration, small radii) and bends (at low acceleration, large radii) in the RAR tracks of galaxies, as predicted in our \(\Lambda\)CDM simulations, are explicitly distinct from the expectations of MOND and can thus be used as tests to discriminate between the two. Whilst we have identified a number of galaxies in the SPARC database that do appear to display RAR profiles with downward hooks, more work will be required to determine if these features are robust to observational uncertainties. If so, they would seem to be quite challenging to MOND-inspired theories of cosmology.
In our simulations, downward hooks are prevalent in larger dwarf galaxies, \(M_{\star}\simeq 10^{7.5-9.6}\)\(M_{\odot}\), which are most prone to feedback-induced core formation. Such galaxies would be the best targets for followup studies looking for RAR hooks. A larger number of simulations will be required to determine if the \(\sim 5\%\) of SPARC galaxies with upward hooks can be explained naturally within our simulation framework.
The best places to look for the outer RAR bends are around high-mass galaxies. Whilst galaxies of all masses in our simulations predict such bends, only around the most massive galaxies do these bends become prominent within the virial radius. Hot gas from X-ray studies and Sunyaev-Zeldovich signals will be easiest to detect around such massive galaxies as well. The existence or absence of bends of this kind at large radii, as discussed by Brouwer et al. (2021), provide another avenue for testing competing models for the RAR that have been developed to match results at smaller radii.
## 9 Acknowledgments
FJM and JSB were supported by the National Science Foundation (NSF) grant AST-1910965 and NASA grant 80NSSC22K0827. JM is supported by the Hirsch Foundation. CAFG was supported by NSF through grants AST-2108230 and CAREER award AST-1652522; by NASA through grants 17-ATP17-0067 and 21-ATP21-0036; by STScI through grant HST-GO-16730.016-A; and by CXO through grant TM2-23005X. MBK acknowledges support from NSF CAREER award AST-1752913, NSF grants AST-1910346 and AST-2108962, NASA grant 80NSSC22K0827, and HST-AR-15809, HST-GO-15658, HST-GO-15901, HST-GO-15902, HST-AR-16159, HST-GO-16226, HST-GO-16686, HST-AR-17028, and HST-AR-17043 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. JS is supported by the NSF Astronomy and Astrophysics Postdoctoral Fellowship. We thank Federico Lelli for providing us with advice for working with SPARC data. The functionalities provided by the following python packages played a critical role in the analysis and visualization presented in this paper: matplotlib(Hunter, 2007), NumPy(Van Der Walt et al., 2011), SciPy(Virtanen et al., 2020) and iPython(Perez & Granger, 2007). Additionally, we used the WebPlotDigitizer tool (Rohatgi, 2022) to obtain the SPARC-observed data plotted in Figures 2 & 4. We honour the invaluable labour of the maintenance and clerical staff at our institutions, whose contributions make our scientific discoveries a reality. This research was conducted on Acjachemen and Tongva Indigenous land.
## 10 Data Availability
The data supporting the plots within this article are available on reasonable request to the corresponding author. A public version of the GIZMO code is available at [http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html](http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html). FIRE-2 simulations are publicly available at [http://flathub.flatironinstitute.org/fire](http://flathub.flatironinstitute.org/fire). Additional data including simulation snapshots, initial conditions, and derived data products are available at [https://fire.northwestern.edu/data/](https://fire.northwestern.edu/data/).
|
2304.10506 | Modeling CDRX and PDRX during hot forming of zircaloy-4 | A recently developed full field level-set model of continuous dynamic
recrystallization is applied to simulate zircaloy-4 recrystallization during
hot compression and subsequent heat treatment. The influence of strain rate,
final strain and initial microstructure is investigated by experimental and
simulation tools. The recrystallization heterogeneity is quantified, which
confirms that quenched microstructures display a higher extent of
heterogeneity. The simulation results satisfactorily replicate experimental
observations. The simulation framework is especially able to capture such
recrystallization heterogeneity induced by a different initial microstructure.
Finally, the role of intragranular dislocation density heterogeneities over the
preferential growth of recrystallized grains is pointed out thanks to
additional simulations with different numerical formulations. | Victor Grand, Baptiste Flipon, Alexis Gaillac, Marc Bernacki | 2023-03-24T23:21:59Z | http://arxiv.org/abs/2304.10506v1 | # Modeling CDRX and PDRX during hot forming of zircaloy-4
###### Abstract
A recently developed full field level-set model of continuous dynamic recrystallization is applied to simulate zircaloy-4 recrystallization during hot compression and subsequent heat treatment. The influence of strain rate, final strain and initial microstructure is investigated by experimental and simulation tools. The recrystallization heterogeneity is quantified, which confirms that quenched microstructures display a higher extent of heterogeneity. The simulation results satisfactorily replicate experimental observations. The simulation framework is especially able to capture such recrystallization heterogeneity induced by a different initial microstructure. Finally, the role of intragranular dislocation density heterogeneities over the preferential growth of recrystallized grains is pointed out thanks to additional simulations with different numerical formulations.
Continuous dynamic recrystallization, zirconium alloy, hot forming
## 1 Introduction
Nuclear energy constitutes an alternative to fossil fuels for fulfilling the increasing global energy demands. It presents a very low carbon footprint and its production can be controlled to match the needs. To ensure the highest quality and security requirements, the manufacturing processes of every nuclear component must be mastered. Fuel assemblies, mainly composed of zirconium alloy parts, are no exception to the rule. Thus, it is essential to continue improving the knowledge of microstructure evolution mechanisms taking place during hot forming of zirconium alloys. Recrystallization is responsible for the formation of new grains with a low dislocation density that can consume the deformed microstructure [1]. Previous works have concluded that under common hot forming conditions, zirconium alloys and in particular zircaloy-4 present typical features of continuous dynamic recrystallization (CDRX) [2, 3, 4, 5]. Chauvy _et al._ proved that low angle grain boundaries (LAGB) form during hot deformation and progressively transform to high angle grain boundaries (HAGB) [2]. The other studies confirmed that hot deformation of lamellar microstructures is characterized by the
formation of fine grains surrounded by large colonies that are only slightly deformed. While these studies focus upon the deformation of such complex lamellar microstructures, they do not address in detail the influence of thermomechanical conditions and initial microstructure on recrystallization.
The few existing simulation results regarding zircaloy-4 recrystallization were obtained applying mean field models. Doing so, Dunlop _et al._ were able to reproduce the static recrystallization kinetics for various heat treatment conditions. Gaudout _et al._, who applied the Gourdet-Montheillet CDRX model to zircaloy-4, predicted the recrystallized fraction for different hot forming conditions [4]. Since both strategies relied on a mean field model, they did not consider the presence of local microstructure heterogeneities which are very important for such materials.
The present article discusses in detail the influence of thermomechanical conditions and initial microstructure upon recrystallization mechanisms, such as CDRX and post-dynamic recrystallization (PDRX), by coupling experimental and numerical investigations. To do so, samples with different initial microstructures are hot compressed and microstructures are characterized by EBSD. Full field simulations are performed using a level-set model [6] embedded within a finite element framework (FE-LS) [7]. Experiment and simulation results are compared to evaluate the model abilities and limitations. Different underlying assumptions of the numerical model are assessed. Doing so, the impact of several microstructural features is highlighted.
## 2 Materials and methods
### Materials
Zircaloy-4 samples presenting three different typical microstructures are selected. One presents an equiaxed (**Eq**) grain topology, with an average equivalent circle diameter (\(\overline{ECD}\)) equal to 5.3 \(\mu m\). The others display a Widmanstätten microstructure inherited from the quench with a basket-weaved (**BW**) morphology for the second one and parallel plates (**PP**) for the last one.
The initial microstructures are characterized by EBSD. Orientation maps are provided in figure 1. The pole figures for **Eq** microstructures are plotted in figure 2. Since the number of grains is very low for both **BW** and **PP** microstructures, the corresponding pole figures are not displayed, as they are not representative of the sample global texture. Nevertheless, the orientation of the \(\alpha\) grains nucleating during quench being determined by the variant selection rule, the texture of the **BW** and **PP** samples can be defined as pseudo-isotropic. Figure 2 shows that the **Eq** sample is textured, with \(<\)c\(>\) axes being distributed within a radial plane.
### Methods
#### 2.2.1 Experimental methods
**Thermomechanical testing.** Hot compression tests, with or without subsequent heat treatment, are performed according to the conditions described in table 1. They are carried out using a hydraulic testing machine (_MTS Landmark 370-25_). A radiant furnace is used to heat the sample and dies. At the end of the experiment, the sample is quenched. The whole test is filmed in order to measure the delay between the end of the experiment and the effective quench. The average delay is equal to 1 \(s\). Samples and dies are coated with silicon nitride to limit friction. Before each test, the samples are held at temperature for 10 minutes to ensure temperature homogeneity.
The strain levels displayed in table 1, denoted \(\varepsilon_{local}\), are computed from simulations performed using _Forge NxT\({}^{\otimes}\)_ software. To do so, the friction coefficient is estimated by inverse analysis from the measured sample bulging. An average value for the friction coefficient is selected to compute \(\varepsilon_{local}\). The values differ slightly between equiaxed and lamellar microstructures due to initial sample geometry differences. As these differences are small, we consider that the results obtained with the different initial microstructures can reasonably be compared.
**Characterization methods.** After hot compression, the samples are characterized by electron back-scattered diffraction (EBSD) analysis. A _Carl Zeiss SUPRA 40_ field emission gun scanning electron microscope (FEG-SEM) equipped with a _Bruker Quantax_ system (EBSD \(e^{-}\)_Flash\({}^{HR}\)_) is used. Acceleration voltage is set to 20 \(kV\) and step size to 100 \(nm\). Each map is represented according to the sample reference frame, with \(Y\) being the compression direction and \(X\) the radial one. The standard color code used for attributing colors to orientations is obtained from the inverse pole figure color code in which the \(Y\) axis is projected onto the standard triangle. The standard triangle corresponds to the crystal axis system reduced thanks to symmetry operations. These two conventions are presented in figure 1(a).
Figure 1: EBSD orientation maps representative of the three initial microstructures. Color codes are attributed thanks to an IPF Y color code, as illustrated in subfigure (a). The last subfigure presents sample geometry and EBSD observation zone.
Figure 2: \(\{0002\}\) and \(\{10\overline{1}0\}\) pole figures from EBSD data.
Samples are cut such that the compression direction lies within the cutting plane. The observation zone is located at sample center. Its dimensions are: \(130\times 85\)\(\mu m^{2}\).
Post-processing of EBSD data is performed using the MTEX toolbox [8]. A half quadratic filter is applied before any post-processing operation to reduce the noise [9]. If data are used as input for simulations, a filling operation is applied. Missing orientation data at non-indexed pixels are interpolated using the half quadratic filter.
Isolated indexed pixels are filtered out. To do so, a minimal number of 10 pixels per grain is considered. It corresponds to areas that are 300 \(nm\) wide. Threshold angle values are set to \(3^{\circ}\) for low angle grain boundaries (\(\Delta\theta_{\textsc{LAGB}}=3^{\circ}\)) and \(15^{\circ}\) for high angle grain boundaries (\(\Delta\theta_{\textsc{HAGB}}=15^{\circ}\)). Geometrically necessary dislocation (GND) density is estimated using the method established by Pantleon [10]. A grain is defined as recrystallized if its average GND density is lower than \(10^{14}\)\(m^{-2}\). GND density is preferred over other grain properties related to grain internal disorientation (grain orientation spread, GOS, or grain average kernel average misorientation, GAKAM, for instance). It presents the advantage of being directly transposable to simulations.
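To make the classification rules above concrete, the sketch below (not the authors' MTEX pipeline, only an illustrative Python rendition of the stated thresholds) labels boundaries from their disorientation and computes an area-weighted recrystallized fraction from grain-average GND densities; the grain areas and densities in the usage lines are made-up numbers.

```python
import numpy as np

LAGB_MIN_DEG = 3.0      # disorientations below this are not counted as boundaries
HAGB_MIN_DEG = 15.0     # LAGB/HAGB threshold
RX_GND_MAX = 1.0e14     # m^-2: grains below this average GND density count as recrystallized

def boundary_type(disorientation_deg):
    """Return 'none', 'LAGB' or 'HAGB' for a boundary disorientation given in degrees."""
    if disorientation_deg < LAGB_MIN_DEG:
        return "none"
    return "LAGB" if disorientation_deg < HAGB_MIN_DEG else "HAGB"

def recrystallized_fraction(grain_areas, grain_mean_gnd):
    """Area fraction of grains whose average GND density is below RX_GND_MAX."""
    areas = np.asarray(grain_areas, dtype=float)
    rx = np.asarray(grain_mean_gnd, dtype=float) < RX_GND_MAX
    return areas[rx].sum() / areas.sum()

# toy usage with made-up grain areas (um^2) and grain-average GND densities (m^-2)
print(boundary_type(8.0))
print(recrystallized_fraction([4.0, 1.0, 2.0], [5.0e13, 3.0e14, 8.0e13]))
```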
#### 2.2.2 Numerical tools
A LS formulation, embedded within a FE framework is used to model CDRX and PDRX. LS functions (\(\phi(x,t)\)) track the position of interfaces with time [6, 7, 11]. They are initialized as signed Euclidean distance functions to grain boundaries (GB):
\[\begin{cases}\phi(x,t)=\pm\ d(x,\Gamma(t))\,,\ x\in\Omega,\\ \Gamma(t)=\{x\in\Omega,\ \phi(x,t)=0\},\end{cases} \tag{1}\]
where \(d\) is the signed Euclidean distance function and \(\Omega\) the simulation domain. By convention, \(\phi(x,t)\) is taken positive inside the grain and negative outside.
The movement of GB is predicted by solving the following transport equation:
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \(\varepsilon_{local}\) & \multicolumn{3}{c}{**0.01 s\({}^{-1}\)**} & \multicolumn{3}{c}{**0.1 s\({}^{-1}\)**} & \multicolumn{3}{c}{**1.0 s\({}^{-1}\)**} \\ \hline
**450\({}^{\circ}\)C** & 0.45 & 0.75 & 1.0 & 0.45 & 0.75 & 1.0 & 0.45 & 0.75 & 1.0 \\
**550\({}^{\circ}\)C** & 0.45 & 0.75 & 1.0 & 0.45 & 0.75 & 1.0 & 0.45 & 0.75 & 1.0 \\
**650\({}^{\circ}\)C** & 0.45 & 0.75 & 1.0 & 0.45 & 0.75 & 1.0 & 0.45 & 0.75 & 1.0 \\ \hline \hline \end{tabular}
* (a) Hot compression with equiaxed microstructures (**Eq**).
* (b) Hot compression with lamellar microstructures (**BW** and **PP**).
\begin{tabular}{l l c c c c c c c c c c c c} \hline \hline & Holding time (s) & \multicolumn{6}{c}{**0.1 s\({}^{-1}\)**} & \multicolumn{6}{c}{**1.0 s\({}^{-1}\)**} \\ \hline
**650\({}^{\circ}\)C** & **0.45** & - & - & - & - & - & - & 7 & 12 & 25 & 50 & 100 & 200 \\
 & **1.0** & 7 & 12 & 25 & 50 & 100 & 200 & 7 & 12 & 25 & 50 & 100 & 200 \\ \hline \hline \end{tabular}
* (c) Hot compression and holding at temperature with equiaxed microstructures (**Eq**).
\begin{tabular}{l l c c c c c c c c c c c c} \hline \hline & Holding time (s) & \multicolumn{6}{c}{**0.1 s\({}^{-1}\)**} & \multicolumn{6}{c}{**1.0 s\({}^{-1}\)**} \\ \hline
**650\({}^{\circ}\)C** & **0.55** & - & - & - & - & - & - & - & 12 & 25 & 50 & 100 & 200 \\
 & **1.0** & 7 & 12 & 25 & 50 & 100 & 200 & 7 & 12 & 25 & 50 & 100 & 200 \\ \hline \hline \end{tabular}
* (d) Hot compression and holding at temperature with lamellar microstructures (**BW** and **PP**).
\end{table}
Table 1: Description of the thermomechanical conditions of the hot compression campaigns.
\[\frac{\partial\phi}{\partial t}+\overrightarrow{v}\cdot\overrightarrow{\nabla \phi}=0, \tag{2}\]
with \(\overrightarrow{v}\) being the velocity of interfaces. After each resolution, level-set functions are reinitialized to ensure LS functions remain signed distance functions [12]. Voids and overlaps are corrected according to the method described by Merriman _et al._[6].
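As an illustration of the transport step in Eq. (2), the following minimal Python sketch advects a one-dimensional signed-distance function with a first-order upwind scheme on a regular grid; the grid, velocity field and time step are arbitrary choices, and in the actual FE-LS framework the equation is solved on an unstructured mesh with periodic reinitialization.

```python
# Advect a 1-D signed-distance function phi with dphi/dt + v * dphi/dx = 0.
import numpy as np

nx, dx, dt, nsteps = 200, 0.01, 0.004, 100
x = np.arange(nx) * dx
phi = 0.5 - np.abs(x - 1.0)          # "grain" between x = 0.5 and x = 1.5, phi > 0 inside
v = 0.5 * np.ones(nx)                # interface velocity field (uniform here)

for _ in range(nsteps):
    dphi_m = (phi - np.roll(phi, 1)) / dx      # backward difference
    dphi_p = (np.roll(phi, -1) - phi) / dx     # forward difference
    grad = np.where(v > 0, dphi_m, dphi_p)     # upwind choice
    phi = phi - dt * v * grad

# the zero iso-values (grain boundaries) have moved by roughly v * t = 0.2
crossings = x[np.where(np.diff(np.sign(phi)) != 0)[0]]
print(crossings)
```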
GB velocity at mesoscopic scale can be defined as the sum of surface and volume energy effects. The term related to capillarity is computed naturally thanks to distance function properties:
\[\overrightarrow{v}_{c}=-M\left(\overrightarrow{\nabla\gamma}\cdot \overrightarrow{n}-\gamma\kappa\right)\overrightarrow{n}, \tag{3}\]
with \(\overrightarrow{n}=-\overrightarrow{\nabla\phi}\), \(\kappa=-\Delta\phi\) and \(\gamma\) the GB energy. The way to extend \(\gamma\) in the vicinity of GB interfaces to evaluate \(\overrightarrow{\nabla\gamma}\) and to take into account this velocity through a convective-diffusive resolution of the transport LS equation (Eq.2) are detailed in [13, 14].
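The geometric quantities entering Eq. (3) follow directly from the LS field. The sketch below evaluates \(\kappa=-\Delta\phi\) and the gradient of \(\phi\) by finite differences for a circular grain and forms the capillarity speed \(M\gamma|\kappa|\), assuming a uniform \(\gamma\) so that the \(\overrightarrow{\nabla\gamma}\) term vanishes; \(M\), \(\gamma\), the grid and the grain radius are placeholder values, and the only purpose is to check that \(\kappa\approx 1/r\) near the interface.

```python
import numpy as np

M, gamma = 1.0e-13, 0.6                     # mobility and GB energy, placeholder values
n, h = 200, 1.0e-8                          # grid size and spacing (m)
y, x = np.mgrid[0:n, 0:n] * h
r0, c = 0.6e-6, 1.0e-6                      # grain radius and centre position
phi = r0 - np.hypot(x - c, y - c)           # signed distance, positive inside the grain

gy, gx = np.gradient(phi, h)                # normal direction is n = -grad(phi)
grad_norm = np.hypot(gx, gy)                # stays ~1 for a signed-distance function
lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
       np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / h**2
kappa = -lap                                # ~1/r on the circular interface
speed = M * gamma * np.abs(kappa)           # magnitude of the capillarity velocity

band = np.abs(phi) < h                      # nodes adjacent to the interface
print(kappa[band].mean(), 1.0 / r0, grad_norm[band].mean())
```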
The other term, related to differences of stored energy across the GB, requires a dedicated treatment. Initially, Bernacki et al. [7] proposed to weight the contribution of each grain's stored energy by a function of the distance to the GB. This is done using equation 4.
\[\overrightarrow{v}_{e}\left(x,t\right)=\sum_{i=1}^{N_{d}}\sum_{j=1}^{N_{d}}M_ {ij}\chi_{i}(x,t)f(\phi_{i}(x,t),l)\times\left[E_{j}(x,t)-E_{i}(x,t)\right] \overrightarrow{n}, \tag{4}\]
with \(\chi_{i}(x,t)\) the characteristic function of the \(i^{th}\) LS function, i.e. \(\chi_{i}(x,t)=1\) where the LS function is positive and \(\chi_{i}(x,t)=0\) everywhere else. \(f\) is a decreasing function being equal to 1 for \(\phi_{i}=0\) and 0 for \(\phi_{i}=l\). To limit the number of LS functions in order to save computation time and memory, graph coloring and recoloring techniques have been developed to describe numerous non neighboring grains in each LS function [15]. The set of all distance functions required to describe all the grains is now: \(\Phi=\left\{\phi_{i},\;i=1,\;...,\;N_{d}\right\}\), with \(N_{d}\) the number of distance functions, greatly smaller than \(N_{g}\) the number of grains. This implementation implies the use of \(N_{d}\) energy fields, that are initialized as: \(E_{i}(x,0)=\chi_{i}(x,0)\times E(x,0)=\tau\rho_{i}(x,0)\), with \(\tau=\frac{1}{2}\mu b^{2}\) the dislocation line energy, \(b\) the norm of the Burgers vector and \(\mu\) the shear modulus. These energy fields are tracked and updated to ensure they are consistent with the LS functions and the exact location of interfaces.
Some additional details about this method are provided in several articles [7, 16] and an extension is proposed in the following **Stored energy gradients** paragraph. The implementations dedicated to CDRX are described extensively in ref. [17].
**Low angle boundary characteristics.** CDRX is characterized by a high LAGB fraction. As a consequence, to model such a phenomenon, it is essential to correctly describe the formation and properties of such interfaces. Formation of new LAGB is addressed using equations from the model developed by Gourdet and Montheillet [18]. At each deformation increment \(d\varepsilon\), the quantity of LAGB formed is described according to equation 5.
\[dS^{+}=\frac{\alpha bK_{2}\rho d\varepsilon}{\eta\theta_{0}}, \tag{5}\]
where \(dS^{+}\) refers to the length (respectively surface) of boundaries formed for 2D simulations (respectively 3D) and \(\alpha=1-\exp\left(\frac{D}{D_{0}}\right)^{m}\) is a coefficient describing the fraction of dislocations recovered to form new subgrains. \(D\) is the ECD, \(D_{0}\) is a reference ECD and \(m\) is a fixed coefficient. \(K_{2}\) is the recovery parameter of the Yoshie-Laasraoui-Jonas equation [19], \(\rho\) the dislocation density, \(\eta\) is the number of sets of dislocations and \(\theta_{0}\) the disorientation of newly formed subgrains.
The pre-existing LAGB experience a progressive disorientation prescribed by:
\[d\theta=\frac{b}{2\eta}\left(1-\alpha\right)DK_{2}\rho d\varepsilon. \tag{6}\]
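A minimal numerical reading of Eqs. (5)-(6) is given below: for one strain increment it returns the newly created LAGB length and the disorientation increment of pre-existing LAGB. All parameter values (\(b\), \(K_{2}\), \(\eta\), \(\theta_{0}\), \(\alpha\), \(D\)) are placeholders rather than the calibrated set of appendix B, and \(\alpha\) is treated here as a plain number instead of being evaluated from \(D\).

```python
import numpy as np

b      = 3.2e-10             # Burgers vector norm (m), typical for alpha-Zr
K2     = 30.0                # recovery parameter (-), assumed
eta    = 2.0                 # number of sets of dislocations, assumed
theta0 = np.deg2rad(1.0)     # disorientation of newly formed subgrains, assumed
alpha  = 0.3                 # recovered-dislocation fraction (given by the expression above)
D      = 5.0e-6              # grain ECD (m), assumed

def lagb_increments(rho, deps):
    """Return (dS_plus, dtheta) for dislocation density rho [m^-2] and strain step deps."""
    dS_plus = alpha * b * K2 * rho * deps / (eta * theta0)   # Eq. (5)
    dtheta  = b * (1.0 - alpha) * D * K2 * rho * deps / (2.0 * eta)  # Eq. (6)
    return dS_plus, dtheta

print(lagb_increments(rho=1.0e14, deps=0.01))
```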
The GB mobility is assumed isotropic. Its evolution with temperature is predicted using an Arrhenius relation, such as \(M=M_{0}\times\exp\left(\frac{-Q_{m}}{RT}\right)\), with \(M_{0}\) the pre-exponential mobility factor and \(Q_{m}\) the mobility apparent activation energy. The GB energy is described according to Read-Shockley equation [20]:
\[\gamma(\theta)=\begin{cases}\gamma_{max}\left(\frac{\theta}{\theta_{max}} \right)\left(1-\ln\frac{\theta}{\theta_{max}}\right),\text{ }\theta<\theta_{max},\\ \gamma_{max},\text{ }\theta\geq\theta_{max}.\end{cases} \tag{7}\]
\(\gamma_{max}\) is the HAGB energy and \(\theta_{max}\) the LAGB/HAGB disorientation threshold, set to \(15^{\circ}\).
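The two interface-property laws quoted above translate directly into short functions; in the sketch below, \(M_{0}\), \(Q_{m}\) and \(\gamma_{max}\) are assumed illustrative values, not the fitted zircaloy-4 parameters of appendix B.

```python
import numpy as np

R = 8.314                                   # gas constant (J/mol/K)
M0, Qm = 1.0e-6, 2.0e5                      # pre-exponential mobility and activation energy (assumed)
gamma_max = 0.6                             # HAGB energy (J/m^2), assumed
theta_max = np.deg2rad(15.0)                # LAGB/HAGB disorientation threshold

def mobility(T_kelvin):
    """Arrhenius grain boundary mobility M = M0 * exp(-Qm / (R*T))."""
    return M0 * np.exp(-Qm / (R * T_kelvin))

def gb_energy(theta_rad):
    """Read-Shockley grain boundary energy, Eq. (7)."""
    theta = np.asarray(theta_rad, dtype=float)
    t = np.clip(theta / theta_max, 1e-12, 1.0)
    return np.where(theta >= theta_max, gamma_max, gamma_max * t * (1.0 - np.log(t)))

print(mobility(650.0 + 273.15))
print(gb_energy(np.deg2rad([2.0, 10.0, 30.0])))
```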
**Second phase particle (SPP) pinning effect.** At each SPP position, remeshing operations are performed and elements removed. By imposing null Neumann boundary conditions at mesh boundaries for our FE-LS resolution, the holes representing SPP naturally exert a tension on the GB that is consistent with the Young-Herring equilibrium for incoherent particles [15, 21, 22, 23].
**Hardening and recovery.** The evolution of dislocation density due to deformation is taken into account using the Yoshie-Laasraoui-Jonas equation [19]:
\[d\rho=\left(K_{1}-K_{2}\rho\right)d\varepsilon. \tag{8}\]
\(K_{1}\) is the hardening parameter. In the present study, \(K_{1}\) varies from one grain to another. For each initial digital microstructure, it is set according to a distribution measured experimentally.
Evolution of \(\left(K_{1},\text{ }K_{2}\right)\) with thermomechanical conditions is described using equations 9 and 10 [24]. The yield stress, \(\sigma_{0}\), is described as a linear function of the Zener-Hollomon parameter logarithm (equation 11) [25].
\[K_{1}=K_{1}^{0}\varepsilon^{m_{h}}\exp\left(\frac{m_{h}Q_{h}}{RT}\right), \tag{9}\]
\[K_{2}=K_{2}^{0}\varepsilon^{-m_{r}}\exp\left(-\frac{m_{r}Q_{r}}{RT}\right), \tag{10}\]
\[\sigma_{0}=\sigma_{0}^{s}\times\ln\left(Z\right)+\sigma_{0}^{i}. \tag{11}\]
where \(\sigma_{0}^{s}\) and \(\sigma_{0}^{i}\) designate the slope and the intercept of the linear function, respectively.
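Equation (8) can be integrated explicitly along a strain path, which also exhibits the saturation value \(\rho_{sat}=K_{1}/K_{2}\) used later for the \(K_{1}\) identification. The sketch below uses assumed values of \(K_{1}\), \(K_{2}\) and the initial dislocation density, not the fitted ones.

```python
import numpy as np

K1, K2 = 3.0e15, 30.0          # hardening (m^-2) and recovery (-) parameters, assumed
rho0 = 1.0e12                  # initial dislocation density (m^-2), assumed

def integrate_rho(strain, n_steps=1000, rho=rho0):
    """Explicit integration of drho = (K1 - K2*rho) * deps up to a given strain."""
    deps = strain / n_steps
    for _ in range(n_steps):
        rho += (K1 - K2 * rho) * deps
    return rho

for eps in (0.1, 0.45, 1.0):
    print(eps, integrate_rho(eps), "saturation:", K1 / K2)
```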
**Stored energy gradients.** Full field front-capturing recrystallization models such as the LS one or the multi phase-field approaches generally consider a unique and uniform value per grain for the stored energy of deformation. However, in reality, dislocation density can vary significantly within a grain. Since the impact of localized and intense gradients on GB migration remains difficult to quantify, modeling it at the mesoscopic scale is not straightforward. In the context of the LS method, Ilin _et al._ proposed to consider a constant GB velocity per interface [26]. To do so, the dislocation density is averaged in the GB vicinity. The distance to the interface over which the dislocation density is averaged (\(w_{avg}\)) is a parameter. Another modeling strategy is also considered here, which consists in computing the energy difference locally, at each FE node. The different approaches are illustrated in Figure 3. The left side of this figure illustrates the initial configurations where, for each grain, the energy is averaged per grain (top), per interface (middle) or not averaged (bottom). A few finite elements, drawn in black, illustrate the energy fields at the corresponding finite element nodes. The different notations used in these figures are described by the following equations:
\[E_{k}\left(t\right)=\tau\rho_{k}\left(t\right)=\tau\int_{G_{k}}\rho\left( \mathbf{x},t\right)d\mathbf{x},\]
\[E_{kl}\left(t\right)=\tau\rho_{kl}\left(t\right)=\tau\int_{G_{kl}}\rho\left( \mathbf{x},t\right)d\mathbf{x},\]
with,
\[G_{k}\left(\mathbf{x},t\right)=\left\{\mathbf{x}\in\Omega,\phi_{k}\left( \mathbf{x},t\right)\geq 0\right\},\ G_{kl}\left(\mathbf{x},t\right)=\left\{ \mathbf{x}\in\Omega,min\left(\phi_{k}\left(\mathbf{x},t\right),\phi_{l}\left( \mathbf{x},t\right)+w_{avg}\right)\geq 0\right\}\]
At this stage, for the three strategies, the energy field of each grain is perfectly known in the corresponding grain. However, these fields must be extended outwards of their respective LS functions to be able to compute the velocity field defined in Eq.4. The direct reinitialization algorithm introduced in [12] computes, for each node, the distance to the nearest facet (segment in 2D, triangle in 3D) constituting the piece-wise linear 0-isovalue of the LS function, as illustrated by the blue vectors in the right side of Fig.3. These facets, which are not used in the FE resolution, are also illustrated in blue (dotted segments) in the right side of Fig.3. This algorithm has been modified and now also returns all the associated features of the crossed element. Thus, when the \(\phi_{i}\) function is reinitialized, the field describing the stored energy is extended over a fixed thickness into the neighboring grains. The right side of Fig.3 illustrates, by using this procedure, all the information shared by the FE nodes around the interface when the energy is averaged per grain (top), per interface (middle) and not averaged (bottom). This information can then be used to evaluate, at each FE node, the terms \(E_{j}(x,t)-E_{i}(x,t)\) of Eq.4.
To evaluate the effect of each implementation, five different approximation cases will be considered:
* dislocation density averaged per grain,
* dislocation density averaged per interface up to 100 \(nm\) from the GB (_Per interface; \(w_{avg}=100\)\(nm\)_),
* dislocation density averaged per interface up to 200 \(nm\) from the GB (_Per interface; \(w_{avg}=200\)\(nm\)_).
* Local dislocation density (_Local_),
* local dislocation density, with a pre-processing operation that consists in applying a 2D Gaussian filter to the dislocation density map. The standard deviation of the Gaussian filter is taken equal to \(2\)\(px=200\)\(nm\) (_Local, Gaussian filtered_).
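The sketch below illustrates, on a toy two-grain bicrystal, how these approximation cases differ in practice: the stored energy \(\tau\rho\) is either averaged over each grain, averaged within a band of width \(w_{avg}\) along the boundary, kept local, or kept local after Gaussian smoothing. The grid, dislocation densities, \(\mu\) and \(b\) are illustrative values only, and SciPy is used for the filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

mu, b = 33.0e9, 3.2e-10                      # shear modulus (Pa) and Burgers vector (m), typical values
tau = 0.5 * mu * b**2                        # dislocation line energy (J/m)
n, h = 128, 1.0e-7                           # grid size and spacing (m)
labels = np.zeros((n, n), dtype=int)
labels[:, n // 2:] = 1                       # grain 0 | grain 1, vertical boundary at x = (n/2)*h
rng = np.random.default_rng(0)
rho = np.where(labels == 0, 2.0e14, 0.5e14) * rng.lognormal(0.0, 0.3, (n, n))

w_avg = 2.0e-7                               # averaging distance along the GB
dist_to_gb = np.abs((np.arange(n) - n // 2 + 0.5) * h) * np.ones((n, 1))
band = dist_to_gb <= w_avg                   # nodes within w_avg of the boundary

per_grain = {g: tau * rho[labels == g].mean() for g in (0, 1)}
per_interface = {g: tau * rho[(labels == g) & band].mean() for g in (0, 1)}
local_energy = tau * rho                     # no averaging at all
local_smoothed = tau * gaussian_filter(rho, sigma=2)   # sigma = 2 px = 200 nm

print(per_grain)
print(per_interface)
```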
## 3 Results
### Characterization of CDRX and PDRX
Orientation maps corresponding to samples hot deformed at three deformation levels are provided in figure 4. These figures confirm that, to accommodate deformation, intragranular orientation gradients form progressively and lead to the development of subgrains. With deformation, grains become more and more elongated along the horizontal direction and the number of small grains tends to increase notably. These small grains seem to be grouped in clusters.
To analyze quantitatively the impact of hot deformation upon microstructure topology, the evolution of LAGB specific length and of grain \(\overline{ECD}\) are provided in figure 5. Figure 5b confirms the significant presence of LAGB. For all the conditions displayed here, at least 34% of the GB are LAGB. While the LAGB specific length generally increases with strain, it seems to stabilize at high strain levels. The LAGB length ratio decreases with strain, which indicates that the HAGB specific length increases at a faster rate. Finally, figure 5c shows that the grain size decreases significantly with deformation.
Figure 6 presents GND density map for a given hot deformation condition: \(T=650^{\circ}C;\ \dot{\varepsilon}=0.01\)\(s^{-1};\ \varepsilon=1.0\). This particular condition has been chosen since the high temperature, strain level and low strain rate favor CDRX. From this figure, one can observe that some of the smaller grains have a lower GND density. However, they do not stand out from the general grain population. Consequently, the recrystallized fraction remains low after hot deformation.
These results illustrate that CDRX operates by a progressive formation of LAGB. This phenomenon starts and prevails at low strains (\(\varepsilon\leq 0.5\)). Then, if sufficient stored energy of deformation is accumulated, LAGB progressively transform to HAGB. A large number of small grains is then observable and the LAGB length ratio decreases significantly. These small grains, however, do not present a very low GND density, as one could have expected. These microstructure changes are very progressive with deformation and distributed throughout the whole microstructure. It confirms that Zy-4 is experiencing CDRX over the range of conditions studied within the present work.
Figure 3: Illustrating schemes concerning the evaluation of \(E_{j}(x,t)-E_{i}(x,t)\) around the GB interfaces: (left side) available information per FE node before the reinitialization procedure, and (right side) after the reinitialization of \(\phi_{i}\) and \(\phi_{j}\).(top) Energy averaged per grain, (middle) energy averaged per interface and (bottom) not averaged.
#### 3.1.1 Effect of thermomechanical conditions over continuous and post-dynamic recrystallization
\(\overline{ECD}_{rx}\) and recrystallized fraction are plotted versus holding time, for different hot deformation conditions, in figure 7. It appears from figure 7a that decreasing final strain or strain rate leads to a significant decrease of recrystallization kinetics during subsequent holding at temperature. Figure 7b shows that decreasing the strain level from 1.0 to 0.45 does not modify the recrystallized grain growth kinetics. On the other hand, decreasing the strain rate from 1.0 \(s^{-1}\) to 0.1 \(s^{-1}\) significantly lowers that kinetics.
To complete this observation, the grain average GND density distributions are plotted in figure 8. One can observe from figure 8a that the grain average GND density distribution is shifted towards higher values with the strain rate. On the other hand, it appears that increasing the final strain tends to increase the number of grains with a low average dislocation density.
These results corroborate the following hypothesis:
* A strain rate increase leads to an increase of the stored energy of deformation and of the driving force for recrystallized grain growth. As a consequence, the average recrystallized grain size and the post-dynamic recrystallization kinetics both increase with strain rate.
Figure 4: EBSD orientation maps showing microstructure evolution during hot deformation (\(T=550^{\circ}C;\ \dot{\varepsilon}=0.1\ s^{-1}\)). **Eq** samples. IPF Y color code.
* A strain increase results in a higher number of grains presenting a stored energy advantage. Therefore, the recrystallization kinetics increases with the deformation level but the average size of recrystallized grains does not.
Figure 5: Evolution of the microstructure topology during hot deformation at \(T=650^{\circ}C\). **Eq** microstructure.
Figure 6: GND density map. **Eq** initial microstructure. Hot deformation conditions are \(T=650^{\circ}C;\ \dot{\varepsilon}=0.01\ s^{-1};\ \varepsilon=1.0\).
#### 3.1.2 Effect of initial microstructure over continuous and post-dynamic recrystallization
Six orientation maps are displayed in figure 9. They correspond to samples after hot deformation and after \(100~{}s\) at \(650^{\circ}C\). One can notice that, right after deformation, microstructures present different degrees of heterogeneity at the observation scale. It seems that an initial parallel plate microstructure displays a higher heterogeneity extent than an initial basket-weaved one, which in turn presents more heterogeneities than an equiaxed one. This feature appears to be retained after subsequent heat treatment. This is confirmed by EBSD maps displaying larger observation zones and provided in appendix A (figure 16).
One hypothesis is proposed to explain the impact of the initial microstructure over recrystallization. It assumes that the heterogeneities observed during CDRX and PDRX are caused by deformation heterogeneities, which are themselves conditioned by initial texture, grain morphology and grain size. **Eq** samples present a strong texture and most of the grains initially have their \(<\)c\(>\) axis orthogonal to the compression direction. Since the spread of initial grain orientation is low, most of the grains deform rather similarly and the deformation incompatibilities are weak. In lamellar microstructures, on the other hand, there is no initial texture. The grain initial orientations are distributed in the orientation space following the variant selection rule. Therefore, grains are more susceptible to deform differently. This generates deformation incompatibilities close to grain boundaries. Grain boundary vicinities thus present a higher GND density and constitute preferential sites for formation and growth of recrystallized grains. Therefore, lamellar microstructures present a more heterogeneous recrystallization behavior.
Between **BW** and **PP** samples, the extent of heterogeneity is also rather different. In **BW** microstructures, most of the lamellae are in contact with lamellae presenting different orientations. Since the lamella thickness is low, the deformation incompatibilities affect almost all the microstructure. In **PP** microstructures, only lamellae located at ex-\(\beta\) GB are in contact with lamellae of different orientations. Therefore, the areas around ex-\(\beta\) GB are impacted by deformation incompatibilities whereas an important part of the microstructure
Figure 8: Grain average GND density number distribution.
Figure 7: Evolution of properties related to recrystallization during holding at \(T=650^{\circ}C\). **Eq** samples.
does not experience such an effect. Therefore, there are fewer preferential zones for formation and growth of recrystallized grains and the recrystallization is even more heterogeneous for **PP** microstructures.
Figure 9: EBSD orientation maps of samples with different initial microstructure.
### Modeling CDRX and PDRX
The model parameters related to mechanical behavior are taken from literature. Grain boundary mobility parameters are fitted using experimental results from heat treatments performed on fully recrystallized samples and corresponding pure GG LS simulations as detailed in [27] for nickel based superalloys. The parameters for subgrain formation and evolution are set based on the recommendations of Gourdet [28].
Hardening and recovery parameters are identified using hot compression test results. \(\overline{K_{1}}\) and \(K_{2}\) are identified by fitting the experimental macroscopic stress-strain curves from the hot compression tests thanks to the Yoshie-Laasraoui-Jonas and Taylor equations. Then, the \(K_{1}\) distribution is set by measuring the experimental GAGND density distribution. This is based upon the Yoshie-Laasraoui-Jonas equation, which predicts that \(\rho_{sat}=K_{1}/K_{2}\). Therefore, if all grains have reached their saturation GND density value, the \(K_{1}\) distribution is equal to the \(\rho_{sat}\) distribution multiplied by a factor \(K_{2}\). Of course, such reasoning holds only if all grains have reached their saturation GND density. Since no increase of the average GND density is observed on the maps at different strain levels, this assumption can be considered valid.
The values used for the present study are provided in table 2 (appendix B).
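The per-grain \(K_{1}\) assignment described above amounts to a one-line operation once the grain-average GND densities are measured; the snippet below shows it with an assumed \(K_{2}\) and a synthetic GAGND distribution standing in for the experimental one.

```python
import numpy as np

K2 = 30.0                                              # assumed recovery parameter
rng = np.random.default_rng(1)
measured_gagnd = rng.lognormal(mean=np.log(1.0e14), sigma=0.4, size=500)  # m^-2, synthetic

# assuming saturation (rho_sat = K1 / K2) holds for every grain
K1_per_grain = K2 * measured_gagnd
print(K1_per_grain.mean(), np.percentile(K1_per_grain, [5, 95]))
```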
Experimental data are immersed in the considered FE-LS strategy and an average dislocation field per grain is considered here. Six simulation cases with different initial microstructures and thermomechanical conditions are performed:
* **Eq** initial microstructure, \(T=650^{\circ}C\); \(\dot{\varepsilon}=1.0\ s^{-1}\); \(\varepsilon=1.0\); \(dt=200\ s\),
* **Eq** initial microstructure, \(T=650^{\circ}C\); \(\dot{\varepsilon}=0.1\ s^{-1}\); \(\varepsilon=1.0\); \(dt=200\ s\),
* **Eq** initial microstructure, \(T=650^{\circ}C\); \(\dot{\varepsilon}=1.0\ s^{-1}\); \(\varepsilon=0.45\); \(dt=200\ s\),
* **BW** initial microstructure, \(T=650^{\circ}C\); \(\dot{\varepsilon}=1.0\ s^{-1}\); \(\varepsilon=1.2\); \(dt=200\ s\),
* **PP** initial microstructure, \(T=650^{\circ}C\); \(\dot{\varepsilon}=1.0\ s^{-1}\); \(\varepsilon=1.2\); \(dt=200\ s\).
Experimental and digital microstructures after 25 and 100 \(s\) are displayed in figure 10 for the three different initial microstructures. Additional results are provided in appendix C.
Several qualitative observations can be made from figure 10:
* the qualitative correspondence between experimental and digital microstructures is satisfactory. The model ability to reproduce preferential zones where recrystallized grains form and grow is noteworthy. In addition, the model captures the different levels of heterogeneity observed depending on the initial microstructure.
* The model underestimates the GND density at low holding times,
* The number of small grains seems underestimated. Experimentally, a large number of small grains are kept up to 100 \(s\) at 650\({}^{\circ}C\). This is not observed in simulation where these small grains tend to shrink and disappear.
Recrystallized fraction, average recrystallized grain size and LAGB specific length are plotted versus holding time in figures 11 and 12.
These results confirm that the model predictions for recrystallized fraction and average recrystallized grain size are satisfactory. It must be pointed out that the model presents some discrepancies. It significantly underestimates the recrystallized fraction and overestimates the average recrystallized grain size, for the following respective conditions: \(T=650^{\circ}C;\ \dot{\varepsilon}=1.0\ s^{-1};\ \varepsilon=1.0\) and \(T=650^{\circ}C;\ \dot{\varepsilon}=0.1\ s^{-1};\ \varepsilon=1.0\) (**Eq** microstructure). Finally, the model overestimates the decrease of LAGB specific length at the beginning of the PDRX regime, which is consistent with the observation regarding the shrinking of small grains and subgrains.
Figure 13 provides new insights by displaying both grain ECD and grain average GND density distributions. From this plot, it clearly appears that the simulations significantly underpredict the spread of both distributions. The simulations especially miss the growth of a limited number of subgrains and grains presenting a low GND value, and the persistence of large deformed grains.
Different modeling assumptions could potentially explain the differences between experimental and simulation results. The most probable are related to the anisotropy of GB properties and to the definition of GND density as a homogeneous variable within a subgrain or grain. Given that the formation of any particular texture component during PDRX is not observed, a dedicated study of the second hypothesis is preferred.
#### 3.2.1 Considering intragranular energy gradients in post-dynamic recrystallization simulations
The deformation model considered within the present study does not allow predicting the formation of intragranular heterogeneities. Therefore, to evaluate the influence of such features, simulations of only the PDRX regime are proposed. In this case, the initial microstructure after deformation is initialized using EBSD data. This makes it possible to define an initial deformed microstructure presenting intragranular GND heterogeneities.
Figure 11: Evolution of recrystallized fraction, average recrystallized grain size and LAGB specific length with holding time. Three different initial microstructures are considered: **Eq**, **BW** and **PP**. Hot deformation conditions are \(T=650^{\circ}C;\ \dot{\varepsilon}=1.0\ s^{-1};\varepsilon=1.0-1.2\).
Simulations with the five configurations presented in section 2.2.2 are run for the following conditions: **BW** initial microstructure, \(T=650^{\circ}C;\ \dot{\varepsilon}=1.0\ s^{-1};\ \varepsilon=1.2;\ dt=200\ s\).
Figure 14 presents the recrystallized fraction evolution with holding time. Figure 17, provided in appendix C, shows the five digital microstructures after a 100 \(s\) heat treatment. Figure 14 points out that the four formulations that consider heterogeneous intragranular GND density predict a slower recrystallization kinetics, which is in agreement with the experimental results. On the other hand, figure 17 confirms that all four formulations predict different local microstructure topologies. As expected, considering the local GND density difference leads to a significant increase of the interface tortuosity, especially at the beginning of the simulation. A zoom on a small zone displaying this aspect is presented in figure 18. This could even lead to the formation of new grains. Such a phenomenon presents many similarities with the strain-induced boundary migration (SIBM) mechanism.
Figure 15 displays experimental and simulation combined scatter plots after 100 \(s\) for three of the five formulations.
These results highlight that the _Local_ or _Per interface_ formulations predict significantly different grain size and grain average GND density distributions. These formulations better reproduce both the GND density and grain size distributions. They capture to a better extent the diversity of such microstructures. Large grains with a high GND density and small grains with a wide range of GND density are conserved up to longer holding times. Nevertheless, two major features are still not predicted with such numerical formulations. The number of large grains with a low dislocation density remains underestimated. This can be observed in figure 17, provided in appendix. At the same time, the number of grains with a low average GND density is largely overestimated.
Figure 15: Scatter plots displaying the grain ECD and grain average GND density distributions, for three simulations with different formulations for the computation of the driving pressure. Marginal number distributions. **BW** initial microstructure. Hot deformation conditions are \(T=650^{\circ}C;\ \dot{\varepsilon}=1.0\ s^{-1};\ \varepsilon=1.0-1.2;\ dt=100\ s\).
## 4 Conclusion
Zircaloy-4 dynamic and post-dynamic recrystallization have been studied using both experimental data and simulation tools. Samples presenting three initial microstructures (equiaxed, basket-weaved or parallel plate microstructures) have been hot deformed under various thermomechanical conditions. EBSD results have confirmed that:
* zircaloy-4 recrystallization is characterized by a large number of LAGB, formed progressively and throughout the whole microstructure. Grains are progressively broken up and, with increasing strain, a large number of small grains and subgrains form.
* After hot deformation, the recrystallized fraction is almost systematically null. Indeed, one cannot distinguish a significant fraction of grains presenting a low internal energy.
* The increase of PDRX kinetics caused by a final strain or strain rate increase can be attributed to the higher number of grains with a low dislocation density or to the higher dislocation density, respectively.
* Initial microstructure significantly impacts recrystallization. It has been shown that the degree of heterogeneity is different for each of the three initial microstructures considered. One can rank the three typical microstructures by heterogeneity as \(\mathbf{PP}>\mathbf{BW}>\mathbf{Eq}\).
A recently extended LS model has been applied to simulate CDRX and PDRX, for different thermomechanical conditions and initial microstructures. The results have shown that the model is able to capture with a very good agreement the influence of the initial microstructure over recrystallization. It is able to predict the persistence of large deformed grains upon subsequent heat treatment. Finally, additional simulations of PDRX considering intragranular heterogeneous GND density fields have pointed out the impact of some simulation assumptions. The results confirmed that intragranular heterogeneities play a significant role in the formation of a wider grain size distribution. They also raise the question of the adequate numerical formulation one should select to correctly take into account the effect of high stored energy gradients on GB migration during PDRX, and of the way to predict them during CDRX without using computationally expensive methodologies such as crystal plasticity FE formulations. 3D simulations and extension to other zirconium alloys are other perspectives of this work.
Figure 10: Snapshots on experimental (top) and simulation (bottom) GND density maps for one (a) \(\mathbf{Eq}\) case, (b) \(\mathbf{BW}\) case, and (c) \(\mathbf{PP}\) case. Hot deformation conditions are \(T=650^{\circ}C;\ \dot{\varepsilon}=1.0\ s^{-1};\varepsilon=1.0-1.2\).
Figure 12: Evolution of recrystallized fraction, average recrystallized grain size and LAGB specific length with holding time. **Eq** samples.
Figure 14: Evolution of recrystallized fraction with holding time, for the five different formulations. **BW** initial microstructure. Hot deformation conditions are \(T=650^{\circ}C;\ \dot{\varepsilon}=1.0\ s^{-1};\ \varepsilon=1.2\).
Figure 13: Scatter plots displaying the grain ECD and grain average GND density distributions. Marginal number distributions. **Eq** initial microstructure. Hot deformation conditions are \(T=650^{\circ}C;\ \dot{\varepsilon}=1.0\ s^{-1};\ \varepsilon=1.0;\ dt=100\ s\).
## Appendix A Supplementary EBSD orientation maps
Figure 16 consists of orientation maps of samples deformed under the following conditions \(T=650^{\circ}C\); \(\dot{\varepsilon}=1.0~{}s^{-1}\); \(\varepsilon=1.0-1.2\) and held at \(650^{\circ}C\) for \(100~{}s\).
Figure 16: EBSD orientation maps for different initial microstructures (\(T=650^{\circ}C\); \(\dot{\varepsilon}=1.0~{}s^{-1};\varepsilon=1.0-1.2\); \(dt=100~{}s\)). IPF Y color code.
## Appendix B Model parameters
Figure 17: GND density maps from simulation results considering intragranular heterogeneous GND density fields. **BW** initial microstructure. \((T=650^{\circ}C;~{}\dot{\varepsilon}=1.0~{}s^{-1};\varepsilon=1.2;~{}dt=100~{}s)\). |
2302.13381 | DFT+U study of UO$_2$: Correct lattice parameter and electronic band-gap | Hubbard-corrected density functional theory, denoted by DFT+U method, was
developed to enable correct prediction of insulating properties for
strongly-correlated electron systems. UO$_2$ is an example having O-$2p$,
U-$6d$, and U-$5f$ incomplete electronic shells. Usually, researchers apply the
Hubbard correction only to the localized incomplete $5f$ electrons of U atoms
and succeed to predict insulating property and good geometric properties by
tweaking the Hubbard-U parameter. However, it turned out that in such a way it
was impossible to obtain reasonable values for both geometry and electronic
band-gap at the same time. In this work, we show that it is possible to produce
good values for those properties just by applying and tuning the Hubbard
corrections to all incomplete shells of O-$2p$, U-$6d$, and U-$5f$. | Mahmoud Payami | 2023-02-26T18:37:59Z | http://arxiv.org/abs/2302.13381v1 | # DFT+U study of UO\({}_{2}\): Correct lattice parameter and electronic band-gap
###### Abstract
Hubbard-corrected density functional theory, denoted by DFT+U method, was developed to enable correct prediction of insulating properties for strongly-correlated electron systems. UO\({}_{2}\) is an example having O-2\(p\), U-6\(d\), and U-5\(f\) incomplete electronic shells. Usually, researchers apply the Hubbard correction only to the localized incomplete 5\(f\) electrons of U atoms and succeed in predicting insulating behavior and good geometric properties by tweaking the Hubbard-U parameter. However, it turned out that in such a way it was impossible to obtain reasonable values for both geometry and electronic band-gap at the same time. In this work, we show that it is possible to produce good values for those properties just by applying and tuning the Hubbard corrections to all incomplete shells of O-2\(p\), U-6\(d\), and U-5\(f\).
## I Introduction
UO\({}_{2}\), as a common fuel for nuclear power reactors, has attracted the interest of researchers seeking a better theoretical description within the DFT+U approach.[1; 2; 3; 4] Uranium dioxide has a 3D anti-ferromagnetic (AFM) crystal structure at temperatures less than 30 K,[5; 6] but usually a simpler 1D-AFM model is used for the description. A recent XRD experiment[7] has shown that UO\({}_{2}\) crystallizes with a cubic space group \(Pa\bar{3}\) (No. 205). However, if the structure is modeled by a slightly different but more symmetric cubic space group \(Fm\bar{3}m\) (No. 225) with experimental lattice constant of 5.47 Å, which is shown in Fig. 1(a), then the structure can be represented by a simple tetragonal unit cell with 6 atoms as shown in Fig. 1(b).
Experiment has shown[8] that UO\({}_{2}\) is electrically an insulator with a gap of 2.10 eV. Ordinary approximations in density-functional theory (DFT) such as local-density approximation (LDA)[9; 10] or semi-local approximations such as generalized gradient approximation (GGA) [11] for the localized orbitals usually lead to incorrect metallic behavior. One workaround is to estimate the interactions of localized orbitals using the Hubbard model and add it to the DFT energy functional and then subtract the double-counting contributions from the DFT energy functional:[1; 12; 13; 14]
\[E_{DFT+U}=E_{DFT}-E_{dc}+E_{Hub}. \tag{1}\]
The interaction term in Hubbard model, when the Hamiltonian is represented in the basis of strongly localized Wannier functions, is written as:
\[E_{Hub}=U\sum_{i}n_{i\uparrow}n_{i\downarrow} \tag{2}\]
where \(U\) is a real number, \(n_{i\sigma}\) with \(\sigma=\uparrow,\downarrow\) denote the particle number operators, and \(i\) specifies the lattice site \(\mathbf{R}_{i}\). For positive values of \(U\), the interaction behaves as on-site repulsion among the electrons, while on the other hand, negative values of \(U\) means that there exist on-site attraction among electrons.
In previous DFT+U calculations for UO\({}_{2}\), the on-site Hubbard correction with positive interaction parameter \(U_{\mathrm{U-5f}}\) was applied only to the 5\(f\) electrons of uranium atoms, which led to gap opening and thus correct insulating behavior. However, the gap size and geometric properties such as the lattice constant both depend on the interaction parameter. By tuning this on-site parameter, it is possible to reproduce only one of those properties: the gap size or the lattice constant. As is seen from Fig. 2, for the approximation used here, the correct band gap is reproduced by assuming \(U_{\mathrm{U-5f}}\)=3.2 eV while the correct lattice constant is reproduced by taking \(U_{\mathrm{U-5f}}\)=4.8 eV. In this work, we have extended the Hubbard correction to cover the 6\(d\) orbitals of U atoms as well as the 2\(p\) orbitals of O atoms, and determine the relevant interaction parameter values that reproduce both the band-gap and the lattice constant of the GS in very good agreement with experiment.
Figure 1: (a)- UO\({}_{2}\) crystal structure with cubic space group \(Fm\bar{3}m\) (No. 225) and lattice constant of 5.47 Å; (b)- description by a simple tetragonal crystal structure with six atoms. Gray and small red balls represent uranium and oxygen atoms, respectively.
The organization of this paper is as follows. In Section II, we explain the computational details; in Section III the calculated results are presented and discussed; and finally in Section IV we conclude this work.
## II Computational Details
The DFT+U calculations are done by solution of the KS equations using the Quantum-ESPRESSO code package [15; 16]. Ultra-soft pseudo-potentials (USPP) are used for U and O atoms, which have been generated by the _atomic_ code, using the generation inputs (with small modifications for more desired results) from the _pslibrary_, [17] at [https://github.com/dalcorso/pslibrary](https://github.com/dalcorso/pslibrary). The valence configurations of U(\(6s^{2},\,6p^{6},\,7s^{2},\,7p^{0},\,6d^{1},\,5f^{3}\)) and O(\(2s^{2},\,2p^{4}\)) were used in the generation. The relativistic effects were accounted for at the level of the scalar-relativistic (SR) approximation,[18] which has been shown to give reasonable GS geometric results[4] for \(U_{\mathrm{U-5f}}\)=4.53 eV when the Perdew-Zunger [19] (PZ) LDA approximation was used for the exchange-correlation, and the projection onto Hubbard orbitals was chosen to be atomic orbitals that were not orthonormalized. The appropriate kinetic energy cutoffs for the plane-wave expansions were chosen as 90 and 720 Ry for the wavefunctions and densities, respectively. Also, the Methfessel-Paxton smearing method [20] for the occupations with a width of 0.01 Ry is used for a better convergence process. For the Brillouin-zone integrations in geometry optimizations, an \(8\times 8\times 6\) grid was used. All geometries were fully optimized for total residual pressures on unit cells to within 0.5 kbar, and residual forces on atoms to within \(10^{-3}\) mRy/a.u. The occupation matrix control (OMC)[1] is used to avoid metastable states. The starting magnetization for oxygen atoms is set to zero and for U atoms it is set to \(\pm 0.5\) to make an anti-ferromagnetic (AFM) configuration along the \(z\) direction. Since in the present work we apply Hubbard corrections to the \(5f\), \(6d\) localized orbitals of U atoms and the \(2p\) orbitals of O atoms, we have occupation matrices of dimensions \(7\times 7\), \(5\times 5\), and \(3\times 3\), respectively. Applying the Hubbard correction for each of the orbitals U-\(5f\), U-\(6d\) and O-\(2p\) separately, one at a time, showed that only the Hubbard corrections to U-\(5f\) orbitals lead to metastable states and the other two are insensitive to initial occupations. Since in the U-atom pseudo-potential the \(5f\) orbital is occupied by 3 electrons, we have \(C_{3}^{7}=35\) different ways for occupying the diagonal elements of the \(7\times 7\) matrix by 3 electrons: [1110000], [1101000], [1100100], \(\cdots\), [0001011], [0000111].
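For reference, the OMC bookkeeping mentioned above can be enumerated directly: the snippet below lists the \(C_{3}^{7}=35\) diagonal occupation patterns of the U-5\(f\) occupation matrix in the [1110000], [1101000], ... notation used in the text. It is only the combinatorial enumeration, not a Quantum-ESPRESSO input.

```python
from itertools import combinations

def f_occupation_patterns(n_orbitals=7, n_electrons=3):
    """Enumerate all ways of placing n_electrons on n_orbitals diagonal entries."""
    patterns = []
    for filled in combinations(range(n_orbitals), n_electrons):
        occ = [1 if i in filled else 0 for i in range(n_orbitals)]
        patterns.append("".join(map(str, occ)))
    return patterns

patterns = f_occupation_patterns()
print(len(patterns))                # 35
print(patterns[:3], patterns[-2:])  # ['1110000', '1101000', '1100100'] ['0001011', '0000111']
```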
## III Results and Discussions
Examining Hubbard corrections for the \(5f\), \(6d\) orbitals of the U atom and the \(2p\) orbitals of the O atom separately, one at a time, shows that only the correction on \(5f\) is able to open an energy gap and give a reasonable lattice constant. The situation is shown in Figs. 2-4.
Inspecting Fig. 2(a) it is seen that the value for experimental lattice constant is reproduced around \(U_{\mathrm{U-5f}}\sim 4.8\) eV while the correct band gap is reproduced by assuming \(U_{\mathrm{U-5f}}\)=3.2 eV. This implies that with only one correction parameter (i.e., \(U_{\mathrm{U-5f}}\)) one fails to reproduce reasonable values for both lattice constant and the band gap at the same time.
In Fig. 2(b), the deviation from cubic geometry \((c-a)\) is shown to be very small, \(\sim 0.01\AA\), so that modeling the system by 1D-AFM (instead of 3D-AFM) does not cause any significant error in this study. On the other hand, Figs. 3 and 4 show that the value of lattice constant is relatively insensitive to the values \(U_{\mathrm{U-6d}}\) and \(U_{\mathrm{O-2p}}\). These results hint that one should do fine-tuning of \(U_{\mathrm{U-5f}}\) around the value of 4.0 eV. In addition, similar to the result in Fig. 2(b), the deviations from cubic geometries in Figs. 3-4 are negligible.
In the next step, we apply Hubbard corrections to two orbitals at a time. It was shown above that the correct
Figure 2: (a)- Lattice constant \(a\) in \(\AA\) and band gap in eV as functions of Hubbard correction strength \(U_{\mathrm{U-5f}}\); (b)-Deviation form cubic geometry, \((c-a)\), in \(\AA\) as function of Hubbard correction strength \(U_{\mathrm{U-5f}}\) of U-atom.
lattice constants were reproduced by applying the correction only to U-5\(f\) with a strength of \(\sim 4.00\) eV. So, we consider the correction to U-5\(f\) with strength 4.00 eV as the main correction and add that for U-6\(d\) as a background one with different values. The result is shown in Fig. 5. As is seen from Fig. 5, adding the background correction for U-6\(d\) barely changes the lattice constant for \(U_{\rm U-6d}<3.0\) eV, and so we ignore the background correction for U-6\(d\).
We now concentrate on adding the background correction for the O-2\(p\) orbitals. As is seen from Fig. 6, in contrast to the case of U-6\(d\), here the background correction to O-2\(p\) orbitals significantly modifies the results attained by the correction on U-5\(f\). That is, in order to maintain the reasonable value for the lattice constant, one should use negative values for the Hubbard correction parameter for O-2\(p\) orbitals, meaning that the background correction acts as an on-site attraction. To summarize, Fig. 6 indicates that the combination of two simultaneous corrections with \(U_{\rm U-5f}=4.0\) eV and \(U_{\rm O-2p}=-3.00\) eV revives the reasonable value for the lattice constant. But now we expect that the electronic band gap is changed from the value of 2.91 eV obtained when only the U-5\(f\) correction was applied.[4] Fig. 6 also indicates that the deviation from cubic geometry is still acceptable.
To have a closer inspection on the effect of negative values for \(U_{\rm O-2p}\), we have calculated the GS lattice constants and electronic band gaps for different values of \(U_{\rm U-5f}\), keeping \(U_{\rm O-2p}\) fixed at three values of -3.00, -3.50, and -4.00 eV. The results are presented in Table 1.
In order to estimate the proper combinations of Hubbard strengths for \(U_{\rm U-5f}\) and \(U_{\rm O-2p}\) for a desired value of band gap (2.00, 2.10, 2.20 eV), we have plotted the data of Table 1 in Fig. 7.
As we see from Fig. 7, since here we have chosen three fixed values for \(U_{\rm O-2p}\), there exist three different combinations of the Hubbard strengths \(U_{\rm U-5f}\) and \(U_{\rm O-2p}\) for each desired value of band gap. To verify the validity of this guess, we have calculated
Figure 4: The same as in Fig. 3 but for Hubbard correction strength \(U_{\rm O-2p}\).
Figure 3: Lattice constant \(a\) and deviation from cubic geometry \((c-a)\) in \(\AA\) as functions of Hubbard correction strength \(U_{\rm U-6d}\).
Figure 6: The same as in Fig. 5 as functions of Hubbard correction strength \(U_{\rm O-2p}\) for fixed value of \(U_{\rm U-5f}\)=4.00 eV.
the nine combinations of Hubbard strengths hinted at by the plots of Fig. 7 and presented the results in Table 2.
From the data in Table 2, we see that, by applying simultaneous on-site Hubbard corrections to the U-5\(f\) and O-2\(p\) orbitals, it is possible to tune both the lattice constant and the band gap to their experimental values.
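The mapping from a target band gap to a Hubbard strength at fixed \(U_{\rm O-2p}\) amounts to a simple interpolation of the tabulated values; the sketch below does this with two rows of Table 1 for \(U_{\rm O-2p}=-3.00\) eV, purely as an illustration of how the combinations listed in Table 2 can be read off.

```python
import numpy as np

# two bracketing rows of Table 1 at U_O-2p = -3.00 eV
u_vals   = np.array([3.40, 3.50])      # U_U-5f in eV
gap_vals = np.array([2.0308, 2.0900])  # corresponding band gaps in eV

target_gap = 2.05                      # desired gap (eV), illustrative
u_needed = np.interp(target_gap, gap_vals, u_vals)
print(round(u_needed, 3), "eV")        # ~3.43 eV for a 2.05 eV gap
```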
## IV Conclusions
In previous theoretical studies of the UO\({}_{2}\) crystal, in order to predict correct insulating behavior, researchers used Hubbard corrections for the U-5\(f\) localized orbitals in the DFT+U approach. It was already shown that, depending on what XC functional is used and whether the Hubbard orbitals were orthonormalized or not, for a given Hubbard-\(U\) parameter (say 4.0 eV) different results were obtained for the equilibrium lattice constant and the KS band gap. None of those results were satisfactory in predicting simultaneously reasonable values for the lattice constant and the size of the band gap. In this work, employing the LDA-PZ scheme for the XC energy functional, we have shown that, by applying the on-site Hubbard corrections simultaneously to the U-5\(f\) and O-2\(p\) orbitals, one can choose certain values to obtain results for both the lattice constant and the energy band gap of the ground state in good agreement with experiment.
## Acknowledgement
This work is part of the research program of the School of Physics and Accelerators, NSTRI, AEOI.
## Data Availability
The raw or processed data required to reproduce these results can be shared with anybody interested upon sending an email to M. Payami.
\begin{table}
\begin{tabular}{l c c c} \(\tilde{E}_{gap}\) (eV) & \(U_{\rm U}\), \(U_{\rm O}\) (eV) & \(a\) (\(c\)) (Å) & \(E_{gap}\) (eV) \\ \hline
2.00 & 3.40, -3.00 & 5.4560 (5.4735) & 2.03 \\ & 3.45, -3.50 & 5.4592 (5.4768) & 2.03 \\ & 3.48, -4.00 & 5.4622 (5.4802) & 2.01 \\ \hline
2.10 & 3.50, -3.00 & 5.4579 (5.4751) & 2.09 \\ & 3.60, -3.50 & 5.4623 (5.4795) & 2.11 \\ & 3.65, -4.00 & 5.4658 (5.4832) & 2.10 \\ \hline
2.20 & 3.70, -3.00 & 5.4619 (5.4784) & 2.21 \\ & 3.78, -3.50 & 5.4661 (5.4827) & 2.21 \\ & 3.84, -4.00 & 5.4697 (5.4865) & 2.21 \\ \end{tabular}
\end{table}
Table 2: Hubbard parameters \(U_{\rm U}\), \(U_{\rm O}\), in eV, needed to obtain a desired band gap \(\tilde{E}_{gap}\), along with the resulting lattice constants \(a\) (\(c\)) and band gap \(E_{gap}\).
\begin{table}
\begin{tabular}{c c c c} \(U_{\rm O}\) (eV) & \(U_{\rm U}\) (eV) & \(a\) (\(c\)) (\(\hat{\Lambda}\)) & \(E_{gap}\) (eV) \\ \hline -3.00 & 3.00 & 5.4477 (5.4667) & 1.7651 \\ & 3.10 & 5.4498 (5.4684) & 1.8397 \\ & 3.20 & 5.4518 (5.4701) & 1.9096 \\ & 3.30 & 5.45391 (5.4718) & 1.9706 \\ & 3.40 & 5.4559 (5.4734) & 2.0308 \\ & 3.50 & 5.4579 (5.4751) & 2.0900 \\ & 3.60 & 5.4599 (5.4767) & 2.1484 \\ & 3.70 & 5.4619 (5.4784) & 2.2059 \\ & 3.80 & 5.4644 (5.4795) & 2.2623 \\ & 3.90 & 5.4666 (5.4817) & 2.3180 \\ & 4.00 & 5.4682 (5.4838) & 2.3731 \\ \hline -3.50 & 3.00 & 5.4499 (5.4693) & 1.7387 \\ & 3.10 & 5.4520 (5.4710) & 1.8125 \\ & 3.20 & 5.4541 (5.4727) & 1.8789 \\ & 3.30 & 5.4561 (5.4744) & 1.9937 \\ & 3.40 & 5.4581 (5.4776) & 2.0556 \\ & 3.60 & 5.4622 (5.4794) & 2.1129 \\ & 3.70 & 5.4642 (5.4812) & 2.1691 \\ & 3.80 & 5.4665 (5.4830) & 2.2245 \\ & 3.90 & 5.4685 (5.4847) & 2.2789 \\ & 4.00 & 5.4705 (5.4864) & 2.3320 \\ \hline -4.00 & 3.00 & 5.4522 (5.4720) & 1.7112 \\ & 3.10 & 5.4543 (5.4737) & 1.7842 \\ & 3.20 & 5.4564 (5.4754) & 1.8472 \\ & 3.30 & 5.4584 (5.4769) & 1.9057 \\ & 3.40 & 5.4604 (5.4786) & 1.9636 \\ & 3.50 & 5.4626 (5.4805) & 2.0206 \\ & 3.60 & 5.4648 (5.4823) & 2.0765 \\ & 3.70 & 5.4669 (5.4841) & 2.1315 \\ & 3.80 & 5.4689 (5.4858) & 2.1855 \\ & 3.90 & 5.4708 (5.4874) & 2.2383 \\ & 4.00 & 5.4728 (5.4891) & 2.2902 \\ \end{tabular}
\end{table}
Table 1: Lattice constants in \(\AA\) and \(E_{gap}\) in eV as functions of \(U_{\rm U-5f}\) for fixed values of the background correction \(U_{\rm O-2p}\)=-3.00, -3.50, and -4.00 eV.
Figure 7: Variation of the band gap as a function of \(U_{\rm U-5f}\) for different fixed values of \(U_{\rm O-2p}\). It is seen that, in this example, to obtain each desired value of \(E_{gap}\)=2.00, 2.10, and 2.20 eV one has three choices for the Hubbard strength combinations.
2310.14827 | Pseudo-orthogonal Yang-Mills theories and connections to gravity | We formulate gauge theories on noncompact Lorentzian manifolds. For
definiteness we choose an SO(1,4) gauge theory -- the isometry group of the
five dimensional Minkowski space. We make use of the natural inner product to
construct the Yang-Mills gauge action on four dimensional spacetime, on which
the natural tetrad and metric are induced, thus breaking the symmetry to that
of general relativity. In the low energy limit -- if a suitable gauge field
condensate develops -- the theory reduces to the Cartan-Einstein gravity, which
harbors nondynamical torsion, and is consistent with all observations. We also
discuss how to couple our gauge theory of gravity to scalar and vector matter.
The Hamiltonian analysis shows that the theory possesses no Ostrogradsky
instabilities, however it harbors a kinetic instability. We conjecture that
such a kinetic instability can be removed either by generalizing the theory to
the nonlinear Born-Infeld theory, or by constraining the kinetic instability.
This work is an attempt to formulate gravity as a unitary, renormalizable gauge
theory without instabilities, in which the fundamental propagating degrees of
freedom are in the spin-one tetrad connection. | Giovanni Mistretta, Tomislav Prokopec | 2023-10-23T11:47:12Z | http://arxiv.org/abs/2310.14827v2 | # Pseudo-orthogonal Yang-Mills theories and connections to gravity
###### Abstract
We formulate gauge theories on noncompact Lorentzian manifolds. For definiteness we choose an SO(1,4) gauge theory - the isometry group of the five dimensional Minkowski space. We make use of the natural inner product to construct the Yang-Mills gauge action on four dimensional spacetime, on which the natural tetrad and metric are induced, thus breaking the symmetry to that of general relativity. In the low energy limit - if a suitable gauge field condensate develops - the theory reduces to the Cartan-Einstein gravity, which harbors nondynamical torsion, and is consistent with all observations. We also discuss how to couple our gauge theory of gravity to scalar and vector matter. The Hamiltonian analysis shows that the theory possesses no Ostrogradsky instabilities, however it harbors a kinetic instability. We conjecture that such a kinetic instability can be removed either by generalizing the theory to the nonlinear Born-Infeld theory, or by constraining the kinetic instability. This work is an attempt to formulate gravity as a unitary, renormalizable gauge theory without instabilities, in which the fundamental propagating degrees of freedom are in the spin-one tetrad connection.
## 1 Introduction
The first gravitational theory oversaw the birth of theoretical physics; perhaps the last one will see its end. For almost a century, researchers from all over the world have tried to properly quantize the gravitational field. Up to now, no convincing solution to this problem has been found. Starting from general relativity in 1915, the theory has been extended and generalized in many different ways (see [1] for a review). Among these theories, string theory is the only one in which Einstein's field equations are obtained without adding the Hilbert-Einstein action to the theory by hand, or introducing it as a counterterm as in some induced gravity theories. Indeed, gravity (on a world-sheet) emerges in string theory by setting to zero the beta functions of the theory, imposing conformality at the quantum level [2]. The goal of this paper is to show that there exists another class of theories, obtained by a suitable generalization of the well-known Yang-Mills theories, that contains Einstein's theory in its torsionless low-energy limit.
As the Standard Model teaches us, fundamental interactions in Nature are mediated (at low-energy scales) by gauge fields of the Yang-Mills type. These theories are proved to be renormalizable, which motivates us to look at gauge theories for a possible solution to the renormalizability problem of quantum gravity. Moreover, it is well-known that the renormalization of flat spacetime theories generically induces higher order geometric scalars with respect to the Ricci scalar present in the Hilbert action [3]. Usually these terms provide instabilities of the Ostrogradsky kind [4], due
to the presence in the action of quadratic second-order time derivatives of some components of the metric tensor. Since the Yang-Mills theory is fundamentally a theory of a field strength squared, it seems natural to use an appropriate gauge group in order to reproduce at least these counterterms in a stable way. This is possible because a Yang-Mills theory is a first-order formalism similar to the Palatini formalism in GR (_i.e._ the variational principle for the Hilbert action in which the Christoffel symbols are treated as independent variables rather than being fixed to the Levi-Civita connection [5]). Another reason why Yang-Mills theories have a chance of explaining gravity is the _geometrical structure_ that underlies them. General relativity and Yang-Mills theories [6] are two of the most important examples of differential geometry in theoretical physics. It is therefore natural to pose the question whether one can use the geometry of gauge theories to derive general relativity.
Guided by the fact that linear gravity is a spin-2 field theory [7], and by the observation that general relativity (GR) can be viewed as the low-energy limit of the more general effective theory of gravity [8], we are inspired to use spin-1 Yang-Mills fields (whose product is known to form a spin-2 representation) to describe a theory that reduces to GR in the low-energy limit. 1 This work is an attempt to define a _geometrical Yang-Mills theory_ in four spacetime dimensions, whose low energy limit is GR.
Footnote 1: Interestingly, Witten proved [9] that the 2+1 general relativity is equivalent to the Chern-Simons theory β a topological Yang-Mills theory.
Following the work by James T. Wheeler [10] and Juan Trujillo [11], we studied the possibility of obtaining a gravitational theory such as Weyl-squared gravity from a Yang-Mills theory of the conformal group. Soon we realized that, in order to intertwine the geometry of spacetime with the geometrical structure of its gauge theory, we needed a way to define the metric tensor in a non-trivial gauge-theoretical way. Here we develop a new class of Yang-Mills theories, called geometrical Yang-Mills theories. We will show that it is possible to define a metric for spacetime using part of the gauge fields as cotetrad fields. Pseudo-orthogonal groups will turn out to be the best choice for our gauge groups. In particular, the de Sitter theory SO(1,4) will reduce, in its torsionless low-energy limit, to general relativity with the appearance of a gauge-theoretical Planck mass and cosmological constant. Other groups are also worth considering, such as ISO(1,3) and SO(2,3), which are the isometry groups of four-dimensional Minkowski and anti-de Sitter space, respectively.
Figure 1: Spacetime (yellow) embedded in de Sitter space (purple); the fibers represent the frame bundle of the de Sitter group.
Earlier attempts to formulate gravity as a gauge theory include the work of MacDowell and Mansouri [12], in which a geometric gauge theory of gravity was constructed. The authors used the (incomplete) Levi-Civita tensor of the symmetry space to define the inner product, with the tetrad field vanishing on-shell. The choice of the inner product was declared natural, but never properly justified. However, because their inner product is _not_ compatible with the gauge covariant derivative, the theory is _not topological_ and - in its low-energy limit - it reduces to general relativity (in vacuum). Wilczek [13, 14] modified the theory by introducing a scalar field in the adjoint representation and a symmetry-breaking potential, thus making the theory manifestly gauge invariant, but still with an inner product that is not compatible with the gauge covariant derivative. BF theories can also be used to unify matter and gravity. In Ref. [22] gravity is described by an \(SU(2)_{R}\) connection, which at high energies gets unified with the \(SU(2)_{L}\) of the electroweak theory.
In this work we revisit the question of the natural inner product on the group space, and opt for the one constructed from the Killing metric on the group space, which is the inner product underlying the Standard Model gauge theories, and which _is_ compatible with the covariant derivative. In our rendering of the theory the spacetime action emerges as a projection of the 5-dimensional space (manifold) on which the de Sitter group fibration is defined, as illustrated in figure 1. This projection (in yellow) then selects a natural tetrad, with respect to which the spacetime volume is defined, thus breaking the original gauge symmetry down to SO(1,3) - the symmetry of general relativity.
The two above mentioned theories - the MacDowell-Mansouri and Wilczek theories - can be considered as special realizations of the more general BF-theories [15] (or slightly more general Holst theories [16]), reviewed in [17], with a particular choice of the B-tensor. These theories are topological or not, depending on whether the B-tensor is chosen to be non-dynamical or dynamical, respectively. Various versions of the BF-theory have been studied in the literature [18, 19, 20, 21], see [17] for a more complete account of the literature.
The paper is organised as follows. Section 2 is dedicated to the definition of geometrical Yang-Mills theory, showing the necessity of studying pseudo-orthogonal gauge groups. In Section 3 we develop the de Sitter gauge theory we have already mentioned. It is the simplest consistent example of a geometrical Yang-Mills theory that contains the Ricci scalar as part of the action. Finally, in Section 4 we introduce a Hamiltonian formalism for generic Yang-Mills theories in curved dynamical spacetimes. In particular, we focus on the constraints of the theory, establishing their class and their self-consistency conditions. We conclude by addressing the missing steps of a proper instability analysis, and by giving an outlook for the theories we have developed.
## 2 Geometrical Yang-Mills theories
This section is dedicated to the mathematical construction of some special Yang-Mills theories that we call _geometrical_. The reason for this name is that we will study the implications of using part of the gauge connection as a tetrad. Throughout the following, fix a manifold \(M\) and a principal \(G\)-bundle \(P\to M\) (\(G\) being a Lie group). Our goal is to define the metric on \(M\) through the gauge connections. Let us take the gauge connection to be \(\boldsymbol{\omega}=\tilde{\boldsymbol{\rho}}+\boldsymbol{\sigma}\), where \(\boldsymbol{\omega},\tilde{\boldsymbol{\rho}},\boldsymbol{\sigma}\in T^{*}P\), with \(T^{*}P\) denoting the cotangent space. We consider \(\tilde{\boldsymbol{\rho}}\) to be the part of the gauge connection that defines the metric, while the metric is independent of \(\boldsymbol{\sigma}\). The split into \(\tilde{\boldsymbol{\rho}}\) and \(\boldsymbol{\sigma}\) induces a corresponding split of the gauge algebra, since these forms take values in the Lie algebra \(\mathfrak{g}\) of the gauge group. Next, we introduce \(\left\{a_{i}\right\}_{i=1,\ldots,N_{A}}\) and \(\left\{b_{j}\right\}_{j=1,\ldots,N_{B}}\) with \(N_{A}+N_{B}=N=\dim(\mathfrak{g})\) such that we can write,
\[\boldsymbol{\omega}=\omega^{i}\otimes\hat{e}_{i}=\tilde{\boldsymbol{\rho}}+ \boldsymbol{\sigma}=\tilde{\rho}^{i}\otimes a_{i}+\sigma^{j}\otimes b_{j}, \tag{1}\]
where \(\left\{\hat{e}_{i}\right\}_{i=1,\ldots,N}\) is a basis for the Lie algebra \(\mathfrak{g}\). We are then tempted to define the metric tensor as,
\[g\equiv\eta_{ab}\tilde{\rho}^{a}\otimes\tilde{\rho}^{b}, \tag{2}\]
where \(\eta_{ab}\left(a,b=0,1,2,3\right)\) is the Minkowski metric, _i.e._\(\mathrm{diag}(-1,1,1,1)\). Notice that, if we intend to consider the fields \(\tilde{\rho}^{i}\) equivalent to standard tetrad fields, we need more properties. In particular, \(N_{A}\) must be equal to \(n\), dimensionality of \(M\). Moreover, dimensional analysis gives a dimensionless connection 1-form (for the non-tetrad fields) and the dimension of an inverse mass for the tetrad 1-form. Since the connection 1-forms and the tetrad fields are part of the same connection 1-form they need to have the same mass dimension. We can then write \(\rho^{a}=m^{-1}\tilde{\rho}^{a}\), where \(m\) is a constant with the dimension of a mass, and define the dimensionless metric the same as in the previous expression,
\[g=\eta_{ab}\rho^{a}\otimes\rho^{b}=m^{-2}\eta_{ab}\tilde{\rho}^{a}\otimes \tilde{\rho}^{b}. \tag{3}\]
However, this equation does not actually define a metric on \(M\) since, strictly speaking, the tensor constructed in this way lies in \(T^{*}P\otimes_{\mathrm{sym}}T^{*}P\), where \(\otimes_{\mathrm{sym}}\) denotes the symmetrized tensor product. We then consider \(\rho^{a}\) in Eq. (3) to be the local connection 1-form (on \(M\)) associated with \(\boldsymbol{\rho}\) and a section \(s\in\Gamma(M;P)\) (here \(\Gamma(M;P)\) stands for the set of sections of the principal G-bundle \(P\)). Notice that making another choice for \(s\) changes the metric definition, unless the change of gauge (_i.e._
a change of section \(s\to s^{\prime}\)) leaves Eq. (3) invariant. Figure 1 shows the original 5-dimensional manifold 2 (purple) over which the fibration \(P\) is constructed, and the spacetime manifold \(M\) is denoted by the yellow line of co-dimension one. The sections \(\Gamma(M,P)\) are then defined as the sections of the fibration \(P\) intersecting \(M\). The original gauge group \(G\) - defined by the fibration \(P\) of the 5-dimensional manifold - is broken by the choice of a section \(s\in\Gamma(M,P)\) to \(SO(1,3)\), which is the symmetry group of general relativity. Indeed, the only gauge transformations that preserve the gauge theoretical metric tensor are those which act pseudo-orthogonally on the gauge tetrad fields, _i.e._
Footnote 2: This 5-dimensional manifold can be either flat β in which case it can be considered to be identical to the space over which the fibration is constructed β or it may be curved. The difference between these two cases is not relevant for this work, and therefore it will not be discussed any further.
\[\rho^{a}\to\Lambda^{a}{}_{b}\rho^{b},\qquad\Lambda\in SO(1,3)\subset G.\]
With the construction above we turned \(M\) into a pseudo-Riemannian manifold \((M,g)\) with Lorentzian signature. Now that we established the tetrad nature of part of the gauge field, we will use \(\rho^{a}\equiv e^{a}\) in order to adapt to the standard notation.
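A minimal numerical illustration of this construction, at a single spacetime point: the metric is assembled from the four gauge-tetrad 1-forms as \(g_{\mu\nu}=\eta_{ab}e^{a}{}_{\mu}e^{b}{}_{\nu}\), and one checks that a local Lorentz boost acting on the frame index leaves \(g\) unchanged, which is the residual SO(1,3) invariance described above. The tetrad components used below are arbitrary placeholder numbers.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])            # Minkowski metric on the frame indices a, b

rng = np.random.default_rng(0)
e = rng.normal(size=(4, 4)) + 2 * np.eye(4)      # e[a, mu]: an arbitrary invertible tetrad (placeholder)

def induced_metric(e):
    # g_{mu nu} = eta_{ab} e^a_mu e^b_nu
    return np.einsum("ab,am,bn->mn", eta, e, e)

# A Lorentz boost along the x direction with rapidity 0.3, acting on the frame index
chi = 0.3
boost = np.eye(4)
boost[0, 0] = boost[1, 1] = np.cosh(chi)
boost[0, 1] = boost[1, 0] = np.sinh(chi)

g = induced_metric(e)
g_boosted = induced_metric(np.einsum("ab,bm->am", boost, e))

print(np.allclose(g, g_boosted))                 # True: SO(1,3) rotations of the tetrad preserve g
print(np.allclose(g, g.T))                       # True: g is symmetric by construction
```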
We introduce the standard action functional of Yang-Mills theory,
\[S[\boldsymbol{\omega}]=\int_{M}\langle\Omega,\Omega\rangle=\int_{M}\Omega^{i} \wedge*\Omega^{j}G_{ij}=\int_{M}\mathrm{d}^{4}x\det(e)\,\Omega^{i}{}_{\mu\nu} \Omega^{j\mu\nu}G_{ij}\,, \tag{4}\]
where \(\langle\cdot,\cdot\rangle\) denotes the inner product on the group space and \(i=1,2,\cdots,N\) is the algebra index and \(N=\dim[\mathfrak{g}]\). It is evident that in general the theory (4) is Lorentz invariant and not \(G\)-invariant. This happens because the Hodge-star operator introduces in the action a non-trivial metric dependence which, as pointed out above, induces a symmetry breaking. 3 We will then give a non-degenerate inner product to \(\mathfrak{g}\) that is at least Lorentz invariant, \(G_{ij}\). The equations of motion are obtained by Hamilton's variational principle and read:
Footnote 3: One can add to the action (4) a term \(\propto\int_{M}\langle\Omega,*\Omega\rangle\), which is fully gauge invariant. Such a term is (_via_ the Hodge dual) endowed with the spacetime Levi-Civita tensor, and in this way resembles the MacDowell-Mansouri and Wilczek theories. However, in our rendering of the inner product this term is purely topological, meaning that it does _not_ contribute to the equations of motion, and thus does not in any way affect the theory at the classical level, analysed in this work. In particular, it cannot give the Hilbert-Einstein action as its low-energy limit.
\[\begin{split} G_{aj}\bigg{[}\frac{1}{\sqrt{-g}}\partial_{\gamma} \left(\sqrt{-g}\Omega^{j\delta\gamma}\right)+& c_{lm}^{\;\;j} \omega^{l}{}_{\gamma}\Omega^{m\delta\gamma}\bigg{]}&=G_{ij} \left[\Omega^{i\delta\nu}\Omega^{j}{}_{\mu\nu}e_{a}{}^{\mu}-\frac{e_{a}^{\; \delta}}{4}\Omega^{i}{}_{\mu\nu}\Omega^{j\mu\nu}\right]\bigg{|}_{\text{if }a\in\mathcal{S}_{e}}\,,\\ G_{aj}\bigg{[}\frac{1}{\sqrt{-g}}\partial_{\gamma}\left(\sqrt{-g} \Omega^{j\delta\gamma}\right)+& c_{lm}^{\;\;j}\omega^{l}{}_{\gamma }\Omega^{m\delta\gamma}\bigg{]}&=0\bigg{|}_{\text{if }a\notin\mathcal{S}_{e}}\,,\end{split} \tag{5}\]
where \(\mathcal{S}_{e}=\{0,1,2,3\}\) denotes the set of tetrad indices. Here we encounter the peculiarity of a geometrical Yang-Mills theory: the gauge fields that take on the role of the tetrad are not source-free in vacuum; rather, they are sourced by the geometric energy-momentum tensor. In the following we focus on studying pseudo-orthogonal gauge theories, _i.e._ \(\omega^{i}=\omega^{[AB]}\).
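To make the index structure of the action (4) concrete, the following sketch evaluates the Lagrangian density \(\det(e)\,\Omega^{i}{}_{\mu\nu}\Omega^{j\mu\nu}G_{ij}\) at one point. All numerical inputs (the tetrad, the field-strength components, and the algebra metric \(G_{ij}\), taken here as a generic symmetric matrix rather than the Killing form of a particular group) are placeholders chosen only to display the contractions.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

e = rng.normal(size=(4, 4)) + 2 * np.eye(4)            # tetrad e^a_mu (placeholder values)
g = np.einsum("ab,am,bn->mn", eta, e, e)                # induced metric g_{mu nu}
g_inv = np.linalg.inv(g)
det_e = np.sqrt(abs(np.linalg.det(g)))                  # det(e) = sqrt(-g) for Lorentzian signature

N = 10                                                   # dimension of the gauge algebra (placeholder)
G_ij = rng.normal(size=(N, N))
G_ij = 0.5 * (G_ij + G_ij.T)                             # a generic symmetric algebra metric

Omega = rng.normal(size=(N, 4, 4))
Omega = Omega - np.swapaxes(Omega, 1, 2)                 # Omega^i_{mu nu}, antisymmetric in mu, nu

# Omega^{j mu nu} = g^{mu rho} g^{nu sigma} Omega^j_{rho sigma}
Omega_up = np.einsum("mr,ns,jrs->jmn", g_inv, g_inv, Omega)

lagrangian_density = det_e * np.einsum("imn,jmn,ij->", Omega, Omega_up, G_ij)
print(lagrangian_density)
```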
## 3 De Sitter Yang-Mills theory
In this section we study the geometrical Yang-Mills theory for the de Sitter group \(SO(1,4)\). We show that the geometrical Yang-Mills action contains the Hilbert action in the presence of a cosmological constant. We advise the reader to go through Appendix A to become more familiar with the techniques we will be using. In the following we use lower-case Latin letters for the Lie algebra indices corresponding to the Lorentz generators, _i.e._\(M_{AB}|_{A,B=0,\ldots,3}\equiv M_{[ab]}\). The other four generators, \(M_{[a4]}\equiv\tilde{P}_{a}\), generate translations. Comparing with Eq. (10), we get the commutators in the Lie algebra basis with this new notation,
\[\begin{split}\left[M_{[ab]},M_{[cd]}\right]&=\eta_{ bc}M_{[ad]}+\eta_{ad}M_{[bc]}+\eta_{db}M_{[ca]}+\eta_{ac}M_{[db]},\\ \left[M_{[ab]},\tilde{P}_{c}\right]&=\eta_{bc} \tilde{P}_{a}-\eta_{ac}\tilde{P}_{b},\\ \left[\tilde{P}_{a},\tilde{P}_{c}\right]&=M_{ca}, \end{split} \tag{13}\]
which gives for the Killing metric,
\[\begin{split} G_{[ab][cd]}&=2(D-2)\left[\eta_{bc}\eta_{da}-\eta_{bd}\eta_{ca}\right],\\ G_{[ab][c4]}&=0,\\ G_{[a4][c4]}&=-2(D-2)\eta_{ac},\end{split} \tag{14}\]
where here \(D=5\) (compare with Eq. (12)).
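The block structure of Eq. (14), including the overall factor \(2(D-2)=6\) for \(D=5\), can be verified numerically. The sketch below is a minimal check added here for illustration: it builds the fundamental \(\mathfrak{so}(1,4)\) generators from the formula of Appendix A, extracts the structure constants, computes the Killing metric \(G_{ij}=c_{ik}^{\ \ l}c_{jl}^{\ \ k}\), and confirms that the mixed Lorentz-translation block vanishes while \(G_{[a4][c4]}=-6\,\eta_{ac}\).

```python
import numpy as np
from itertools import combinations

D = 5
eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0])            # signature (-,+,+,+,+)
pairs = list(combinations(range(D), 2))               # basis labels [AB] with A < B

def M(A, B):
    # (M_AB)^I_J = delta^I_A eta_BJ - delta^I_B eta_AJ
    out = np.zeros((D, D))
    out[A, :] += eta[B, :]
    out[B, :] -= eta[A, :]
    return out

gens = [M(A, B) for A, B in pairs]
basis = np.stack([g.ravel() for g in gens], axis=1)

# Structure constants from [M_i, M_j] = c_ij^k M_k (coefficients solved by least squares)
n = len(gens)
c = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        comm = gens[i] @ gens[j] - gens[j] @ gens[i]
        c[i, j] = np.linalg.lstsq(basis, comm.ravel(), rcond=None)[0]

K = np.einsum("ikl,jlk->ij", c, c)                    # Killing metric G_ij = c_ik^l c_jl^k

# Closed-form prediction: G_[AB][CD] = 2(D-2)(eta_BC eta_DA - eta_BD eta_CA)
pred = np.array([[2 * (D - 2) * (eta[B, C] * eta[Dd, A] - eta[B, Dd] * eta[C, A])
                  for (C, Dd) in pairs] for (A, B) in pairs])
print(np.allclose(K, pred))                           # True

lorentz = [k for k, (A, B) in enumerate(pairs) if B < 4]     # M_[ab]
transl  = [k for k, (A, B) in enumerate(pairs) if B == 4]    # P~_a = M_[a4]
print(np.allclose(K[np.ix_(lorentz, transl)], 0))                          # mixed block vanishes
print(np.allclose(K[np.ix_(transl, transl)], -2 * (D - 2) * eta[:4, :4]))  # G_[a4][c4] = -6 eta_ac
```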
We now consider the particular pseudo-orthogonal bundle for which the structure group is given by de Sitter group. As usual we introduce a connection,
\[\boldsymbol{\omega}=\frac{1}{2}\omega^{[AB]}\otimes M_{[AB]}=\frac{1}{2}\omega ^{[ab]}\otimes M_{[ab]}+e^{a}\otimes\tilde{P}_{a}, \tag{15}\]
which gives for the curvature (compare with Eq. (13)):
\[\begin{split}\boldsymbol{\Omega}&=\mathrm{d} \boldsymbol{\omega}+\frac{1}{2}\left[\boldsymbol{\omega},\boldsymbol{\omega} \right]\\ &=\frac{1}{2}\left[\mathrm{d}\omega^{[ab]}+\omega^{[a}_{\phantom{ a}c]}\wedge\omega^{[cb]}-e^{a}\wedge e^{b}\right]\otimes M_{ab}+\left[\mathrm{d}e^{a}+ \omega^{[a}_{\phantom{a}b]}\wedge e^{b}\right]\otimes\tilde{P}_{a}\\ &\equiv\frac{1}{2}\Omega^{[ab]}\otimes M_{[ab]}+T^{a}\otimes \tilde{P}_{a}\,.\end{split} \tag{16}\]
The Bianchi identities are given by,
\[\begin{split}\mathrm{d}_{\boldsymbol{\omega}}\boldsymbol{\Omega}& =\mathrm{d}\boldsymbol{\Omega}+[\boldsymbol{\omega},\boldsymbol{ \Omega}]\\ &=\frac{1}{2}\left[\mathrm{d}\Omega^{[AB]}+\omega^{[A}_{\phantom{ a}C]}\wedge\Omega^{[CB]}-\omega^{[B}_{\phantom{a}C]}\wedge\Omega^{[CA]}\right]=0.\end{split} \tag{17}\]
We will use the fields \(\left\{e^{a}\right\}_{a}\) as tetrad fields, in terms of which we can define the metric as in Section 2,
\[g=\eta_{ab}e^{a}\otimes e^{b}. \tag{11}\]
The other part of the connection 1-form is related to the Lorentz generators and corresponds to the covariant derivative on the tangent bundle of \(M\). Under the interpretation explained above, the curvature associated with the tetrad fields is given by the torsion on \(M\) of the covariant derivative inherited from \(\omega^{[ab]}\). We introduce the notation,
\[R^{[ab]}=\mathrm{d}\omega^{[ab]}+\omega^{[a}_{\phantom{[a}c]}\wedge\omega^{[ cb]}, \tag{12}\]
so that we can write,
\[\Omega^{[ab]}=R^{[ab]}-\tilde{e}^{a}\wedge\tilde{e}^{b}=R^{[ab]}-m^{2}e^{a} \wedge e^{b}. \tag{13}\]
It is well known that requiring a metric-compatible connection (such as \(\omega_{[ab]}\)) to be torsionless automatically fixes it to be the Levi-Civita connection. Indeed, we see that for the configurations for which \(T^{a}=0\), we have,
\[\overset{\circ}{R^{ab}}=R^{[ab]}, \tag{14}\]
where \(\overset{\circ}{R^{ab}}\) is the Riemann curvature tensor. This equivalence is extremely important now that we will build the action.
Following the recipe of Section 2 we provide the action for the geometrical Yang-Mills theory for the de Sitter group,
\[S[\omega^{[ab]},e^{a}] =\alpha_{0}\int_{M}\!\frac{1}{4}\Omega^{[ab]}\wedge*\Omega^{[cd]}G _{[ab][cd]}+T^{a}\wedge*T^{b}G_{[a4][b4]}\] \[=\alpha\!\int_{M}\!\frac{1}{2}\Omega^{[a}_{\phantom{[a}c]}\wedge* \Omega^{[c}_{\phantom{[a}a]}-m^{2}T^{a}\wedge*T_{a}\] \[=\alpha\!\int_{M}\!\frac{1}{2}R^{[a}_{\phantom{[a}c]}\wedge*R^{[c}_ {\phantom{[a]}a]}\!-\!m^{2}T^{a}\wedge*T_{a}\!-\!m^{2}e^{a}\wedge e^{c}\wedge* R_{ca}\!+\!\frac{m^{4}}{2}e^{a}\wedge e^{c}\wedge*(e_{c}\wedge e_{a})\] \[=\alpha\int_{M}\sqrt{-g}\,\mathrm{d}x^{4}\bigg{[}-\frac{1}{4}R^{ [ac]\mu\nu}R_{[ac]\mu\nu}+\frac{m^{2}}{2}T^{[a]\mu\nu}T_{[a]\mu\nu}+m^{2}(R-2 \Lambda)\bigg{]}, \tag{15}\]
where \(\alpha=2(D-2)\alpha_{0}=6\alpha_{0}\) as in Eq. (10), \(\alpha_{0}\) is the (inverse) gauge coupling constant of the original gauge theory and \(\Lambda=\frac{n(n-1)}{4}m^{2}=3m^{2}\) is the _(gauge theoretical) cosmological constant_ coming from the last term in the above equations. Recall that we introduced the mass parameter \(m\) in accordance with dimensional analysis. For the theory (15) to reduce in the low-energy limit to general relativity, from the last line of (15) it follows that \(\alpha m^{2}\rightarrow\frac{1}{16\pi G}=\frac{M_{\mathrm{Pl}}^{2}}{2}\) and \(\Lambda=3m^{2}=\frac{3}{16\pi G\alpha}=\frac{3M_{\mathrm{Pl}}^{2}}{2\alpha}\), such that \(\alpha\) can be used to tune the cosmological constant. In particular, when \(\alpha\gg 1\), 4
the geometrical cosmological constant is much smaller than the Planck scale. While the action (3.10) still exhibits all physical degrees of freedom, its importance is in its torsionless limit,
\[\begin{split} S[\omega^{[ab]},e^{a}]&=\alpha\int_{M} \frac{1}{2}\overset{\circ}{R}\overset{[a}{{}_{c]}}\wedge\ast\overset{\circ}{R} \overset{[c}{{}_{a]}}-m^{2}e^{a}\wedge e^{c}\wedge\ast\overset{\circ}{R}_{ca} +\frac{m^{4}}{4}e^{a}\wedge e^{c}\wedge\ast(e_{c}\wedge e_{a})\\ &=\alpha\!\int_{M}\sqrt{-g}\,\mathrm{d}x^{4}\left[-\frac{1}{4} \overset{\circ}{R}\overset{[ac]\mu\nu}{R}\overset{\circ}{{}_{[ac]\mu\nu}}\!+ \!m^{2}(\overset{\circ}{R}\!-\!2\Lambda)\right],\end{split} \tag{3.11}\]
which gives, as mentioned above, the Einstein-Hilbert action supplemented by a cosmological constant and a Riemann squared term. The latter is not multiplied by \(m^{2}\), and this mass scale needs to be high enough. 5 This means that the effect of this interaction relative to the Hilbert action is suppressed as \(1/m^{2}\). The equations of motion follow from Eq. (2.5) and read,
Footnote 5: The mass \(m\) ought to be higher than the energy scale at which general relativity is well tested.
\[\begin{split}& G_{[ab][cd]}\!\left[\frac{1}{2\sqrt{-g}}\partial_{ \gamma}\left(\sqrt{-g}\Omega^{[cd]\delta\gamma}\right)\!+\!\frac{1}{4}c_{[ef][ lm]}\overset{[cd]}{\omega}\overset{[ef]}{\gamma}\Omega^{[lm]\delta\gamma}\!+\!m^{2}c_{[4f][4 m]}\overset{[cd]}{e}^{f}{}_{\gamma}T^{m\delta\gamma}\right]\!=0,\\ &\frac{1}{\sqrt{-g}}\partial_{\gamma}\left(\sqrt{-g}T_{a}^{\ \ \delta \gamma}\right)\!+\!\omega_{[an]\gamma}T^{n\delta\gamma}\!+\!\mathrm{Ric}_{a}^{ \ \delta}\!-\!\frac{e_{a}^{\ \ \delta}}{2}\left(R\!-\!2\Lambda\right)=\frac{1}{2m^{2}}\left(\Theta_{\text{ lorentz}}\right)_{a}^{\ \ \delta}\!+\!\left(\Theta_{\text{torsion}}\right)_{a}^{\ \ \delta},\end{split} \tag{3.12}\]
where we identified,
\[\begin{split}\mathrm{Ric}_{a}^{\ \ \delta}&=R^{[cd]\delta\nu} \eta_{ac}e_{d\nu},\\ &\left(\Theta_{\text{lorentz}}\right)_{a}^{\ \ \delta}&=e_{a}^{\ \mu}R^{[cd]\delta\nu}R_{[cd]\mu\nu}-\frac{1}{4}e_{a}^{\ \ \delta}R^{[cd]\mu\nu}R_{[cd]\mu\nu},\\ &\left(\Theta_{\text{torsion}}\right)_{a}^{\ \ \delta}&=e_{a}^{\ \mu}T^{d\delta\nu}T_{d\mu\nu}-\frac{1}{4}e_{a}^{\ \ \delta}T^{d\mu\nu}T_{d\mu\nu}.\end{split} \tag{3.13}\]
Once again, it is interesting to study torsionless solutions to the equations of motion. We see that they correspond to Einstein's field equations supplemented by a geometrical energy momentum tensor and the corresponding equations for the Lorentz connection, namely:
\[\begin{split}\overset{\circ}{\mathrm{Ric}}_{a}^{\ \ \delta}&-\frac{e_{a}^{\ \ \delta}}{2}\left(\overset{\circ}{R}-2\Lambda\right)&=\,\frac{1}{2m^ {2}}\left(\Theta_{\text{lorentz}}\right)_{a}^{\ \ \delta},\\ & G_{[ab][cd]}\!\left[\frac{1}{2\sqrt{-g}}\partial_{\gamma}\left( \sqrt{-g}\Omega^{[cd]\delta\gamma}\right)+\frac{1}{4}c_{[ef][lm]}\overset{[cd]}{ \omega}\overset{[ef]}{\gamma}\Omega^{[lm]\delta\gamma}\right]&=0. \end{split} \tag{3.14}\]
This result is somewhat surprising. Indeed, we obtained Einstein's equations and a cosmological constant from the standard Yang-Mills action. In other words, we derived the equations of the gravitational field from a theory more similar to QCD or the electroweak interaction, and in general to Standard Model physics. Moreover, notice that the difference between proper GR and Eq. (3.14) is a term which is
of second order in the Riemann curvature tensor but is also suppressed by an inverse Planck mass squared. The contribution coming from a non-vanishing right-hand-side of the vacuum Einstein's equation is relevant only when the curvature is of the same order of magnitude as the Planck mass. This is the situation one usually finds close to singularities of the GR solutions. Adding matter to the theory will result in a contribution on the right-hand-side of Eq. (20). For the tetrad equation one would find the energy-momentum tensor of the matter field, since it would appear from the Hodge-star variation. For the Lorentz connection one would get the contribution coming from the gauge current (as in standard Yang-Mills theory) which would be represented by the angular momentum of the matter fields (since the gauge symmetry group is given by local Lorentz transformations).
An important aspect of this theory is the appearance of a cosmological constant. This constant is positive and proportional to the Planck mass squared, more precisely \(\Lambda\sim m^{2}\sim M_{\rm Pl}^{2}/\alpha\), which is also the curvature scale at which the gravitational energy from curvature becomes dynamically important. The electroweak transition changes the vacuum energy density by an amount \(\Delta\rho\sim E_{\rm EW}^{4}\), which contributes to the cosmological constant as \(\Delta\Lambda_{EW}\sim E_{\rm EW}^{4}/M_{\rm Pl}^{2}\). This then provides an upper bound on \(m^{2}\sim M_{\rm Pl}^{2}/\alpha<E_{\rm EW}^{4}/M_{\rm Pl}^{2}\), from which we conclude \(\alpha=1/(4g^{2})>(M_{\rm Pl}/E_{\rm EW})^{4}\sim 10^{64}\), or equivalently \(g<10^{-32}\), a tiny gauge coupling constant. 6 However, these relations hold classically, and they will change when quantum (loop) contributions to the cosmological constant are included. To summarize, the scale \(m\) is the scale above which the geometric theory of gravity behaves as a gauge theory. Current observations suggest that the scale \(m\) is of the order of the electroweak scale (or the grand unified scale, if grand unification was realised). However, due to the smallness of the gravitational gauge coupling constant, this gravitational theory does not significantly affect particle physics experiments, and therefore accelerator physics experiments cannot yet constrain the theory.
Footnote 6: If \(m\sim E_{GUT}\sim 10^{16}\) GeV were of the order of the grand unified scale, the constraint on \(\alpha\) and \(g\) would be much milder, \(g\sim(E_{\rm GUT}/M_{\rm Pl})^{2}\sim 10^{-6}\), which is of the order of the electron yukawa.
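The order-of-magnitude estimates quoted above and in footnote 6 are easy to reproduce. The sketch below takes the text's relation \(m^{2}\sim M_{\rm Pl}^{2}/\alpha\lesssim E^{4}/M_{\rm Pl}^{2}\) at a transition scale \(E\) (electroweak or grand unified), dropping all order-one factors; the numerical values of the reduced Planck mass and the two scales are the usual rough numbers, inserted here only for illustration.

```python
import numpy as np

M_pl = 2.4e18                                   # reduced Planck mass in GeV (rough value)

# Require m^2 ~ M_pl^2 / alpha to stay below E^4 / M_pl^2, the vacuum-energy
# shift of a phase transition at scale E; O(1) factors are dropped throughout.
for label, E in (("electroweak", 246.0), ("grand unified", 1.0e16)):
    alpha_min = (M_pl / E) ** 4
    g_max = 1.0 / (2.0 * np.sqrt(alpha_min))     # from alpha = 1/(4 g^2)
    print(f"{label:>13} scale: alpha >~ {alpha_min:.0e},  g <~ {g_max:.0e}")

# electroweak   scale: alpha >~ 9e+63, g <~ 5e-33   (cf. ~10^64 and g < 10^-32 in the text)
# grand unified scale: alpha >~ 3e+09, g <~ 9e-06   (cf. g ~ 10^-6 in footnote 6)
```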
Consider again the complete de Sitter geometrical action in Eq. (21). We would like to give an intuitive scheme that one could follow in order to dynamically constrain the second term (torsion squared) to vanish. Let \(A=A_{\mu}{\rm d}x^{\mu}=A_{a}e^{a}\) be a 1-form vector field on the spacetime \(M\), not necessarily a gauge boson. Here we exploited the interpretation of part of the connection fields as tetrad fields to express the components of the \(A\) field in this orthonormal basis. Recalling that the tetrad generators correspond to \(M_{[a4]}\), one can see from Eq. (19) that the covariant derivative acting on the tetrad fields is given by,
\[{\rm d}_{\mathbf{\omega}}e^{a}={\rm d}e^{a}+\omega^{[a}_{\phantom{a}b]}e^{b}=T^{a}, \tag{22}\]
so that, treating the covector field \(A\) as a Lorentz multiplet 7, we find:
Footnote 7: This means that we are applying the principle of general covariance passing from rigid to local Lorentz transformation acting on the covector. We need Lorentz and not the entire general linear group, as would be the case for general coordinate invariance, since we can always cover a Lorentzian manifold with local orthonormal frames for the tangent bundle.
\[\mathrm{d}_{\boldsymbol{\omega}}A=\mathrm{d}A_{a}\wedge e^{a}+A_{a}T^{a}. \tag{3.16}\]
This clearly provides a gauge-equivariant expression, and it differs from the standard exterior derivative only if the torsion is non-vanishing (as is the case when one considers the general covariance principle applied to electromagnetism). The easiest term to include in the action for such a field would be the standard kinetic term,
\[\int_{M}F\wedge*F\equiv \int_{M}\mathrm{d}_{\boldsymbol{\omega}}A\wedge*\mathrm{d}_{ \boldsymbol{\omega}}A \tag{3.17}\] \[= \int_{M}\left[\mathrm{d}A_{a}\wedge e^{a}\wedge*\left(\mathrm{d} A_{b}\wedge e^{b}\right)+2A_{a}T^{a}\wedge*\left(\mathrm{d}A_{b}\wedge e^{b} \right)+A_{a}A_{b}T^{a}\wedge*T^{b}\right].\]
We now see that, if one introduces a suitable potential for the \(A\)-field, there is the possibility of a condensation of the field such as to give \(\langle A_{a}A_{b}\rangle=-\frac{1}{2}m^{2}\eta_{ab}\). The semiclassical limit would then correspond to the torsionless action in Eq. (3.11), turning the de Sitter gauge theory into an Einstein-Cartan theory [23; 24]. Since the only dynamical massless vector field in the Universe is the photon, this gravitational vector field should be massive enough not to be detectable by modern experiments. Furthermore, since there is no dependence in the action on the derivatives of the tetrad fields, the torsion field becomes non-dynamical, and once we fix the initial conditions to give a vanishing torsion this will remain true throughout the evolution.
In conclusion, using the de Sitter group as the gauge group of a geometrical Yang-Mills theory we are able to obtain Einstein's theory of gravity as the low-energy torsionless limit of our theory. The Yang-Mills formulation, and in particular the structure constants of the de Sitter algebra, give the Hilbert action supplemented with a Riemann squared term, which is suppressed by a Planck mass squared, a torsion squared term, which could possibly be removed dynamically as we have shown in Eq. (3.17), and a cosmological constant (as well as the usual mass parameter expected in all geometrical Yang-Mills theories). In Section 4 we perform a Hamiltonian analysis suitable for geometrical Yang-Mills theories. In particular we consider the constraints arising in phase space and provide their analysis.
It is worth noticing that, upon replacing the de Sitter group SO(1,4) with the anti-de Sitter group SO(2,3), one would find the same results we have found for de Sitter, since the algebras of the two groups are very similar; in particular, the theory would still contain the Einstein-Hilbert action. The difference lies in the cosmological constant, which would be negative in the case of the AdS gauge theory. This comes from a relative sign between the structure constants of \(\mathfrak{so}(1,4)\) and \(\mathfrak{so}(2,3)\).
## 4 Hamiltonian analysis
In order to specify a Hamiltonian [25] for the theories at hand, we need to break covariance with respect to the coordinates by specifying a time (read: evolution) direction. As usual, the Yang-Mills Lagrangian is _singular_, _i.e._ its Hessian is degenerate: \(\det{(T^{ij})}=\det{(\delta^{2}L/\delta\dot{\omega}_{i}\delta\dot{\omega}_{j})}=0\). This implies that constraints will arise in phase space. In the following, non-tetrad indices will be denoted by \([AB]\), while for tetrad fields we will use \([\cdot a]\). The extended Hamiltonian is given by,
\[H_{T} =\!\!\int_{\mathbb{R}^{3}}d^{3}x\bigg{\{} -\frac{1}{4\alpha\sqrt{-g}}\Pi_{[CD]}^{\ \ \ l}\Pi^{[CD]k}\frac{g_{lk}}{g^{00}}\!+\!\frac{1}{2 \alpha\sqrt{-g}}\Pi_{[CD]}^{\ \ \ l}P^{[CD]k}\frac{g_{lk}}{g^{00}}\] \[-\alpha\sqrt{-g}\left(g^{k0}g^{ij}\Omega_{[AB]ki}\right)\left(g^{ s0}g^{tl}\Omega_{\ st}^{[AB]}\right)\frac{g_{lj}}{g^{00}}\ \ +\!\frac{1}{2}\alpha\sqrt{-g}g^{ij}g^{kl}\Omega_{[AB]jk}\Omega_{\ i \ l}^{[AB]}\] \[+u^{[CD]}\phi_{[CD]}\bigg{\}}, \tag{25}\]
where the last term stands for all the Lagrange multipliers and constraints that arise during the analysis of the primary constraints \(\phi_{[CD]}=\Pi_{[CD]}^{\ \ 0}\approx 0\), where \(\approx\) stands for weak (_on-shell_) equality. These are the standard primary constraints of Yang-Mills theories. The secondary constraints are given by the usual _generalized_ Gauss' law. For non-tetrad fields we have,
\[0 \approx-\frac{\delta H_{T}}{\delta\omega_{\ \ 0}^{[CD]}}=D_{i}\Pi_{[ CD]}^{\ \ i}\equiv \tag{26}\] \[\equiv\left[\partial_{i}\Pi_{[CD]}^{\ \ i}+\Pi_{[CA]}^{\ \ i} \omega_{[D\ i}^{\ \ \ A}-\omega_{\ \ C]i}^{[A}\Pi_{[AD]}^{\ i}\right],\]
while for tetrad fields we find,
\[0\approx \frac{\delta H_{T}}{\delta e_{\ 0}^{\ a}}=\bigg{\{} -D_{i}\Pi_{(\cdot a)}^{\ \ i}+\left[\Omega_{\ ki}^{(CD)}\Pi_{(CD)}^{\ \ i}+2\alpha\sqrt{-g}\,\Omega_{(AB)kl}\left(g^{s0}g^{tl}\Omega_{\ st}^{(AB)} \right)\bigg{]}e_{a}^{\ \ k}\] \[-\frac{1}{2}\alpha\sqrt{-g}\bigg{[}\left(g^{i0}e_{a}^{\ \ j}+g^{j0}e_{a}^{\ \ i}\right)g^{kl}\left(g^{k0}e_{a}^{\ \ l}+g^{l0}e_{a}^{\ \ k}\right)g^{ij}\bigg{]} \Omega_{[AB]jk}\Omega_{\ i\ l}^{[AB]}\] \[+e_{a}^{\ 0}\bigg{[} -\frac{1}{4\alpha\sqrt{-g}}\Pi_{[CD]}^{\ \ l}\Pi^{[CD]j}\frac{g_{lj}}{g^{00}}\!-\! \alpha\sqrt{-g}\left(g^{k0}g^{ij}\Omega_{[AB]ki}\right)\left(g^{s0}g^{tl} \Omega_{\ st}^{[AB]}\right)\frac{g_{lj}}{g^{00}}\] \[+\frac{1}{2}\alpha\sqrt{-g}g^{ij}g^{kl}\Omega_{[AB]jk}\Omega_{\ \ il}^{[AB]}-\Omega_{\ ki}^{[CD]}\Pi_{[CD]}^{\ i}\frac{g^{k0}}{g^{00}}\bigg{]} \bigg{\}}. \tag{27}\]
These constraints do not generate any new ones, yet they impose restrictions on the Lagrange multipliers (\(u_{[CD]}\)) of the theory in the following sense:
* The equation for \(u^{(2)[ab]}\) fixes the functions \(u^{(2)[\cdot a]}\);
* The equation for \(u^{(2)[\cdot a]}\) fixes the functions \(u^{(1)[\cdot b]}\);
which shows that our constraints are either _first_ or _second_ class. The only arbitrary Lagrange multipliers we are left with are the ones corresponding to the Lorentz group. Once again we find that the gauge symmetry of the theory, and in particular of its phase space, is given in general by the Lorentz subgroup.
Now that we have completed the constraint analysis of the theory, we can compute the equations of motion using the standard Poisson brackets, namely,
\[\dot{\omega}^{[CD]}_{\ \mu}=\left\{\omega^{[CD]}_{\ \mu},H_{T}\right\},\qquad \dot{\Pi}^{\ \mu}_{[CD]}=\left\{\Pi^{\ \mu}_{[CD]},H_{T}\right\}. \tag{43}\]
One can show that these equations are equivalent to Eqs. (5), in particular they are covariant.
From the Hamiltonian one can already see a potential problem of our theory. Notice that we can rewrite the first two terms in Eq. (42) as,
\[\int_{\mathbb{R}^{3}}\frac{1}{2\alpha\sqrt{-g}}\frac{g_{lk}}{g^{00}}\Big{[}- \Big{(}\Pi^{\ l}_{[CD]}\!-\!P^{\ l}_{[CD]}\Big{)}\Big{(}\Pi^{[CD]k}\!-\!P^{[CD] k}\Big{)}+P^{\ l}_{[CD]}P^{[CD]k}\Big{]}\,, \tag{44}\]
which is the kinetic energy (first term) and the leftover from completing the square (second term). Since for pseudo-orthogonal groups the metric we use in Lie algebra space is often an indefinite inner product, there are _kinetic instabilities_ in the theory, _i.e._ fields for which the kinetic energy comes with a negative sign in the total Hamiltonian. Notice that there is no indefiniteness coming from the inner product in the spacetime indices. This is due to the presence of the primary constraints that force the timelike components of the momenta to vanish. In the constraint analysis given above we did not find any constraint able to render the kinetic instabilities unphysical. However, the presence of second-class constraints suggests that the symplectic structure of phase space is not canonical, so there remains hope that, once the physical phase space of the theory is properly identified, this kind of issue will not arise. Another possible solution is to consider the Born-Infeld theory of electromagnetism [26], replacing the action in Eq. (4) with:
\[S=2\alpha\beta^{2}\int_{M}\mathrm{d}^{4}x\det(e)\left[\sqrt{1\!+\!\frac{1}{ \beta^{2}}\Omega^{i}_{\ \mu\nu}\Omega^{j\mu\nu}G_{ij}}\!-\!1\right]. \tag{45}\]
In this way one can covariantly introduce higher powers of the momenta into the Hamiltonian in order to make the kinetic energy bounded from below, thus stabilizing the theory, at least at the classical level. This theory harbors higher order interactions, and therefore we leave the Hamiltonian analysis for a future work.
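To make the origin of the kinetic instability explicit, the short sketch below (an illustration added here, not part of the original analysis) rebuilds the \(\mathfrak{so}(1,4)\) Killing metric from the closed-form expression of Eq. (14) and prints its eigenvalues, which come with both signs; it also checks numerically that the Born-Infeld density above reduces to the Yang-Mills one for small field strength.

```python
import numpy as np
from itertools import combinations

D = 5
eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0])
pairs = list(combinations(range(D), 2))

# Killing metric in closed form: G_[AB][CD] = 2(D-2)(eta_BC eta_DA - eta_BD eta_CA)
G = np.array([[2 * (D - 2) * (eta[B, C] * eta[Dd, A] - eta[B, Dd] * eta[C, A])
               for (C, Dd) in pairs] for (A, B) in pairs])

print(np.linalg.eigvalsh(G))     # six entries -6 and four entries +6: the inner product is
                                 # indefinite, so some field directions get negative kinetic energy

# Small-field-strength limit of the Born-Infeld density: 2 beta^2 (sqrt(1 + x/beta^2) - 1) -> x
beta2, x = 1.0, 1e-8
print(2 * beta2 * (np.sqrt(1 + x / beta2) - 1) / x)   # ~ 1, recovering the Yang-Mills form
```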
## 5 Conclusion and outlook
The main goal of this paper was to reformulate general relativity in the context of Yang-Mills theories. Indeed, it is well-known that Einstein's theory suffers from singularity problems [5] and it was proved that the theory is not renormalizable (see [1]
for a review). Knowing that gravity can be formulated as an effective field theory [8], we showed that it is possible to formulate general relativity as a constrained version of some more general theory in its low energy limit.
Throughout this paper we have shown the intimate relation between pseudo-orthogonal Yang-Mills gauge fields and gravitational theories. In order to introduce gauge-theoretical cotetrad fields, we have developed a new class of theories, _geometrical Yang-Mills theories_ [6]. Defining the metric through these particular gauge fields requires the introduction of a mass parameter which takes on the role of a Planck mass. The geometrical action one obtains is invariant only with respect to the Lorentz subgroup. The equations of motion (5) are the standard curved-spacetime generalization of the well-known Yang-Mills equations. In the case of the tetrad fields, one sees that these fields are sourced in the vacuum by an energy-momentum tensor related to the field strength of our gauge connection.
The main result of the paper is the de Sitter gauge theory we developed in Section 3. The torsionless low energy limit of the theory coincides with general relativity with the appearance of a positive cosmological constant, whose size is controlled by the Planck scale and the gauge coupling constant. In the weak coupling limit this geometrical cosmological constant can be much smaller than the Planck scale, and can be (in part) cancelled by the contributions from quantum fields, to yield the observed cosmological constant. These results allow us to consider Einstein's theory of gravity as part of a more general Lorentz invariant de Sitter Yang-Mills theory. We argue that the theory might be renormalizable since the vertex structure of the theory is essentially the same as ordinary flat-space Yang-Mills theories. However, since we studied arbitrary curved spacetimes, there will be new counterterms associated with the non-trivial geometry of spacetime and the issue of renormalizability of the full quantum theory still requires a careful investigation. In the context of quantum field theory in curved spacetime (with a classical gravitational field), it is known that higher-order geometric scalars are induced by quantum corrections. These counterterms can be added to the theory using our geometrical Yang-Mills formalism, avoiding Ostrogradsky instabilities, as we have shown in Section 4. Indeed, the phase space we identified is not the same as for Palatini gravity [27] since the canonical momentum of the metric is not given by the spin connection. From the analysis in Section 4 it follows that the number of phase-space degrees of freedom in a gravitational theory which originates from a gauge theory (in its torsionless limit) is twice as large as that in the Palatini formulation. These results inspire us to say that geometrical Yang-Mills theories could be a useful formulation of gravitational theories in general.
We provided the first steps towards canonical quantization \(([,]=i\hbar\{,\})\), building the Hamiltonian and studying the phase space of the theory. Since pseudo-orthogonal groups are non-compact Lie groups, the Killing form we use in Lie algebra space gives rise to an indefinite inner product. This means that some fields will pick up the wrong sign in the kinetic energy, and they could generate kinetic instabilities in the theory. However, as one can see from the kinetic energy of the theory, there is no sign of Ostrogradsky instabilities in the theory at hand, as we originally expected, since in Yang-Mills theories the Riemann squared term is part of the gauge theory. As is usually the case with gauge theories, we find both primary and secondary constraints. There are both first- and second-class constraints, and their self-consistency conditions reduce the gauge redundancy in phase space to the Lorentz subgroup of the original gauge group. There are no new constraints arising in phase space, and thus the Hamiltonian we provide is complete. As we argue in Section 2, the projection of the gauge theory onto the spacetime manifold \(M\) induces the metric, thus breaking the gauge symmetry to its Lorentz subgroup, the symmetry group of general relativity (see figure 1).
In conclusion, we provided a new formalism from which interesting results can be obtained in the context of gravitational theories. We have shown another way of deriving general relativity from a geometrical gauge theory, and we have provided a consistent Hamiltonian framework suitable for any Yang-Mills theory in a dynamical curved spacetime.
## Acknowledgements
The authors thank Enis Belgacem and Antonino Marciano for numerous discussions and suggestions that led to significant improvements of the manuscript. This work is part of the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW) -- NWO project number 24.001.027.
## Appendix A Pseudo-orthogonal groups and bundles
### Pseudo-orthogonal groups
In this section we define pseudo-orthogonal groups and study their properties. Consider a vector space \(\mathbb{R}^{D}\) equipped with the following metric (in the canonical cartesian basis),
\[\eta=\text{diag}(\underbrace{-1,..,-1}_{S},\underbrace{+1,...,+1}_{T}), \tag{10}\]
with \(S+T=D\). This inner product turns \(\mathbb{R}^{D}\) into the (pseudo-)normed vector space \(\mathbb{R}^{S,T}\) (for \(S=1\) and \(T=3\) we get Minkowski spacetime). We define the (fundamental representation of the) pseudo-orthogonal group \(O(S,T)\) as the set of transformations on \(\mathbb{R}^{S,T}\) that leave the inner product \(\eta(X,Y)\) invariant, \(X,Y\in\mathbb{R}^{S,T}\), _i.e._\(\eta(\Lambda\cdot X,\Lambda\cdot Y)=\eta(X,Y)\), \(\Lambda\in O(S,T)\). It can be proved that pseudo-orthogonal groups are Lie groups. It seems then natural to look at transformations infinitesimally
close to the identity in order to identify their generators, _i.e._\(\Lambda^{a}_{\phantom{a}b}=\delta^{a}_{\phantom{a}b}+\epsilon M^{a}_{\phantom{a}b}+O( \epsilon^{2})\). We get,
\[\eta(X,Y) =\eta_{AB}X^{A}Y^{B}\rightarrow\eta_{AB}X^{A}Y^{B} \tag{10}\] \[+\epsilon\left[\eta_{AB}M^{A}_{\phantom{A}C}X^{C}Y^{B}+\eta_{AB}X ^{A}M^{B}_{\phantom{A}C}Y^{C}\right]+O(\epsilon^{2})\] \[=\eta_{AB}X^{A}Y^{B}+\epsilon X^{C}Y^{B}\left[M_{BC}+M_{CB}\right]+ O(\epsilon^{2}),\] \[\Rightarrow M_{BC}=-M_{CB}\,.\]
Here and throughout this appendix capital latin indices run from \(-S+1,...,0,...,T\). The previous result shows that the generators of the pseudo-orthogonal group \(O(S,T)\) are given by \(D\times D\) antisymmetric matrices (when one index is lowered as in Eq. (10)). There are then \(D(D-1)/2\) linearly independent generators which are given by,
\[\left(M_{AB}\right)^{I}_{\phantom{I}J}=\delta^{I}_{\phantom{I}A}\eta_{BJ}- \delta^{I}_{\phantom{I}B}\eta_{AJ}. \tag{11}\]
Notice that \(M_{AB}=-M_{BA}\) so that from now on we write \(M_{[AB]}\) for the pseudo-orthogonal generators. In the following we consider only the proper-orthochronous pseudo-orthogonal group (_i.e._ the part of \(O(S,T)\) which is connected to the identity), so that with the exponential map we can recover the whole group.
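As a quick numerical confirmation of the defining property, the sketch below builds a random algebra element from the generators of Eq. (A.3), exponentiates it, and checks that the resulting group element preserves \(\eta\); the choice of the Lorentz signature \((S,T)=(1,3)\) is only an example, and any split works.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import expm

S, T = 1, 3
D = S + T
eta = np.diag([-1.0] * S + [1.0] * T)

def M(A, B):
    # (M_AB)^I_J = delta^I_A eta_BJ - delta^I_B eta_AJ
    out = np.zeros((D, D))
    out[A, :] += eta[B, :]
    out[B, :] -= eta[A, :]
    return out

rng = np.random.default_rng(0)
X = sum(rng.normal() * M(A, B) for A, B in combinations(range(D), 2))   # random Lie algebra element
Lam = expm(X)                                                           # group element via the exponential map

print(np.allclose(Lam.T @ eta @ Lam, eta))    # True: Lam^T eta Lam = eta, so Lam lies in O(S,T)
```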
Now we are ready to study the commutators between the elements of the pseudo-orthogonal Lie algebra. We compute,
\[\left(\left[M_{[AB]},M_{[CD]}\right]\right)^{I}_{\phantom{I}K} =\left(M_{[AB]}\right)^{I}_{\phantom{I}J}\left(M_{[CD]}\right)^{J }_{\phantom{I}K}-\left(M_{[CD]}\right)^{I}_{\phantom{I}J}\left(M_{[AB]} \right)^{J}_{\phantom{I}K} \tag{12}\] \[=\left(M_{[AD]}\right)^{I}_{\phantom{I}K}\eta_{BC}-\left(M_{[AC] }\right)^{I}_{\phantom{I}K}\eta_{BD}-\left(M_{[BD]}\right)^{I}_{\phantom{I}K} \eta_{AC}+\left(M_{[BC]}\right)^{I}_{\phantom{I}K}\eta_{AD}\] \[= \left[\Delta^{[EF]}_{\phantom{[AB]}[AD]}\eta_{BC}-\Delta^{[EF]}_{ \phantom{[EF]}[AC]}\eta_{BD}-\Delta^{[EF]}_{\phantom{[EF]}[BD]}\eta_{AC}+ \Delta^{[EF]}_{\phantom{[EF]}[BC]}\eta_{AD}\right]\left(M_{EF}\right)^{I}_{ \phantom{I}K}\] \[\equiv c_{[AB][CD]}^{\phantom{[EF]}\left(M_{[EF]}\right)^{I}_{\phantom{I }K},\]
where we introduced the identity in antisymmetric \(\binom{0}{2}\) tensor space, _i.e._
\[\Delta^{[AB]}_{\phantom{[AB]}[CD]}=\frac{1}{2}\left(\delta^{A}_{\phantom{A}C} \delta^{B}_{\phantom{B}D}-\delta^{A}_{\phantom{A}D}\delta^{B}_{\phantom{B}C} \right). \tag{13}\]
Having identified the structure constants of the pseudo-orthogonal algebra we can now compute the Killing metric for this Lie algebra,
\[G_{[AB][CD]}= c_{[AB][LM]}^{\phantom{[EF]}[EF]}c_{[CD][EF]}^{\phantom{[LM]}[LM]} \tag{14}\] \[= \left[\eta_{BL}\Delta^{[EF]}_{\phantom{[EF]}[AM]}+\eta_{AM} \Delta^{[EF]}_{\phantom{[EF]}[BL]}+\eta_{BM}\Delta^{[EF]}_{\phantom{[EF]}[LA]}+ \eta_{AL}\Delta^{[EF]}_{\phantom{[EF]}[MB]}\right]\] \[\times\left[\eta_{DE}\Delta^{[LM]}_{\phantom{[LM]}[CF]}+\eta_{CF} \Delta^{[LM]}_{\phantom{[LM]}[DE]}+\eta_{DF}\Delta^{[LM]}_{\phantom{[LM]}[EC]}+ \eta_{CE}\Delta^{[LM]}_{\phantom{[LM]}[FD]}\right]\] \[= \,2(D-2)\left[\eta_{BC}\eta_{DA}-\eta_{BD}\eta_{CA}\right]\] \[\equiv \,\alpha\left[\eta_{BC}\eta_{DA}-\eta_{BD}\eta_{CA}\right]\,.\]
### Pseudo-orthogonal bundles
Throughout this section we fix a manifold \(M\) and a principal \(SO(S,T)\)-bundle \(P\to M\). We introduce a connection 1-form \(\boldsymbol{\omega}\) on \(P\) and we expand it in the basis of \(\mathfrak{so}(S,T)\) in Eq. (100).
\[\boldsymbol{\omega}=\frac{1}{2}\omega^{[AB]}\otimes M_{[AB]}, \tag{101}\]
where the \(1/2\) in front is necessary to avoid overcounting. We consider the adjoint bundle \(\mathrm{Ad}(P)\). In particular, we consider the commutator between twisted differential forms (\(\boldsymbol{A}=\frac{1}{2}A^{[AB]}\otimes M_{AB}\), \(A^{[AB]}\in\Omega^{k}(P)\)). We find,
\[\begin{split}[\boldsymbol{A},\boldsymbol{B}]=&\frac{ 1}{4}A^{[AB]}\wedge B^{[CD]}\otimes\left[M_{[AB]},M_{[CD]}\right]\\ =&\frac{1}{4}A^{[AB]}\wedge B^{[CD]}\otimes\left( \eta_{BC}M_{[AD]}\!+\!\eta_{AD}M_{[BC]}\!+\!\eta_{DB}M_{CA}\!+\!\eta_{AC}M_{DB} \right)\\ =&\frac{1}{2}\left(A^{[A}_{\phantom{A}C]}\wedge B^{ [CB]}-A^{[B}_{\phantom{A}C]}\wedge B^{[CA]}\right)\otimes M_{[AB]}.\end{split} \tag{102}\]
The curvature associated with \(\boldsymbol{\omega}\) is then,
\[\begin{split}\boldsymbol{\Omega}&=\mathrm{d} \boldsymbol{\omega}+\frac{1}{2}\left[\boldsymbol{\omega},\boldsymbol{\omega} \right]\\ &=\frac{1}{2}\left[\mathrm{d}\omega^{[AB]}+\omega^{[A}_{\phantom{A }C]}\wedge\omega^{[CB]}\right]\otimes M_{[AB]}\,,\end{split} \tag{103}\]
and the covariant derivative on twisted forms is given by,
\[\begin{split}\mathrm{d}_{\boldsymbol{\omega}}\boldsymbol{A}& =\mathrm{d}\boldsymbol{A}+[\boldsymbol{\omega},\boldsymbol{A}]\\ &=\frac{1}{2}\left[\mathrm{d}A^{[AB]}+\omega^{[A}_{\phantom{A}C]} \wedge A^{[CB]}-\omega^{[B}_{\phantom{A}C]}\wedge A^{[CA]}\right].\end{split} \tag{104}\]
|
2301.02904 | Sensitivity analysis for transportability in multi-study, multi-outcome
settings | Existing work in data fusion has covered identification of causal estimands
when integrating data from heterogeneous sources. These results typically
require additional assumptions to make valid estimation and inference. However,
there is little literature on transporting and generalizing causal effects in
multiple-outcome setting, where the primary outcome is systematically missing
on the study level but for which other outcome variables may serve as proxies.
We review an identification result developed in ongoing work that utilizes
information from these proxies to obtain more efficient estimators and the
corresponding key identification assumption. We then introduce methods for
assessing the sensitivity of this approach to the identification assumption. | Ngoc Q. Duong, Amy J. Pitts, Soohyun Kim, Caleb H. Miles | 2023-01-07T17:33:31Z | http://arxiv.org/abs/2301.02904v1 | # Sensitivity analysis for transportability in multi-study, multi-outcome settings
###### Abstract
Existing work in data fusion has covered identification of causal estimands when integrating data from heterogeneous sources. These results typically require additional assumptions to make valid estimation and inference. However, there is little literature on transporting and generalizing causal effects in multiple-outcome setting, where the primary outcome is systematically missing on the study level but for which other outcome variables may serve as proxies. We review an identification result developed in ongoing work that utilizes information from these proxies to obtain more efficient estimators and the corresponding key identification assumption. We then introduce methods for assessing the sensitivity of this approach to the identification assumption.
Department of Biostatistics, Mailman School of Public Health, Columbia University
_Keywords:_ Causal inference, Data fusion, External validity, Generalizability, Missing data, Proxy variable
Introduction
Research in clinical medicine and public health is often concerned with estimating the effect of some treatment in a specific target population. However, even in a randomized clinical trial, which is considered the gold-standard study design, ensuring external validity remains a challenge. This can be due to a variety of reasons, including non-random sampling, overly stringent exclusion criteria, or an ill-defined target population of interest (Tan et al., 2022; Kennedy-Martin et al., 2015). Meta-analysis is a commonly used tool to synthesize and generalize findings from published study-level summary statistics, but it tends to rely on strong, often implausible assumptions. An alternative approach that allows for more control over the nuances and heterogeneity across studies is to combine individual-level data, when available, from multiple studies, each of which may contain insufficient information to address a given scientific question by itself, but which collectively have the power to do so. There has been a growing body of work on generalizability and transportability methods, which can help address the problem of external validity of the effect estimates obtained from integrating individual-level data across studies.
Generalizability concerns the setting where the study population is a subset of the target population of interest while transportability addresses the setting where the study population is partially or completely external to the target population (Degtiar and Rose, 2023). Specifically, generalizability typically involves extending the causal effect estimate derived from a study as long as the covariates in the study population and the target population have common support (Gechter, 2015; Tipton, 2014). On the other hand, transportability entails extrapolating the effect estimated from a study in which some primary outcome of interest is observed to a population represented by a sample in which the outcome is not measured.
Existing methodologies involve directly transporting some estimated causal effect, e.g., the average treatment effect (ATE), from studies where the outcomes are observed to other studies with missing outcomes or across heterogeneous study designs and settings (Bareinboim and Pearl, 2016; Dong et al., 2020; Pearl and Bareinboim, 2014; Hunermund and Bareinboim, 2019), or to some broader target population (Dahabreh et al., 2020, 2020; Lesko et al., 2017; Westreich et al., 2017). When considering multiple studies, it is often the case that one will observe different outcomes at follow-up. However, existing methods do not take advantage of these other potentially correlated and informative outcome variables measured at follow-up, which could potentially be leveraged to achieve large efficiency gains. Existing outcome proxy-blind methods typically rely on an assumption of homogeneous conditional potential outcome means for valid transportation of estimates from one population to another. Sensitivity analysis strategies have been proposed to study the extent to which the violation of these assumptions will affect the estimates and inferences drawn (Nguyen et al., 2017; Dahabreh and Hernan, 2019; Dahabreh et al., 2022).
In ongoing work, we have developed a new strategy to more efficiently estimate the ATE from integrated data across multi-outcome studies, with inconsistent availability of the primary outcome of interest at the study level. The proposed methodology takes advantage of the availability of follow-up measurements of potential correlates of the main outcome to yield more precise estimates of the causal effects. In this article, we consider the key common outcome regression (or conditional exchangeability for study selection) assumption for transportability while leveraging these outcome proxies, which differs slightly from the common outcome regression assumption that has been traditionally used for transportability. We discuss the resulting bias when this assumption is not met, and develop methodology for sensitivity analysis to the violation of this assumption.
The remainder of the article is organized as follows. In Section 2, we discuss identification of the average treatment effect in the multi-study, multi-outcome setting. In Section 3, we discuss the bias incurred by violations of the key conditional exchangeability assumption. In Section 4, we compare the conditional exchangeability assumption in our setting with that used in settings that do not leverage outcome proxies. In Section 5, we develop methods for sensitivity analysis for when our assumption is violated. We demonstrate the empirical
performance of our proposed methods in a simulation study in Section 6, and conclude with a discussion in Section 7.
## 2 Data integration for studies with primary outcome missing systematically
### Study and data setting
In this setting, we let \(A\) be the treatment indicator, \(W\) be a set of covariates that are commonly observed across studies, \(Y\) be the primary outcome variable, the set \(\{T_{1},\ldots,T_{k}\}\) be all the potential outcome proxies measured at follow-up in any study, and \(J_{s}\) be the study-specific subset of \(\{T_{1},\ldots,T_{k}\}\) that is measured in study \(s\). Suppose there are \(\mathcal{S}\) studies that are ordered such that for each \(s\) in the first \(s^{*}\) studies, we observe the set of variables \((Y,A,J_{s},W)\), while for each \(s\) in the remaining \(\mathcal{S}-s^{*}\) studies, only the subset \((A,J_{s},W)\) is observed. In other words, \(Y\) is systematically missing in the latter set of studies. Unlike the standard setup in other works concerning effect transportability that only involves \((Y,A,W)\), we introduce the use of \(\mathcal{T}_{s}\), where \(\mathcal{T}_{s}\subset J_{s}\) is some user-specified subset of \(J_{s}\) for each study \(s\). \(\mathcal{T}_{s}\) can be chosen based on availability and subject matter knowledge, and must be chosen such that its elements are observed in at least one of the studies \(\{1,2,\ldots,s^{*}\}\).
Studies can be randomized experiments or observational; however, we will not consider scenarios in which some studies are randomized experiments and others are observational in this work. Then the study-specific average treatment effect and conditional average treatment effect can be written as:
\[ATE(s) =E(Y_{1}-Y_{0}\mid S=s)\] \[CATE(w,s) =E(Y_{1}-Y_{0}\mid W=w,S=s).\]
Accordingly, we can define the overall average treatment effect and conditional average treatment effect as:
\[ATE =\sum_{s=1}^{\mathcal{S}}\pi_{s}ATE(s)\] \[CATE(w) =E(Y_{1}-Y_{0}\mid W=w)\]
where the weights can be user-specified such that \(\sum_{s}\pi_{s}=1\). For instance, one can choose \(\pi_{s}=P(S=s)\), or the marginal probability of being in each study. Alternatively, we could define \(ATE=E_{Q_{W,S}}CATE(W,S)\) for a user-specified, known distribution \(Q_{W,S}\) of \(W\) and \(S\).
Since \(Y\) is not measured in \(s\in\{s^{*}+1,\ldots,\mathcal{S}\}\), we cannot directly estimate the ATE and CATE using data from these studies alone. Our purpose is to transport the ATE from the first \(s^{*}\) studies where \(Y\) is observed, to the remaining \(\mathcal{S}-s^{*}\) studies while also leveraging the information from the outcome proxy set \(\mathcal{T}_{s}\) to improve efficiency. For ease of notation, let \(\sigma_{s}\) be a subset of the first \(s^{*}\) studies in which both Y and \(\mathcal{T}_{s}\) are observed. We can then use this information from the studies that form \(\sigma_{s}\) to estimate the outcome regression that will allow us to transport the causal effects to study \(s\). In this setting, we have shown in ongoing, not-yet-published work that the ATE can be nonparametrically identified as:
\[\Psi^{ATE} =\sum_{s=1}^{s^{*}}\pi_{s}E\{E(Y\mid W,A=1,S=s)-E(Y\mid W,A=0,S=s) \mid S=s\}\] \[+\sum_{s=s^{*}+1}^{\mathcal{S}}\pi_{s}E[E\{E(Y\mid\mathcal{T}_{s},W,A=1,S\in\sigma_{s})\mid W,A=1,S=s\} \tag{1}\] \[\qquad\qquad\quad-E\{E(Y\mid\mathcal{T}_{s},W,A=0,S\in\sigma_{s}) \mid W,A=0,S=s\}\mid S=s].\]
The terms in the first sum are simply the standard identification formula for the (study-specific) average treatment effects when \(Y\) is observed. The second sum is identified since it only depends on the distribution of \(Y\) in the studies in \(\sigma_{s}\), i.e., in which \(Y\) is actually
observed.
Here, we introduced a modification to how transportability has traditionally been done by incorporating information from a set of outcomes measured at follow-up that are correlated with the main outcome of interest.
### Assumptions for Identification of the ATE
The ATE can be nonparametrically identified in this way under the following assumptions, which are standard for identification of the ATE when outcomes are fully observed:
**Assumption 1** (Positivity).: \(P(A=1\mid W=w)>0\) for all \(w\) with positive probability.
**Assumption 2** (Consistency).: \(Y=AY_{1}+(1-A)Y_{0}\).
**Assumption 3** (Within-study conditional exchangeability).: \[E[Y^{a}\mid W,A,S=s]=E[Y^{a}\mid W,S=s]\text{ for all }s.\]
The validity of our estimator relies on a fourth assumption that allows for the transportation of the effect across studies:
**Assumption 4** (Common outcome regression (proxy-aware version)).: \[E(Y\mid\mathcal{T}_{s},W,A=a,S=s)=E(Y\mid\mathcal{T}_{s},W,A=a,S\in\sigma_{s}) \text{ for all }s.\]
This is a missing at random (MAR)-type assumption, where \(S\) can in a sense be thought of as a missingness indicator, since missingness is systematic by study.
We can also introduce a fifth assumption that is not necessary for identification, but allows for more borrowing of information across studies, which can help with efficiency:
**Assumption 5** (Common distribution of outcome proxies).: \(\mathcal{T}_{s}\perp S\mid W,A\) for all \(s\).
This implies the distribution of \(\mathcal{T}_{s}\) conditional on treatment assignment and baseline covariates is the same across studies. Under this additional assumption, the identification result simplifies to:
\[ATE =\sum_{s=1}^{\mathcal{S}}\pi_{s}E[E\{E(Y\mid\mathcal{T}_{s},W,A=1,S \in\sigma_{s})\mid W,A=1\}\] \[\quad-E\{E(Y\mid\mathcal{T}_{s},W,A=0,S\in\sigma_{s})\mid W,A=0\} \mid S=s].\]
In ongoing work, we have developed a simple substitution estimator that involves replacing each expectation with a regression-based estimate and the outer expectation with an empirical mean.
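As a rough illustration of this estimator, a minimal sketch is given below (Python). The two-study layout, the column names, and the use of linear regressions for the nested outcome models are assumptions made purely for illustration; any flexible regression could be substituted.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def transported_ate(df, proxies, covs):
    """Substitution estimator for the proxy-leveraging terms in (1).

    df: DataFrame with columns A (treatment), S (1 = outcome observed,
        0 = outcome systematically missing), Y (primary outcome, NaN when
        S == 0), plus proxy columns `proxies` and baseline covariates `covs`.
    """
    src = df[df.S == 1]                      # studies forming sigma_s
    tgt = df[df.S == 0]                      # study with Y missing
    means = {}
    for a in (0, 1):
        # Stage 1: fit E(Y | T, W, A = a, S in sigma_s) where Y is observed.
        stage1 = LinearRegression().fit(
            src.loc[src.A == a, proxies + covs], src.loc[src.A == a, "Y"])
        # Stage 2: regress the stage-1 predictions on W among A = a units of
        # the target study, i.e. E{E(Y | T, W, A = a, S in sigma_s) | W, A = a, S = 0}.
        tgt_a = tgt[tgt.A == a]
        pseudo = stage1.predict(tgt_a[proxies + covs])
        stage2 = LinearRegression().fit(tgt_a[covs], pseudo)
        # Outer empirical mean over the covariate distribution of the target study.
        means[a] = stage2.predict(tgt[covs]).mean()
    return means[1] - means[0]
```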
For the outcome proxy-blind approach, in addition to the first three standard internal validity assumptions, Assumption 4 is replaced by a slightly different mean outcome exchangeability across studies assumption (exchangeability over \(S\)) (Dahabreh and Hernan, 2019; Lesko et al., 2017):
**Assumption 6** (Common outcome regression (proxy-blind version)).: \[E(Y\mid W,A=a,S=s)=E(Y\mid W,A=a,S\in\sigma_{s})\text{ for all }s.\]
Assumption 4 differs from Assumption 6 by additionally conditioning on \(\mathcal{T}_{s}\) for each study \(s\). Assumptions 4 and 5 together imply Assumption 6. In this article, we will only consider sensitivity analysis for the violation of Assumption 4. When Assumption 5 is violated, the ATE estimator based on Assumption 4 (i.e., the substitution estimator based on the identification formula (1)) will remain consistent.
## 3 Characterizing the bias resulting from violation of the identification assumption
The validity of \(\Psi^{ATE}\) depends on the key Assumption 4, which requires that there be no heterogeneity in the conditional outcome means given treatment, covariates, and outcome proxies between studies with and without missing outcome (\(Y\)) data. This allows the conditional outcome means, and correspondingly the ATE and CATE, estimable from one study to be transported to the others.
In practice, this can be a strong assumption, and it is untestable using the observed data. For instance, in previous unpublished work, we estimated the average treatment effect of cognitive remediation (CR) therapy on Social Behavioral Scale (SBS) score, a measure of social functioning, using harmonized data from three trials in the NIMH Database of Cognitive Training and Remediation Studies (DoCTRS) database. However, the degree of effectiveness of CR, especially on functional and occupational outcomes, was less evident and has been suggested to vary depending on the setting in which the treatment was administered (Barlati et al., 2013; Combs et al., 2008; McGurk et al., 2007; Wykes et al., 2007, 2011). When this assumption is violated, the substitution estimators described in the previous section will be biased. Therefore, we examine two strategies for sensitivity analysis in order to assess the robustness of estimates under varying degrees of assumption violation.
To quantify the degree of violation, let the bias functions be defined as:
\[u(A=1,\mathcal{T}_{s},W) =E(Y\mid\mathcal{T}_{s},W,A=1,S=s)-E(Y\mid\mathcal{T}_{s},W,A=1,S \in\sigma_{s}),\] \[u(A=0,\mathcal{T}_{s},W) =E(Y\mid\mathcal{T}_{s},W,A=0,S=s)-E(Y\mid\mathcal{T}_{s},W,A=0,S \in\sigma_{s}) \tag{2}\]
Then, equation (1) when assumption 4 is violated instead becomes:
\[\text{ATE} =\sum_{s=1}^{s^{*}}\pi_{s}E\{E(Y\mid W,A=1,S=s)-E(Y\mid W,A=0,S=s)\mid S=s\}\]
\[+\sum_{s=s^{*}+1}^{\mathcal{S}}\pi_{s}E[E\{E(Y\mid\mathcal{T}_{s},W,A=1,S\in\sigma_{s})\mid W,A=1,S=s\}\]
\[\qquad\qquad\quad-E\{E(Y\mid\mathcal{T}_{s},W,A=0,S\in\sigma_{s})\mid W,A=0,S=s\}\mid S=s]\]
\[+\sum_{s=s^{*}+1}^{\mathcal{S}}\pi_{s}E[E\{u(A=1,\mathcal{T}_{s},W)\mid W,A=1,S=s\}\]
\[\qquad\qquad\quad-E\{u(A=0,\mathcal{T}_{s},W)\mid W,A=0,S=s\}\mid S=s],\]
where the last sum is not identified. Then, the study-specific bias for study \(s\) is:
\[E\left[E\left\{u\left(A=1,\mathcal{T}_{s},W\right)\mid W,A=1,S=s \right\}-E\left\{u\left(A=0,\mathcal{T}_{s},W\right)\mid W,A=0,S=s\right\} \mid S=s\right]\]
\[=E[\delta^{*}(W)|S=s]. \tag{3}\]
By rearranging terms, \(\delta^{*}(W)\) can be alternatively written as:
\[E\left[E\left(Y\mid\mathcal{T}_{s},W,A=1,S=s\right)-E\left(Y\mid\mathcal{T}_{s},W,A=1,S\in\sigma_{s}\right)\mid W,A=1,S=s\right]\] \[-E\left[E\left(Y\mid\mathcal{T}_{s},W,A=0,S=s\right)-E\left(Y\mid\mathcal{T}_{s},W,A=0,S\in\sigma_{s}\right)\mid W,A=0,S=s\right]\] \[=E(Y\mid W,A=1,S=s)-E(Y\mid W,A=0,S=s)\] \[-\left\{E\left[E\left(Y\mid\mathcal{T}_{s},W,A=1,S\in\sigma_{s}\right)\mid W,A=1,S=s\right]\right.\] \[\left.\qquad-E\left[E\left(Y\mid\mathcal{T}_{s},W,A=0,S\in\sigma_{s}\right)\mid W,A=0,S=s\right]\right\}. \tag{4}\]
The latter term cannot be simplified unless Assumption 5 holds.
## 4 Comparison with bias functions in settings without incorporation of follow-up surrogate outcomes
In recent work, Dahabreh and Hernan (2019) developed sensitivity analysis for transportability considering a similar setting of two types of studies with and without missing outcomes. In the base case, there are two studies considered (missingness of the outcome variable denoted by a binary indicator \(S\)). To describe this setting using our notation, we simply have \(\sigma_{0}=\sigma_{1}=\{1\}\) (i.e., study \(S=1\) with the observed outcome of interest is used to impute the conditional outcome means for study \(S=0\)). Equivalently, for ease of interpretation in the base case, let \(S=1\) and \(S=0\) denote the study where the primary outcome of interest is observed and not observed, respectively.
In the setting where the model used to impute conditional potential outcomes does not utilize information from \(\mathcal{T}_{s}\), Dahabreh and Hernan (2019) define:
\[u(A=a,W)=E[Y\mid A=a,W,S=1]-E[Y\mid A=a,W,S=0].\]
The difference between these bias functions can then be obtained as:
\[\delta(W) =u(A=1,W)-u(A=0,W)\] \[=E[Y^{1}-Y^{0}\mid W,S=1]-E[Y^{1}-Y^{0}\mid W,S=0]\]
This expression can be qualitatively expressed as the difference in the conditional average treatment effects between the two studies. This qualitative interpretation can aid in conceptualizing and thinking about more appropriate values and range for sensitivity parameters when examining robustness of the results. More specifically, assuming higher levels of the outcome are preferred, if we believe the participants in studies with missing outcomes benefit less from treatment, then true \(\delta\) can be assumed to be positive and vice versa (Dahabreh and Hernan, 2019). Since our bias functions are conditional on the set of proxy outcomes, the
term \(\delta^{*}(W)\) in (4) unfortunately cannot be reduced further to a more interpretable statistical entity. When we take \(\mathcal{T}_{s}\) to be the empty set, the bias function \(\delta^{*}(W)\) reduces to the same expression.
## 5 Accounting for violation of the common outcome regression assumption through sensitivity analyses
We consider two scenarios, in which we assume the bias terms \(u(A=1,\mathcal{T}_{s},W)\) and \(u(A=0,\mathcal{T}_{s},W)\) to be 1) constants and 2) bounded functions of the outcome proxies and/or baseline covariates. The first scenario involves making a stronger assumption about the bias terms. The second scenario requires weaker assumptions but allows the bias terms to be non-constant.
### Bias functions assumed to be some fixed values
Although it might be more reasonable to assume that the bias functions are dependent on some baseline covariates, for ease of implementation of sensitivity analysis, one can also suppose they are constant. When \(u(A=1,\mathcal{T}_{s},W)\) and \(u(A=0,\mathcal{T}_{s},W)\) are independent of the baseline covariates \(W\) and the outcome proxy set \(\mathcal{T}_{s}\), the conditional expectations of the bias functions, and in turn, the term \(\delta^{*}(W)\) in (3), reduce to:
\[\delta=u_{1}-u_{0},\text{ where }\delta,u_{1},\text{ and }u_{0}\in\mathbb{R} \tag{5}\]
The sensitivity analysis involves correcting for the above-mentioned bias term by adding it back to the identification formula \(\Psi^{ATE}\), which relies on the common outcome regression assumption.
\[ATE= \sum_{s=1}^{s^{*}}\pi_{s}E\left\{E\left(Y\mid W,A=1,S\in\sigma_{s} \right)-E\left(Y\mid W,A=0,S\in\sigma_{s}\right)\mid S=s\right\}\] \[+\sum_{s=s^{*}+1}^{\mathcal{S}}\pi_{s}E\left[E\left\{E\left(Y\mid \mathcal{T}_{s},W,A=1,S\in\sigma_{s}\right)\mid W,A=1,S=s\right\}\right.\] \[-\left.E\left\{E\left(Y\mid\mathcal{T}_{s},W,A=0,S\in\sigma_{s} \right)\mid W,A=0,S=s\right\}\mid S=s\right]+\sum_{s=s^{*}+1}^{\mathcal{S}}\pi_ {s}\left(u_{1}-u_{0}\right)\] \[= \Psi^{ATE}+\sum_{s=s^{*}+1}^{\mathcal{S}}\pi_{s}\left(u_{1}-u_{0}\right) \tag{6}\]
where \(u_{1}\) and \(u_{0}\) are scalars.
In practice, the true bias term would be unknown. Thus, one strategy is to propose a grid of sensitivity parameters that covers the potential range of values in which the true bias term might fall. This grid of sensitivity parameters can be specified using subject-matter knowledge. We can then adjust for the bias term in the estimation step by adding back the different sensitivity parameters to the estimated ATE using our proposed method. This also allows for observation of the behavior of the estimated ATE as we vary the sensitivity parameters.
### Bounded covariate-dependent bias functions
One might also believe that the bias term is not constant at all levels of the baseline covariates and/or the outcome proxies. When the assumption of fixed-value bias terms is considered too strong, but the functional forms for bias terms cannot be confidently determined from existing knowledge of the data mechanism (as will typically be the case), one can still recover some information about the true ATE without having to correctly specify the bias terms. If we instead assume the bias terms to be some bounded functions, we can compute a bound around the (naive) ATE estimate that contains the true ATE by varying the bounds of these functions. This provides information on how far away the true ATE can be from the estimate
obtained, given the specified bounds on the bias terms.
Identifying the bounds for the bias term can be expressed as maximizing and minimizing the objective function:
\[E[E[u(A=1,\mathcal{T}_{s},W)\mid W,A=1,S=s]-E[u(A=0,\mathcal{T}_{s},W)\mid W,A=0,S =s]\mid S=s]\]
subject to the following constraints:
\[|u(A=1,\mathcal{T}_{s}=t_{s},W=w)|\leq\gamma_{1}\] \[|u(A=0,\mathcal{T}_{s}=t_{s},W=w)|\leq\gamma_{0}\]
for all \(t_{s}\) and \(w\), which implies \(|E[u(A=1,\mathcal{T}_{s},W)\mid W,A=1,S=s]|\leq\gamma_{1}\) and \(|E[u(A=0,\mathcal{T}_{s},W)\mid W,A=0,S=s]|\leq\gamma_{0}\) where \(\gamma_{1},\gamma_{0}\in\mathbb{R}^{+}\).
Then we have \(-(\gamma_{1}+\gamma_{0})\leq u(A=1,\mathcal{T}_{s},W)-u(A=0,\mathcal{T}_{s},W) \leq\gamma_{1}+\gamma_{0}\). If we have no reason to suspect we know more about the bounds of one bias function than the other (as will typically be the case), we may simply choose to specify a scalar sensitivity parameter \(\gamma\) to be the maximum of \(\gamma_{1}\) and \(\gamma_{0}\), in which case we have \(-2\gamma\leq u(A=1,\mathcal{T}_{s},W)-u(A=0,\mathcal{T}_{s},W)\leq 2\gamma\).
By equation (6) even though we do not know the form of the bias functions \(u(A=1,\mathcal{T}_{s},W)\) and \(u(A=0,\mathcal{T}_{s},W)\), we can partially recover the true ATE using the bounds around the naive estimate:
\[\Psi^{ATE}-2\max(\gamma_{1},\gamma_{0}) \leq ATE\leq\Psi^{ATE}+2\max(\gamma_{1},\gamma_{0})\] \[\Psi^{ATE}-2\gamma \leq ATE\leq\Psi^{ATE}+2\gamma \tag{7}\]
If the bias functions are in fact bounded by some value smaller than or equal to our specified values for the sensitivity bounds, the true ATE would fall between \([\Psi^{ATE}-2\gamma,\Psi^{ATE}+2\gamma]\). Then, the true ATE is partially identified without assumptions about the functional form
of \(u(A=1,\mathcal{T}_{s},W)\) and \(u(A=0,\mathcal{T}_{s},W)\). One can then use the bootstrap standard error for the substitution estimator of the identification formula (1) to determine the amount to add and subtract from the upper and lower bounds, respectively, in order to produce confidence intervals for the partial identification sets for each value of the sensitivity parameter. Since the sensitivity bounds are a deterministic function of the sensitivity parameter, bootstrapping need only be done once.
## 6 Simulations
### Data generating mechanism
We consider the setting of two studies, with \(S=1\) indicating the study where the primary outcome is available. We generate random sample draws with sample size \(n=100\) for both studies. The data generating mechanism is as follows. \(W,T_{0}\) come from independent standard normal distributions, and \(T_{1}\) comes from a normal distribution with mean and variance of 1. Then
\[T =I(A=1)\times T_{1}+I(A=0)\times T_{0}\] \[Y^{0} =-4T_{0}+W+\epsilon_{0}\] \[Y^{1} =4T_{1}+W+\epsilon_{1}\] \[Y =I(A=1)\times Y^{1}+I(A=0)\times Y^{0}\]
where \(\epsilon_{1},\epsilon_{0}\sim N(0,1)\).
Via these specifications, \(T\) fully mediates the relationship between \(A\) and \(Y\) (direct effect from \(A\) to \(Y\) is constrained to be 0). As a result, the true ATE = 4. This is also a more basic setting in which the vector T is observed in all studies.
Due to the nature of the DoCTRS database, which is comprised of randomized clinical trials, in our base setting we specified the marginal probability \(P(A=1)=0.5\), representing random treatment assignment. This treatment assignment satisfies the positivity and exchangeability assumptions.
Specifically, to incorporate the difference in conditional outcome means between the two types of studies, we added constant bias terms to the counterfactual outcomes \(Y^{0}\) and \(Y^{1}\) in the studies missing the outcome. As in the data generating step, we preserved the observed counterfactual outcome under the realized treatment assignment, which satisfies the consistency assumption. By (5), we have:
\[Y_{S=1}^{0} =Y_{S=0}^{0}+u_{0}\] \[Y_{S=1}^{1} =Y_{S=0}^{1}+u_{0}+\delta \tag{8}\]
for \(u_{0}\in\{-3,0,3\},\delta\in\{-2,0,2\}\).
Then the bias reduces to a single parameter \(\delta\), since it is no longer a function of \(u_{0}\) when computing the ATE:
\[E(Y^{1}-Y^{0}\mid S=1)=E(Y^{1}-Y^{0}\mid S=0)+\delta \tag{9}\]
In the case where the bias term is a function of baseline covariates and surrogate outcome, we had the following specification for the true bias:
\[u_{0} =b_{0}\times\sin{(T_{s}+W)}\] \[u_{1} =b_{1}\times\frac{\exp(T_{s}+W)}{1+\exp(T_{s}+W)}\]
for \(b_{0}\in\{2,3,4\}\) and \(b_{1}\in\{1,2,3\}\).
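For concreteness, the base version of this data generating mechanism can be simulated in a few lines; the sketch below (Python) is our reading of the setup above. The seed, the decision to express the constant-bias shift on the missing-outcome study (a relabeling of equation (8)), and the returned counterfactual columns are illustrative choices rather than details taken from the original simulation script.

```python
import numpy as np
import pandas as pd

def simulate_study(n=100, s=1, u0=0.0, delta=0.0, seed=0):
    """One study under the base DGP; s = 1 means the primary outcome Y is observed."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0, n)
    T0 = rng.normal(0.0, 1.0, n)
    T1 = rng.normal(1.0, 1.0, n)
    A = rng.binomial(1, 0.5, n)                  # randomized treatment assignment
    Y0 = -4 * T0 + W + rng.normal(0.0, 1.0, n)
    Y1 = 4 * T1 + W + rng.normal(0.0, 1.0, n)
    if s == 0:                                   # constant shifts, eq. (8) rearranged
        Y0, Y1 = Y0 - u0, Y1 - (u0 + delta)
    T = np.where(A == 1, T1, T0)                 # observed outcome proxy
    Y = np.where(A == 1, Y1, Y0)                 # consistency
    if s == 0:
        Y = np.full(n, np.nan)                   # Y systematically missing
    # Y0, Y1 are kept only so the true (study-specific) effects can be computed.
    return pd.DataFrame({"S": s, "A": A, "W": W, "T": T, "Y": Y, "Y0": Y0, "Y1": Y1})

# Two studies of size 100; the true overall ATE is 4 by construction when u0 = delta = 0.
data = pd.concat([simulate_study(s=1, seed=1), simulate_study(s=0, u0=3, delta=2, seed=2)])
```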
### Adjusting for sensitivity parameter in estimation step
**Scenario 1.** When the bias terms are assumed to be constants, a natural approach would be to specify a two-dimensional grid of sensitivity parameters for both scalars \(u_{0}\) and \(u_{1}\). However, by (8), this is equivalent to specifying \(u_{0}\) (or \(u_{1}\)) and \(\delta\). In fact, since the constant terms \(u_{0}\) (or \(u_{1}\)) cancel out during adjustment, it is sufficient to specify a single sensitivity parameter \(\delta\) (see (9)). We also note that \(\delta\) being 0 does not necessarily imply Assumption 4 is met, since the bias terms \(u_{0}\) and \(u_{1}\) could cancel exactly.
To implement sensitivity analysis, we follow the steps:
1. Specify a grid of sensitivity parameters \(\delta\). The grid should be reasonably wide so as to contain the true \(\delta\).
2. Estimate the naively transported ATE using the identification result in (1).
3. Sequentially add the values in the sensitivity parameter grid to the naively estimated ATE, using the result in (6), to obtain the bias-corrected ATE estimates.
We then plotted the bias-corrected estimates under different sensitivity parameters against the true ATE. Additionally, we bootstrapped the bias-corrected estimates to obtain the 95% confidence intervals and explore coverage across different values of \(u_{0}\) and \(\delta\).
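A minimal sketch of steps 2-3 is shown below (Python). Here `naive_ate` stands for the substitution estimate of \(\Psi^{ATE}\) from (1) and is assumed to have been computed already; the grid of \(\delta\) values and the study-weight argument are illustrative.

```python
import numpy as np

def adjust_constant_bias(naive_ate, pi_missing, delta_grid=np.arange(-3, 3.5, 0.5)):
    """Bias-corrected ATE estimates over a grid of sensitivity parameters (eq. (6)).

    pi_missing is the total weight of the studies with missing outcomes,
    i.e. the sum of pi_s over s > s*, e.g. 0.5 with two equally weighted studies.
    """
    return {round(float(d), 2): naive_ate + pi_missing * d for d in delta_grid}

# adjusted = adjust_constant_bias(naive_ate=3.1, pi_missing=0.5)
```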
**Scenario 2.** When we want to make minimal assumptions about the functional form of the bias, we can still perform sensitivity analysis on the true ATE using the following steps:
1. Specify a grid of sensitivity parameters \(\gamma\) that plausibly includes the upper and lower bounds of the true bias functions.
2. Compute the "naive" ATE estimate using the identification result in (1).
3. Construct the upper and lower bounds around the estimated ATE using (7), where \(\gamma\) is replaced with the sensitivity parameters.
We also plot the naive ATE estimates and the bounds around these estimates at each value of the sensitivity parameters. In practice, the bias functions are of course unknown and cannot be estimated from observed data. Therefore, when specifying the grid of sensitivity parameters, the analyst needs to employ subject matter knowledge about the data generating mechanism to select values of \(\delta\) and \(\gamma\).
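The corresponding computation for Scenario 2 amounts to forming the interval in (7) at each grid value; a small sketch is below (Python), where the naive estimate and its bootstrap standard error are assumed inputs.

```python
import numpy as np

def sensitivity_bounds(naive_ate, gamma_grid=np.linspace(0, 3, 7), boot_se=None, z=1.96):
    """Partial-identification bounds [Psi - 2*gamma, Psi + 2*gamma] from (7),
    optionally widened by a normal-approximation bootstrap margin."""
    bounds = {}
    for g in gamma_grid:
        lo, hi = naive_ate - 2 * g, naive_ate + 2 * g
        if boot_se is not None:               # CIs for the partial identification set
            lo, hi = lo - z * boot_se, hi + z * boot_se
        bounds[round(float(g), 2)] = (lo, hi)
    return bounds
```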
We then explore the behavior of the adjusted estimators via simulations. In the first case, we focused on the general unbiasedness of the correctly-adjusted point estimate for both the overall ATE and ATE among studies with missing outcomes, as well as the 95% CI coverage across degrees of assumption violation (i.e., across values of true \(u_{0}\) and \(\delta\)). In the second case, we looked for correct bounding of the true ATE.
### Simulation Results
#### 6.3.1 Bias terms as constants
We examine the estimates produced by our method under the different degrees of violation of assumption 4, before and after taking into account the specified sensitivity parameter. Figure 1 shows the estimates (95% CI) for the true overall ATE using our method under varying magnitudes and directions of the bias terms from one single simulation.
In the presence of non-zero bias, when the value of the sensitivity parameter \(\delta\) is specified to equal the true \(\delta\), the ATE estimate after bias adjustment tends to be closer to the true ATE than the unadjusted estimate. In addition, the corresponding 95% CIs are expected to cover the true ATE 95% of the time. Although coverage probability can be examined more robustly using bootstrapped confidence intervals across all simulations, in Fig. 1, 2, and A.1-A.4 the 95% CIs cover the true ATE at the value of the sensitivity parameter that reflects the degree of assumption violation in all but one instance, which is in line with our expectations.
Figure 1: Sensitivity-parameter-adjusted ATE estimate shown against the true overall ATE across values of the true bias and sensitivity parameter; n=100 for each study, 95% CI constructed from 1000 bootstrap samples. When sensitivity parameter \(\delta=0\), the adjusted estimate corresponds to the unadjusted estimate. Horizontal dotted line shows the true ATE given true \(\delta\); vertical dotted line indicates sensitivity parameter \(\delta\) equals true \(\delta\)
Figure 2 shows similar results for the study-specific ATE estimates in the study with missing outcomes (before and after bias adjustment) from the same simulated data. Compared to the results in Figure 1, after adjustment using the correct sensitivity parameters, the 95% CIs contain the true ATE more frequently than the CIs of the unadjusted estimates in the study with missing primary outcome. Figure 2 also shows an example where inference is sensitive to the violation of our assumption at a magnitude of \(\delta\) between -1 and -2 (\(u_{0}=-3\), bottom left panel), between which the 95% CI changes from not containing zero to containing zero.
Figure 2: Sensitivity-parameter-adjusted ATE estimates shown against the true study-specific ATE in the study in which the outcome is unobserved across values of the true bias and sensitivity parameter; n=100 for each study, 95% CI constructed from 1000 bootstrap samples. When sensitivity parameter \(\delta=0\), the adjusted estimate corresponds to the unadjusted estimate. Horizontal dotted line shows the true study-specific ATE given true \(\delta\); vertical dotted line indicates sensitivity parameter \(\delta\) equals true \(\delta\)
When we increased the sample size (n=200 and n=500), we saw general reductions in the errors of these single estimates (Figures A.1, A.3). In most cases, even when there is error in the adjusted estimates, the 95% CI bootstrap confidence intervals provide good coverage (Figures 1, A.1, A.3). The reduction in error and improved coverage are more pronounced when estimating the study-specific effect in the study with missing outcomes than in the overall ATE combining the two studies (Figures A.2, A.4).
We also ran 1000 simulations under the same data generating mechanism and obtained the unadjusted and sensitivity-parameter-adjusted estimates for each simulation. We then showed the mean and 2.5th and 97.5th quantiles of these estimates under each combination of the true bias values. We can see that when averaged across 1000 simulations, the adjusted estimates closely approximate the true ATE (Figures 3, 4) when the true value of \(\delta\) is used for the sensitivity parameter.
Figure 3: Sensitivity-parameter-adjusted ATE estimates shown against the true overall ATE across values of the true bias sensitivity parameter; mean, 2.5th and 97.5th quantiles obtained from 1000 simulations. When sensitivity parameter \(\delta=0\), the adjusted estimate corresponds to the unadjusted estimate. Horizontal dotted line shows the true overall ATE given true \(\delta\); vertical dotted line indicates sensitivity parameter \(\delta\) equals true \(\delta\)
When approximate sensitivity parameters \(\delta\) are used (\(\delta\in\{-1,1\}\) when true \(\delta\in\{-2,2\}\)), the middle 95% values of adjusted estimates also cover the true ATE whereas those of unadjusted estimates do not (Figure 4).
Figure 5 compares the errors in the estimates and sensitivity of associated inferences between the outcome proxy-blind method of Dahabreh et al. (2020); Lesko et al. (2017) and our proposed method across 1000 simulations.
Figure 4: Sensitivity-parameter-adjusted ATE estimates shown against the true ATE in the study with missing outcome across values of the true bias and sensitivity parameter; mean, 2.5th and 97.5th quantiles obtained from 1000 simulations. When sensitivity parameter \(\delta\) = 0, the adjusted estimate corresponds to the unadjusted estimate. Horizontal dotted line shows the true study-specific ATE given true \(\delta\); vertical dotted line indicates sensitivity parameter \(\delta\) equals true \(\delta\)
The distributions of the estimates from both methods are centered on the true parameter. However, the estimates tend to be more precise when we utilize the information from the outcome proxy (as demonstrated through the narrower 2.5th-97.5th quantile range). The efficiency gains have implications for the sensitivity analysis, since resulting inferences are not as sensitive given analogous magnitude in violation of the identification assumption 4.
Assumption 4 implies both \(u_{0}\) and \(u_{1}\) equal 0. As a result, the true \(\delta\) also equals 0. This suggests transportation of the conditional potential outcome means, and in turn, the conditional average treatment effects, can be done without incurring bias (vertical middle panels, Figure 3). We also observed that, when \(\delta\) is 0, regardless of the values of \(u_{0}\) (and \(u_{1}\)), there is also no bias (vertical middle panels, Figure 3) in the unadjusted estimator. In both cases, no bias correction would be necessary, and incorporating a non-zero \(\delta\) sensitivity parameter will actually introduce bias to the estimate.
Figure 5: Sensitivity-parameter-adjusted ATE estimates obtained from our proposed method and the outcome proxy-blind method; mean, 2.5th and 97.5th quantiles obtained from 1000 simulations. When sensitivity parameter \(\delta=0\), the adjusted estimate corresponds to the unadjusted estimate. Horizontal dotted line shows the true overall ATE given true \(\delta\); vertical dotted line indicates sensitivity parameter \(\delta\) equals true \(\delta\)
#### 6.3.2 Bias terms as bounded functions
When the sensitivity parameter \(\gamma\) is greater or equal to \(\max\{\gamma_{0},\gamma_{1}\}\) for the true function bounds \(\gamma_{0}\) and \(\gamma_{1}\), the bounds always include the true ATE when the bias functions are bounded by \(\gamma_{0}\) and \(\gamma_{1}\) (Figure 6).
Although this approach requires minimal assumptions about the bias functional form, it can also be conservative since the true bias functions are unlikely to evaluate to the bounds across the domain of the functions. For instance, the bottom three panels of Figure 6 show that when the sensitivity parameter \(\gamma\) is greater than or equal to max(true \(\gamma_{0}\), true \(\gamma_{1}\)), while the bounds on the estimate contain the true ATE, they also contain the null value zero. On the other hand, these bounds do not rely on an assumption of constant bias functions, which we may often have no reason to believe. Here, we demonstrated through simulations that sensitivity analysis with relaxed and more credible assumptions can still provide helpful information about the parameter of interest. However, when the bounds are too narrow or too wide, sensitivity analysis using bounded bias functions might not be accurate (i.e., not containing the true parameter) or useful (i.e., containing the null value when the truth is non-null), respectively.
Figure 6: ATE estimates with sensitivity bounds shown against the true overall ATE across values of the true bias and sensitivity parameter. When sensitivity parameter \(\gamma=0\), the bounds collapse to a point estimate. Blue horizontal dotted line shows the true study-specific ATE given true bias functions
## 7 Discussion
In this paper, we discussed a data-integrative method that utilizes information from available proxies of the outcome of interest measured at follow-up for efficiency gains. We then presented two sensitivity analysis strategies specific to this approach for causal effect transportation when the identification assumption is violated. Our modification to the identification of the ATE in (1) allows for more efficient estimators given sufficiently strong outcome proxies. As a result, our bias functions also have similar, yet distinct, interpretations compared to the bias functions of Dahabreh and Hernan (2019).
When the bias terms are assumed to be constants, we can obtain different bias-adjusted point estimates based on our specification of the sensitivity parameters. Additionally, via obtaining the 95% bootstrap confidence interval for the bias-adjusted estimates, we can examine the robustness of inferences made using our method under varying magnitudes of assumption violation. Specifically, beyond certain values of the sensitivity parameters, the 95% CI will cross the null value 0. These are the degrees of violation that can affect inferences (where the 95% CI suggest a change from significant results to non-significant results).
We also proposed sensitivity analysis using bounded bias functions as an alternative when one believes the assumption of a fixed-value bias term is too strong. This approach allows for inferences with minimal assumptions about the unobserved bias functions but can still provide useful information about the parameter of interest. Due to fewer assumptions being made, the results are more conservative and robust, hence more reasonable and credible.
Specifically, although we are unable to obtain a point estimate, sensitivity analysis using bounded bias functions can still be informative in the sense of providing information about the general direction of the parameter of interest (beneficial or harmful). This method is generally more conservative if the bias functions are not close to their specified bounds over most of their domain, or if there is a large difference between the extrema of the two bias functions.
Correct specification of the bias functions would allow for more precise and informative estimation of the true ATE. However, since they are generally unknown and non-estimable from observed data, sensitivity analysis will typically be the realistic course of action.
When conducting sensitivity analysis, the analyst can start off by specifying a wide grid of the sensitivity parameter and examining the behaviors of the point estimates and 95% CI (first approach) as well as bounds around the estimates (second approach). They can then search for the "critical" sensitivity parameters that still suggest rejection of the null hypothesis, i.e., the 95% CI (in the first case) and bounds around the estimate (in the second case) that do not contain 0. It can be determined if greater bias is plausible by using background knowledge of the data generating mechanism or further hypothesizing about such mechanism. If there is little or no evidence that the true bias functions exceed these critical sensitivity parameters, one can be more comfortable in concluding that the observed effect and associated inferences are robust to violation of the transportability assumption (Ding and VanderWeele, 2016; Cornfield et al., 1959).
|
2307.00245 | Deep Angiogram: Trivializing Retinal Vessel Segmentation | Among the research efforts to segment the retinal vasculature from fundus
images, deep learning models consistently achieve superior performance.
However, this data-driven approach is very sensitive to domain shifts. For
fundus images, such data distribution changes can easily be caused by
variations in illumination conditions as well as the presence of
disease-related features such as hemorrhages and drusen. Since the source
domain may not include all possible types of pathological cases, a model that
can robustly recognize vessels on unseen domains is desirable but remains
elusive, despite many proposed segmentation networks of ever-increasing
complexity. In this work, we propose a contrastive variational auto-encoder
that can filter out irrelevant features and synthesize a latent image, named
deep angiogram, representing only the retinal vessels. Then segmentation can be
readily accomplished by thresholding the deep angiogram. The generalizability
of the synthetic network is improved by the contrastive loss that makes the
model less sensitive to variations of image contrast and noisy features.
Compared to baseline deep segmentation networks, our model achieves higher
segmentation performance via simple thresholding. Our experiments show that the
model can generate stable angiograms on different target domains, providing
excellent visualization of vessels and a non-invasive, safe alternative to
fluorescein angiography. | Dewei Hu, Xing Yao, Jiacheng Wang, Yuankai K. Tao, Ipek Oguz | 2023-07-01T06:13:10Z | http://arxiv.org/abs/2307.00245v1 | # Deep Angiogram: Trivializing Retinal Vessel Segmentation
###### Abstract
Among the research efforts to segment the retinal vasculature from fundus images, deep learning models consistently achieve superior performance. However, this data-driven approach is very sensitive to domain shifts. For fundus images, such data distribution changes can easily be caused by variations in illumination conditions as well as the presence of disease-related features such as hemorrhages and drusen. Since the source domain may not include all possible types of pathological cases, a model that can robustly recognize vessels on unseen domains is desirable but remains elusive, despite many proposed segmentation networks of ever-increasing complexity. In this work, we propose a contrastive variational auto-encoder that can filter out irrelevant features and synthesize a latent image, named deep angiogram, representing only the retinal vessels. Then segmentation can be readily accomplished by thresholding the deep angiogram. The generalizability of the synthetic network is improved by the contrastive loss that makes the model less sensitive to variations of image contrast and noisy features. Compared to baseline deep segmentation networks, our model achieves higher segmentation performance via simple thresholding. Our experiments show that the model can generate stable angiograms on different target domains, providing excellent visualization of vessels and a non-invasive, safe alternative to fluorescein angiography.
deep learning, vessel enhancement, vessel segmentation, domain generalization
Send correspondence to Ipek Oguz ([email protected])
## 1 Introduction
Retinal fundus photography is a cheap, fast and non-invasive modality that reveals essential anatomical features including optic disc, optic cup, macula, fovea, vessels and lesions such as hemorrhages and exudates [1]. Therefore, it is widely used for the diagnosis of diseases such as diabetic retinopathy [2], glaucoma [3] and age-related macular degeneration [4]. While fundus photography is broadly used as a low-cost screening tool, it does not provide sufficient contrast to resolve clinically relevant vascular features, and exogenous indocyanine green angiography (ICG)/fluorescein angiography (FA) remain the standard of care for visualizing/quantifying retinal vasculopathies. An algorithm that can provide accurate vessel segmentation from these fundus images would have profound impact on future clinical practice. In recent years, deep learning models [5] have achieved remarkable success in this task. Nevertheless, the domain shift induced by variations in image contrast and presence of unseen pathological features in testing data can dramatically degrade the performance of deep models.
Recent research explored three main types of domain generalization methods [6]: domain randomization, representation learning and general learning strategy. Domain randomization augments the training data to extend the source domain [7], improving the likelihood that an unseen target domain overlaps with the training domain. Representation learning refers to the disentanglement of features that are invariant to different domains [8]. A typical general learning strategy is meta-learning: for example, Li et al. simulate the domain shift by splitting the source domain into meta-train and meta-test [9].
In this work, we leverage both domain randomization and representation learning approaches to train a model that has superior generalizability across different domains. We augment the source domain by the contrast limited adaptive histogram equalization (CLAHE) [10] with clip limit \(\epsilon\in\mathcal{N}\). In addition to well-enhanced contrast for vessels, the augmented images also have exaggerated irrelevant structures including noise and lesions. Inspired by the idea of disentangling the shared features in two images presented in our previous work [11, 12], we leverage a variational auto-encoder (VAE) to extract the representation of vessels. However, as we showed in [11], this latent image may have an arbitrary style that contains unwanted features. We tackle this challenge by introducing a contrastive loss such that vessels are the only features in the synthetic image. We name the result a _deep angiogram_. Then, the segmentation task is simply reduced
to Otsu thresholding [13]. Without the irrelevant features, the visibility of the vasculature is drastically improved in the deep angiogram compared to other vessel enhancement approaches [14]. We evaluate the generalizability of our model by the segmentation performance on the target domains. For baseline models, we trained two segmentation networks on the source domain that take the green channel fundus image and the principle component analysis (PCA) image as the input respectively. The result indicates that the proposed method generalizes better on target domains and achieves higher segmentation performance than deep segmentation networks, by simple thresholding.
## 2 Methods
### Causal Feature Extraction
Fig. 1(a) shows our VAE model composed by the encoder \(E_{\theta}\) and the decoder \(D_{\varphi}\). The input image is \(x\) and the supervision is provided by the label \(y\). As we have previously shown [11, 12], when the latent manifold of the VAE has the same dimension with input \(x\), the encoder is able to enhance the shared features in \(x\) and \(y\). Intuitively, if an image is regarded as a collection of representations, then \((x\cap y)\subseteq E_{\theta}(x)\) should hold to guarantee that there is no essential information missing in the output \(\hat{y}\). In the context of causal learning, \(x\cap y\) is the set of causal features for the final prediction. In this implementation, the fundus image \(x\) includes information of many anatomical structures such as optic disc, vessels, macula and lesions, whereas the causal features for the segmentation task contain just the vessels, so ideally the latent image should be a vessel map without any irrelevant features, i.e., \((x\cap y)=E_{\theta}(x)\).
As suggested in Fig. 1, since we want to put most of the workload on the encoder \(E_{\theta}\), it is designed to have more learnable parameters than the decoder \(D_{\varphi}\). Both \(E_{\theta}\) and \(D_{\varphi}\) have residual U-Net architecture. Note that the decoder \(D_{\varphi}\) will not be applied in the testing since its purpose is to simply provide supervision to \(E_{\theta}\) during training. The segmentation loss for the decoder is set to be a combination of cross-entropy and Dice loss:
\[\mathcal{L}_{seg}=-\frac{1}{N}\sum_{n=1}^{N}y_{n}\log\hat{y}_{n}+\left(1-\frac{2\sum_{n=1}^{N}y_{n}\hat{y}_{n}}{\sum_{n=1}^{N}\left(y_{n}^{2}+\hat{y}_{n}^{2}\right)}\right) \tag{1}\]
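A short PyTorch-style sketch of this loss is given below; the flattening of the prediction and label maps and the small constant added for numerical stability are implementation assumptions rather than details taken from the paper.

```python
import torch

def seg_loss(y_hat, y, eps=1e-6):
    """Cross-entropy plus Dice terms following Eq. (1); y_hat and y are
    per-pixel probability and label maps of the same shape."""
    y_hat, y = y_hat.flatten(), y.flatten()
    ce = -(y * torch.log(y_hat + eps)).mean()
    dice = 1 - (2 * (y * y_hat).sum() + eps) / ((y ** 2).sum() + (y_hat ** 2).sum() + eps)
    return ce + dice
```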
### Domain Randomization
There are two major causes for distribution shift of fundus images. First, within a well-curated dataset (e.g., DRIVE [15]), the image contrast is usually consistent. A model trained on such a dataset may struggle with a poor-contrast test image.
Figure 1: **(a)** The deep angiogram model structure. \(x\) is the input fundus image, \(x^{\prime}=C_{\epsilon}(x)\) is the CLAHE-enhanced image with clip limit \(\epsilon\). \(y\) is the ground truth. \(E_{\theta}\) is a residual U-Net that serves as the encoder of the VAE. \(D_{\varphi}\) is the corresponding decoder. \(\mathcal{L}_{cont}\) and \(\mathcal{L}_{seg}\) represent the contrastive loss and segmentation loss. The dashed line on \(D_{\varphi}\) indicates it will not be applied in testing. **(b)** The source and target domains. **a,** DRIVE, **b,** HRF, **c,** STARE, **d,** ARIA.
Second, since a given dataset is unlikely to exhaustively provide samples of all possible pathologies, unseen features such as drusen and hemorrhages can be problematic during testing.
To improve the robustness of the model, we randomize the source domain data by CLAHE[10] in addition to other commonly used augmentation methods (e.g., rotation). For an input image \(x\), we apply CLAHE \(C_{\epsilon}\) to all the color channels with a random clip limit \(\epsilon\in\mathcal{N}(5,1)\). In the resultant image \(x^{\prime}\), the contrast of vessels are strongly enhanced, as well as the background noise. Then as in Fig. 1, we introduce a contrastive loss \(\mathcal{L}_{cont}\) for the latent image to guarantee that the model is not distracted by this exaggerated noise and provides stable visualization for input with various contrasts. The loss function is defined as the sum of the \(L_{2}\) loss and the structural similarity (SSIM) loss.
\[\mathcal{L}_{cont}=\|E_{\theta}(x)-E_{\theta}(x^{\prime})\|_{2}+SSIM(E_{\theta}(x),E_{\theta}(x^{\prime})) \tag{2}\]
The SSIM loss is defined as \(SSIM(x,y)=\frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_ {y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})}\), where \(\mu\) and \(\sigma\) represent the mean and standard deviation of the image, and \(c_{1}\) and \(c_{2}\) are constants.
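To make the augmentation and the contrastive objective concrete, a rough sketch follows (Python, using OpenCV and PyTorch). The tile grid size, the lower clip-limit floor, and the use of a whole-image SSIM (rather than the usual windowed SSIM) are simplifying assumptions; the SSIM term is written here as \(1-SSIM\) so that identical latent images give zero loss, whereas Eq. (2) writes the term as \(SSIM\) directly.

```python
import random
import cv2
import torch

def random_clahe(img_uint8, mean=5.0, std=1.0):
    """CLAHE applied to every color channel with a random clip limit (Sec. 2.2)."""
    clip = max(random.gauss(mean, std), 0.1)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    return cv2.merge([clahe.apply(c) for c in cv2.split(img_uint8)])

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Whole-image SSIM following the definition in the text (a simplification)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def contrastive_loss(z, z_aug):
    """L2 distance plus an SSIM-based term between the latent images E(x) and E(x')."""
    return torch.norm(z - z_aug, p=2) + (1 - ssim_global(z, z_aug))
```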
Figure 2: Test examples from the target domains, ARIA and STARE. Below each image, close-up panels show two highlighted areas (red and yellow boxes) for easier comparison. Deep angiograms provide excellent vessel clarity.
### Experiments
**Baseline Methods.** Since the color image is more sensitive to domain shift, it is common to convert the fundus image to grayscale as pre-processing, typically by extracting the green channel or using principle component analysis (PCA). We train a segmentation network that has the same architecture as \(E_{\theta}\) with either the green channel or the PCA as input. We compare these two networks to Otsu thresholding of deep angiograms.
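For reference, thresholding a deep angiogram takes only a couple of lines; the sketch below uses OpenCV's Otsu implementation and assumes the angiogram has already been rescaled to \([0,1]\).

```python
import cv2
import numpy as np

def segment_angiogram(angiogram):
    """Binarize a deep angiogram (float array in [0, 1]) with Otsu's method."""
    img8 = np.clip(angiogram * 255, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask > 0
```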
**Datasets.** We use four publicly available fundus datasets as shown in Fig. 1(b). The **DRIVE** dataset[15] consists of 20 labelled images of size \(565\times 584\). The **HRF** dataset[16] contains 45 labelled images of size \(3504\times 2336\). The **STARE** dataset[17] includes 20 labelled images of size \(700\times 605\). The **ARIA** dataset[18] includes 138 labelled images of size \(768\times 576\). DRIVE and HRF are set as source domain, whereas STARE and ARIA are used for testing.
**Implementation Details.** All networks are trained and tested on an NVIDIA RTX 2080TI 11GB GPU. We use a batch size of 4 and train for 300 epochs. We use the Adam optimizer with an initial learning rate of \(5\times 10^{-4}\) for the proposed VAE and \(1\times 10^{-3}\) for the baseline segmentation networks. The learning rate for both networks decays by 0.5 every 3 epochs.
## 3 Results and Conclusion
Fig. 2 shows a test example from each of the target domains. We observe that for different datasets, the manual annotations include varying amounts of detail: the label for the STARE dataset contains many more small vessels than ARIA. In the ARIA example, the deep angiogram is able to enhance the thin vessels with very poor contrast. This is also evident from the big vessels seen at the bottom left quadrant of the image where the illumination is low. Moreover, the angiogram filters out the circular artifacts seen within the red box. In the STARE example, our model extracts most of the vasculature including the faintly visible fine vessels. These tiny vessels have relatively lower intensity in the deep angiogram, which suggests lower confidence. Compared to the manual label, the deep angiogram can also delineate the vessel diameter more precisely.
We quantitatively evaluate the vessel segmentation performance in Fig. 3. By simple thresholding of the deep angiogram, we obtain better vessel maps than the segmentation networks that use the green channel and PCA image as inputs.
The proposed method can effectively extract a specific type of feature from a complex context. Specific to retinal vessels, our model can generate stable deep angiograms that dramatically enhance small vessels with poor contrast for color fundus images from unseen domains. Hence, deep angiogram is a low-cost method that can be performed using standard fundus photography technologies, including portable handheld systems. The ability to resolve vascular features without the need for exogenous contrast injections significantly reduces the clinical expertise/equipment/cost of retinal angiography. Integration of these technologies with recent demonstrations of cellphone-based fundus photography methods and remote diagnostic technologies can move retinal disease screening out of the clinic and dramatically expand the impact of color fundus photography.
Figure 3: Quantitative evaluation of the segmentation results on the two target domains. From left to right: Dice coefficient, accuracy, sensitivity, specificity. **Blue**, segmentation network trained on the green channel. **Orange**, segmentation network trained on the PCA image. **Green,** segmentation obtained by thresholding the deep angiogram.
## 4 Acknowledgements
This work is supported by the Vanderbilt University Discovery Grant Program.
|
2308.01906 | Reasoning in Large Language Models Through Symbolic Math Word Problems | Large language models (LLMs) have revolutionized NLP by solving downstream
tasks with little to no labeled data. Despite their versatile abilities, the
larger question of their ability to reason remains ill-understood. This paper
addresses reasoning in math word problems (MWPs) by studying symbolic versions
of the numeric problems, since a symbolic expression is a "concise explanation"
of the numeric answer. We create and use a symbolic version of the SVAMP
dataset and find that GPT-3's davinci-002 model also has good zero-shot
accuracy on symbolic MWPs. To evaluate the faithfulness of the model's
reasoning, we go beyond accuracy and additionally evaluate the alignment
between the final answer and the outputted reasoning, which correspond to
numeric and symbolic answers respectively for MWPs. We explore a self-prompting
approach to encourage the symbolic reasoning to align with the numeric answer,
thus equipping the LLM with the ability to provide a concise and verifiable
reasoning and making it more interpretable. Surprisingly, self-prompting also
improves the symbolic accuracy to be higher than both the numeric and symbolic
accuracies, thus providing an ensembling effect. The SVAMP_Sym dataset will be
released for future research on symbolic math problems. | Vedant Gaur, Nikunj Saunshi | 2023-08-03T17:59:27Z | http://arxiv.org/abs/2308.01906v1 | # Reasoning in Large Language Models Through Symbolic Math Word Problems
###### Abstract
Large language models (LLMs) have revolutionized NLP by solving downstream tasks with little to no labeled data. Despite their versatile abilities, the larger question of their ability to reason remains ill-understood. This paper addresses reasoning in math word problems (MWPs) by studying symbolic versions of the numeric problems, since a symbolic expression is a "concise explanation" of the numeric answer. We create and use a symbolic version of the SVAMP dataset and find that GPT-3's davinci-002 model also has good zero-shot accuracy on symbolic MWPs. To evaluate the faithfulness of the model's reasoning, we go beyond accuracy and additionally evaluate the _alignment_ between the final answer and the outputted reasoning, which correspond to numeric and symbolic answers respectively for MWPs. We explore a _self-prompting_ approach to encourage the symbolic reasoning to align with the numeric answer, thus equipping the LLM with the ability to provide a _concise and verifiable_ reasoning and making it more interpretable. Surprisingly, self-prompting also improves the symbolic accuracy to be higher than both the numeric and symbolic accuracies, thus providing an ensembling effect. The SVAMP-Sym dataset will be released for future research on symbolic math problems.
## 1 Introduction
Large language models (LLMs), with hundreds of billions of parameters, can solve a wide range of NLP tasks such as machine translation, question-answering, etc., taking us closer to general-purpose intelligent agents. The initial success of GPT-3 Brown et al. (2020) has led to many other LLMs Rae et al. (2021); Smith et al. (2022); Chowdhery et al. (2022) which have, perhaps surprisingly, taken big strides in solving difficult tasks like common sense reasoning, math and science problems Lewkowycz et al. (2022), and writing code Li et al. (2022).
Despite the incredible successes, we have little understanding of why LLMs are effective at problems that require reasoning. In fact we have limited techniques to quantifiably study the question of reasoning beyond just evaluating accuracy. Recent ideas like Chain-of-Thought prompting (CoT) Wei et al. (2022); Kojima et al. (2022) encourage the model to "think step by step" and output a verbose reasoning in text. However, verifying such reasoning at scale will incur the infeasible cost of manually going over the text outputs. Furthermore, we would like the model's reasoning to be consistent with its outputted answer, in order to trust the presented reasoning. For these considerations, we would like our models to output a _concise reasoning_ or explanation for its answer that can be _automatically verified_. In particular, we desire reasoning in the form of explanations that are
* Verifiable: For ease of evaluating correctness of the outputted reasoning, and
* Concise: For scalability of verification. Manually going through text reasoning can quickly get cumbersome.
For instance, instead of a text description of an algorithm to solve a problem, a Python implementation of the algorithm would be a more concise explanation for the reasoning behind the algorithm1. Similarly, a simple linear model or decision tree explaining the answers of a black-box neural network also achieves the same goal Ribeiro et al. (2016). Concise explanations can provide clearer insights into the reasoning abilities of models, and verifiable explanations aid
interpretability and help foster trust in models, in line with explainable AI (Samek et al., 2019).
In this work we use concise and verifiable explanations to study reasoning abilities of LLMs in math word problems (MWPs). LLMs have been shown to achieve good zero-shot accuracy on many numeric MWP benchmarks (Kojima et al., 2022). Chain-of-thought-like ideas encourage LLMs to first generate a step-by-step explanation (in text) before generating the answer. However, this does not satisfy the criteria of being concise or easily verifiable2. We address reasoning by considering symbolic versions of numeric MWPs, because a symbolic expression can be viewed as a concise explanation for a numeric answer and can also be automatically evaluated. Thus in this reasoning framework for MWPs, we require an LLM to output both a numeric answer and a concise symbolic expression, such that we have: (1) high accuracy for the predicted numeric answer, (2) high alignment of the symbolic expression with the predicted numeric answer. While most prior studies focus on goal (1), we argue that goal (2) is equally important for interpretability of these models and for trusting their reasoning. Our main finding is that LLMs can also do reasonably well on goal (2), either by generating a numeric answer and symbolic explanation together, or by generating the answer first and then a post-hoc symbolic explanation. In this context, we make the following contributions:
Footnote 2: It is not uncommon for the outputted reasoning to be inconsistent with the final answer
**Symbolic evaluation.** We construct a symbolic version of the SVAMP dataset (Patel et al., 2021) called SVAMP-Sym to evaluate LLMs. Firstly we find, perhaps surprisingly, that GPT-3's davinci-002 model already achieves good zero-shot accuracy on symbolic problems (\(64.2\)%), comparable to the numeric accuracy of \(68.9\)%. Secondly, this observation provides a simple way to get good accuracy and alignment for numeric problems by first solving symbolic versions and then substituting back the values for variables. This approach generates the numeric answer and a symbolic explanation in one go, thus trivially achieving3 an accuracy of \(64.2\)% and alignment of \(100\)%.
Footnote 3: If a βcalculatorβ can evaluate symbolic expressions.
**Self-prompting.** There are two key drawbacks with the above approach: (a) symbolic accuracy of \(64.2\)% is lower than the numeric accuracy (\(68.9\)%), (b) alignment of symbolic expressions, as post-hoc explanation to the original numeric answers, is very low (\(\sim 50\)%). To get a better post-hoc explanation, we propose a novel _self-prompting_ approach that first prompts the LLM with the numeric problem and its response to the problem, and then asks it to solve the symbolic problem; see Figure 1. Self-prompting significantly improves alignment with numeric answers to \(74\)% (a \(24\)% absolute improvement). Surprisingly, self-prompting also improves the symbolic accuracy to \(71.7\)%, higher than both the raw numeric and symbolic accuracies of \(68.9\)% and \(64.2\)% respectively. This suggests that self-prompting has an ensembling effect.
We perform further ablation studies and analyses and hope that these insights will aid future work on using LLMs for reasoning problems.
### Related Work
Language models like GPT-3 (Brown et al., 2020) and MLMs like BERT (Devlin et al., 2019) have demonstrated impressive emergent behaviors (Wei et al., 2022) at scale. For math problems, Minerva (Lewkowycz et al., 2022) was fine-tuned from PaLM (Chowdhery et al., 2022) to do well on many MWP benchmarks. Instead of fine-tuning, Wei et al. (2022) uses in-context learning and finds that asking the model to "think step by step" (CoT prompting) improves few-shot accuracy on MWPs; Kojima et al. (2022) verify this for zero-shot setting as well, which is the focus of our work.
There is limited theoretical work for the downstream success of LMs (Saunshi et al., 2021; Xie et al., 2022) and the emergent behaviors of LLMs through scaling laws (Kaplan et al., 2020). Our idea of self-prompting is motivated by the efficacy of in-context learning (Brown et al., 2020) and prompting (Liu et al., 2023) in LMs. The ensembling effect of self-prompting idea could be related to self-calibration abilities of LMs (Kadavath et al., 2022). Finally, Ho et al. (2022) survey the progress of LMs on various notions of reasoning; we consider a weaker notion of "concise post-hoc explanations" here.
## 2 Math Word Problems with LLMs
### SVAMP-Sym Dataset
We choose the SVAMP dataset (Patel et al., 2021) for testing LMs on MWPs because it provides numeric answers in the form of numeric
expressions (rather than just numeric values). This lets us automatically convert the dataset into a symbolized version, without manual annotation. The main idea is to replace all occurrences of numbers in the problem statement with newly introduced variables, e.g. (w,x,y,z). Appendix A provides more details on the dataset construction. The dataset is released in [https://github.com/vedantgaur/Symbolic-MWP-Reasoning](https://github.com/vedantgaur/Symbolic-MWP-Reasoning).
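To illustrate the construction, a simplified sketch of the symbolization step is given below. This is not the exact script used to build SVAMP-Sym; the regular expression, the variable order, and the handling of repeated numbers are assumptions made only for illustration.

```python
import re

def symbolize(problem: str, variables=("w", "x", "y", "z")):
    """Replace each number occurrence in a numeric MWP with a fresh variable.

    Returns the symbolic problem text and the mapping from variables back to
    the original numeric values. Assumes the problem contains at most
    len(variables) number occurrences.
    """
    mapping = {}

    def repl(match):
        var = variables[len(mapping)]
        mapping[var] = float(match.group(0))
        return var

    symbolic = re.sub(r"\d+(?:\.\d+)?", repl, problem)
    return symbolic, mapping

text = "John had 5 apples and bought 3 more. How many apples does he have?"
print(symbolize(text))
# ('John had w apples and bought x more. ...', {'w': 5.0, 'x': 3.0})
```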
### Querying and Evaluating LMs
Broadly, our evaluation pipeline has four phases: (1) get a verbose response from the LLM for the math problem, (2) prompt the LLM to extract just the answer (number or symbolic expression) from its initial response, (3) refine the extracted answer using a novel _filtering_ step, (4) compare the filtered answer to the ground-truth answer.
Initial response.We query the LM with the problem statement and an optional CoT prompt, i.e. "Q: <Problem> A:" or "Q: <Problem> A: Let's think step by step.". <Problem> could either be a numeric or symbolic problem. Table 3 summarizes the prompts used for various settings.
Answer extraction.Since the LLM outputs a long text response (Figure 1), we use an extraction prompt to isolate the answer, similar to Kojima et al. (2022). We query the LM with the transcript so far, followed by the question and the prompt "The final answer (only the number) is:" to isolate the numeric answer. Table 3 has the similar prompt for symbolic problems.
Answer filtering.The extraction prompt does not always isolate the final answer and sometimes outputs a sentence, especially for symbolic problems. Thus we add a LM-independent filtering step which includes stripping escape sequences, removing commas, de-latexifying equations, picking the longest symbolic expression, among others; more details in Appendix C.2.
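A minimal sketch of this kind of filtering is shown below; the actual rules are listed in Appendix C.2, and the regular expressions here are simplified assumptions rather than the exact ones used in our pipeline.

```python
import re

# token: a variable name or a number; expression: tokens joined by + - * /
_EXPR = re.compile(r"(?:[wxyz]|\d+(?:\.\d+)?)(?:\s*[-+*/]\s*(?:[wxyz]|\d+(?:\.\d+)?))*")

def filter_answer(raw: str) -> str:
    """Heuristically isolate the final expression from an extracted answer.

    A simplified stand-in for the filtering rules in Appendix C.2: strip
    whitespace and commas, drop simple LaTeX delimiters, then keep the
    longest substring that parses as a variable/number expression.
    """
    ans = raw.strip().replace(",", "")
    ans = re.sub(r"\\\(|\\\)|\\\[|\\\]|\$", "", ans)   # crude de-latexification
    candidates = _EXPR.findall(ans)
    return max(candidates, key=len).strip() if candidates else ans

print(filter_answer("The final answer is \\( w - x - y \\)."))   # -> "w - x - y"
```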
Answer evaluation.We compare the filtered answer to the ground-truth answer (symbolized expression or numeric value). Since there are multiple ways to express the same symbolic expression (e.g. "w + (y + x)" and "w + x + y"), we compare two expressions through their evaluations on 20 random variable assignments. If they match on all 20 assignments, we adjudge them to be equivalent, making a (reasonable) assumption that 20 random assignments will avoid false positives.
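The equivalence check itself is straightforward to automate. The sketch below assumes the expressions use Python operator syntax and variables drawn from (w,x,y,z); the sampling range and tolerance are illustrative choices.

```python
import random

def expressions_equivalent(expr1: str, expr2: str,
                           variables=("w", "x", "y", "z"),
                           trials: int = 20, tol: float = 1e-6) -> bool:
    """Adjudge two symbolic expressions equivalent if they agree on `trials`
    random variable assignments (false positives are then very unlikely)."""
    for _ in range(trials):
        env = {v: random.uniform(1, 100) for v in variables}
        try:
            v1 = eval(expr1, {"__builtins__": {}}, env)
            v2 = eval(expr2, {"__builtins__": {}}, env)
        except Exception:       # unparsable expression -> not equivalent
            return False
        if abs(v1 - v2) > tol * max(1.0, abs(v1), abs(v2)):
            return False
    return True

print(expressions_equivalent("w + (y + x)", "w + x + y"))   # True
print(expressions_equivalent("w - x", "x - w"))             # False (almost surely)
```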
## 3 Experimental Results
We pick 150/1000 examples from the SVAMP dataset (due to budget constraints) and run each example 5 times. We use GPT-3's davinci-002 model with temperature 0.0 for (mostly) deterministic outputs, with a max token length of 256.
### Numeric and Symbolic Evaluations
We discuss the accuracies for solving numeric and symbolic math problems from SVAMP and SVAMP-Sym respectively.
Numeric accuracy.The zero-shot numeric accuracies, both with the chain-of-thought (CoT) prompt and without it (vanilla), are presented in Table 1; they are \(68.9\)% and \(65.6\)% respectively. This good performance is unsurprising given prior work (Kojima et al., 2022). Our accuracies are \(\sim 5\)-7% higher than Kojima et al. (2022), due in part to better answer extraction and filtering.
Symbolic accuracy.We also evaluate raw symbolic problems from SVAMP-Sym in the vanilla and CoT settings with 3 natural choices for variables: (w,x,y,z), (i,j,k,l) and (p,q,r,s).
Figure 1: LMs can be queried to solve numeric/symbolic math problems. Self-prompting includes the numeric problem and the LMβs solution to it before passing the symbolic problem. This encourages the model to output the answer that aligns with the numeric answer. The symbolic expression wβxβy serves as a concise explanation/reasoning for the numeric answer of 2.
Firstly we observe, in Table 1, that GPT-3 can achieve pretty high symbolic accuracies with variables (w,x,y,z): the vanilla and CoT settings achieve \(59.7\)% and \(64.2\)% respectively, which is just \(4\)-\(5\)% lower than the numeric accuracy. Furthermore, we notice that variables (i,j,k,l) have slightly worse accuracy than other variable settings, possibly because (w,x,y,z) and (p,q,r,s) are more popular choices for variables in the training data of language models.
Effect of filtering.We report the accuracies without the filtering step in Table 1; these are the (-F) entries. While there is a \(4\)-\(5\)% drop in the numeric accuracy without filtering, the drop is \(12\)-\(14\)% for symbolic problems, suggesting that filtering is much more crucial for symbolic problems4. Our extraction and filtering steps still have issues and there is scope for improvement.
Footnote 4: Intuitively it makes sense that extracting an expression/equation is harder than extracting a single number
### Reasoning and Alignment
While prior work only cares about the accuracy on MWPs, we also study the reasoning abilities of LLMs by requiring them to generate a concise explanation for numeric answers in the form of symbolic expressions. We evaluate "reasoning ability" through an alignment metric that checks if the outputted numeric answer and symbolic expression compute to the same value. In general there is no consistent zero-shot method to return a perfectly aligned symbolic expression. A natural attempt to generate such an expression is to directly solve the symbolic version of the numeric problem. However this approach has very low alignment, i.e. the symbolic output does not reflect the way in which the model solved the numeric problem. Specifically in Table 1, the average alignment score for raw symbolic outputs is only \(52.9\%\) and \(51.2\%\) for Vanilla and CoT respectively. This motivates self-prompting.
### Self-prompting
In order to improve alignment, we propose a two-step procedure that first inputs the numeric MWP and the LM's response to it, followed by the symbolic version of the MWP. In particular the prompt looks like "Q: <Numeric Question> A: <Model Response> Q: <Symbolic Question> A:". Given the in-context tendencies of LMs, we hope that this encourages the symbolic response to imitate the numeric response and thus return a well aligned expression. We find in Table 1 that this approach (termed **SP**) indeed improves the alignment by \(\sim 10\)% over the naive approach.
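Concretely, the self-prompt is just a concatenation of the numeric transcript and the symbolic question. The helper below is a sketch: the wording of the optional alignment prompt (described next) is a paraphrase of the prompt in Table 3, and query_lm is a placeholder for a davinci-002 completion call (temperature 0, 256 max tokens) that is not defined here.

```python
def build_self_prompt(numeric_q: str, numeric_response: str, symbolic_q: str,
                      align: bool = False, numeric_answer: str = "") -> str:
    """Assemble the self-prompting input of Section 3.3 (sketch)."""
    prompt = f"Q: {numeric_q} A: {numeric_response} "
    if align:   # second stage: explicitly ask the model to match its numeric answer
        prompt += (f"Your final expression should evaluate to the numeric "
                   f"answer {numeric_answer}. ")
    prompt += f"Q: {symbolic_q} A: Let's think step by step."
    return prompt

# query_lm(build_self_prompt(...)) would then return the symbolic response.
```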
We take this one step further: whenever the numeric and symbolic answers do not align, we add another "alignment prompt" before the symbolic problem that explicitly asks the model to copy the numeric answer; see Table 3 for the exact format. Results in the **SP+AP** column of Table 1 verify that this leads to another \(11\)%
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Numeric} & \multicolumn{4}{c|}{Symbolic} \\ \multicolumn{1}{c|}{} & & \multicolumn{4}{c|}{(w,x,y,z)} & \multicolumn{2}{c|}{(p,q,r,s)} & (i,j,k,l) \\ \cline{2-9} \multicolumn{1}{c|}{} & Evaluation & Raw (-F) & Raw (-F) & **SP** (-F) & **SP** + **AP** & Raw & Raw \\ \hline \hline \multirow{3}{*}{Accuracy} & _Vanilla_ & 65.6 (61.6) & 59.7 (47.6) & 61.9 (40) & **68.3** & 62.3 & 53.5 \\ \cline{2-9} & _CoT_ & 68.9 (65.9) & 64.2 (48.8) & 67.9 (48.6) & **71.7** & 64.4 & 58.4 \\ \hline \multirow{3}{*}{Alignment} & _Vanilla_ & - & 52.9 (40.7) & 60.3 (40) & **64.9** & 56.3 & 44.7 \\ \cline{2-9} & _CoT_ & - & 51.2 (39.1) & 63.1 (44.9) & **74** & 51.9 & 47.1 \\ \hline \hline Similarity & _Vanilla_ & - & 27.8 & 44.2 & **49.8** & 27.1 & 26.8 \\ \cline{2-9} (BLEU) & _CoT_ & - & 21.3 & 53.9 & **57.6** & 22.7 & 21.4 \\ \hline Similarity & _Vanilla_ & - & 56.5 & 65.2 & **71.3** & 56.8 & 55.4 \\ \cline{2-9} (Levenshtein) & _CoT_ & - & 44.9 & 75.6 & **79.8** & 45.4 & 43.9 \\ \hline \end{tabular}
\end{table}
Table 1: Zero-shot accuracy and alignment evaluations using GPT-3. All values are reported in %. "Raw" refers to evaluation on the SVAMP and (SVAMP-Sym) dataset for numeric (symbolic) MWPs; (-F) refers to the output before the filtering step. "SP" is the new self-prompting method and "SP + AP" refers to two-stage self-prompting where an additional "Alignment Prompt" is added when needed; see Section 3.3. CoT prompting consistently elicits higher accuracy from the model for numeric and symbolic problems. While accuracy and alignment only look at the final answers, we also measure similarity between the full responses for numeric and symbolic problems. As evident, self-prompting significantly improves the similarity under BLEU score and Levenshtein metric; Appendix B.1 has more details on these metrics.
improvement over **SP** and \(\sim 22\)% improvement over raw symbolic. Surprisingly we find that **SP+AP** has higher accuracy than raw numeric and raw symbolic, suggesting a "best of both worlds" or ensembling phenomenon in action. Further analysis in Figure 7 reveals how self-prompting combines the benefits of numeric and symbolic.
We also compute the similarity between the full numeric and symbolic responses. Table 1 reveals that the average similarity is significantly higher for **SP** and **SP+AP** compared to raw symbolic. So not only do the answers align more but also the full text responses are very similar. Histograms of similarity scores can be found in Appendix B.1. Additional analyses and results can be found in Appendix B.
## 4 Conclusions and Future Work
This paper studies reasoning in LLMs for MWPs and results suggest that LMs are good at zero-shot solving of symbolic MWPs, and that this ability can lead to concise explanations. Self-prompting emerges as a promising idea to generate better explanations and the ensembling effect demonstrated by it can potentially have other applications (left for future work). Alignment with self-prompting, while significantly better than with raw symbolic outputs, still has a lot of scope for improvement. Aspects that are not considered are few-shot learning of explanations and the role of temperature, which could improve accuracy and alignment. Finally the notion of "concise explanation" to study reasoning can have implications beyond MWPs.
Broader Impact Statement.Given the incredible successes of LLMs, it is becoming increasingly important to study why they work and how to debug them when they are wrong. There are ongoing debates and discussions about whether LMs are simply "stochastic parrots" (Bender et al., 2021) or whether they actually "understand" language. Besides, there are also privacy concerns (Carlini et al., 2021) associated with LLMs trained on extremely large corpora. Our work attempts to formalize a weak notion of "reasoning" in math problems that could help with improving the interpretability, and thus trustworthiness, of such models. This is extremely important if LLMs are to be deployed in real-life applications. That said, any preliminary notion or definition of "reasoning in LLMs", including the one in this paper, should be taken with a healthy dose of skepticism.
Acknowledgments.We thank Misha Khodak for comments on an earlier version of this draft. We also thank the anonymous ACL reviewers for useful suggestions.
|
2304.11618 | Modality-Aware Negative Sampling for Multi-modal Knowledge Graph
Embedding | Negative sampling (NS) is widely used in knowledge graph embedding (KGE),
which aims to generate negative triples to make a positive-negative contrast
during training. However, existing NS methods are unsuitable when multi-modal
information is considered in KGE models. They are also inefficient due to their
complex design. In this paper, we propose Modality-Aware Negative Sampling
(MANS) for multi-modal knowledge graph embedding (MMKGE) to address the
mentioned problems. MANS could align structural and visual embeddings for
entities in KGs and learn meaningful embeddings to perform better in
multi-modal KGE while keeping lightweight and efficient. Empirical results on
two benchmarks demonstrate that MANS outperforms existing NS methods.
Meanwhile, we make further explorations about MANS to confirm its
effectiveness. | Yichi Zhang, Mingyang Chen, Wen Zhang | 2023-04-23T11:22:17Z | http://arxiv.org/abs/2304.11618v1 | # Modality-Aware Negative Sampling for Multi-modal Knowledge Graph Embedding
###### Abstract
Negative sampling (NS) is widely used in knowledge graph embedding (KGE), which aims to generate negative triples to make a positive-negative contrast during training. However, existing NS methods are unsuitable when multi-modal information is considered in KGE models. They are also inefficient due to their complex design. In this paper, we propose Modality-Aware Negative Sampling (MANS) for multi-modal knowledge graph embedding (MMKGE) to address the mentioned problems. MANS could align structural and visual embeddings for entities in KGs and learn meaningful embeddings to perform better in multi-modal KGE while keeping lightweight and efficient. Empirical results on two benchmarks demonstrate that MANS outperforms existing NS methods. Meanwhile, we make further explorations about MANS to confirm its effectiveness.
## I Introduction
Knowledge graphs (KGs) [1, 2] represent real-world knowledge in the form of triple \((h,r,t)\), which indicates the entity \(h\) and the entity \(t\) are connected by the relation \(r\). Multi-modal KGs (MMKGs) are the KGs that consist of rich modal information such as images and text. Nowadays, KGs and MMKGs have been widely used in AI-related tasks like question answering [3], recommendation systems [4], language modeling [5] and telecom fault analysis [6].
Meanwhile, KGs as well as MMKGs are usually far from complete and comprehensive because many triples are unobserved, which restricts the application of KGs and makes knowledge graph completion (KGC) a significant task. Knowledge graph embedding (KGE) [7, 8, 9, 10] is a popular and universal approach for KGC, which represents entities and relations of KGs in a continuous low-dimension vector space. In the usual paradigm, KGE models would design a score function to estimate the plausibility of triples with entity and relation embeddings. These embeddings are structural embeddings since they can encode information about triple structures. As for MMKGs, embedding-based methods can still work by utilizing multi-modal information. Nevertheless, existing multi-modal KGE (MMKGE) [11, 12, 13] methods design additional embeddings to represent the modal information, which would also participate in the score function.
Negative sampling (NS) [7] is a widely used technology for training KGE models, which aims to generate manual negative triples by randomly replacing entities for positive-negative contrast. NS would guide the KGE model to give higher scores for the positive triples. An outstanding NS strategy would obviously improve the performance of KGE models to discriminate the triple plausibility.
Though existing NS methods [14, 15, 16, 17, 18, 19] have tried different ways to obtain high-quality negative samples, they have one drawback that cannot be ignored: they are designed for general KGE models and **underperform in MMKGE**. As for MMKGE, entities may have multiple heterogeneous embeddings such as visual and structural embeddings. However, NS for the general KGE models will treat multiple embeddings of an entity as a whole and replace them together with embeddings of another entity, which we think is entity-level. Such design implicitly assumes that different embeddings of an entity have been aligned and model could distinguish the two embeddings of each entity, which weakens the model's capability of aligning different embeddings and results in less semantic information being learned by the embeddings. Besides, we should also take the efficiency of the method into account while considering the multi-modal scenario, as those existing approaches design many complex modules (e.g. GAN [14], large-scale caches [15], manual rules [18], entity clustering [19]) to sample high-quality negative samples. We think they are over-designed and make the NS method computationally expensive.
To address the mentioned challenges, we propose **M**odality-**A**ware **N**egative **S**ampling (MANS for short) strategy for MMKGE. MANS is a lightweight but effective NS strategy designed for MMKGE. We first propose visual NS (MANS-V for short), a modal-level sampling strategy that would sample only negative visual features for contrast. We employ MANS-V to achieve modality alignment for multiple entity embeddings and guide the model to learn more semantic information from different perspectives by utilizing multi-modal information. We further extend MANS-V to three combined strategies, called two-stage, hybrid, and adaptive negative sampling respectively. All of the NS methods make up MANS together. Our Contribution could be summarized as follows:
* To the best of our knowledge, MANS is the first work focusing on the negative sampling strategy for multi-modal knowledge graph embedding.
* In MANS, we propose MANS-V to align different modal information. Furthermore, we extend it to three combined NS strategies with different settings.
* We conduct comprehensive experiments on two knowledge graph completion tasks with two MMKG datasets. Experiment results illustrate that MANS could outperform the baseline methods in various tasks.
* We further carry out extensive analysis to explore several research questions about MANS to demonstrate the details of MANS.
## II Related Works
### _Knowledge Graph Embedding_
Knowledge Graph Embedding (KGE) [20] is an important research topic for knowledge graphs, which focuses on embedding the entities and relations of KGs into low-dimensional continuous vector space.
General KGE methods utilize the triple structure to embed entities and relations and follow the research paradigm that defines a score function to measure the plausibility of triples in the given KG. Negative sampling (NS) is a significant technology widely used when training KGE models. During training, positive triples should get higher scores than those negative triples, which are generated by NS.
Previous KGE methods can be roughly divided into several categories. Translation-based methods like TransE [7] and TransH [21] model the triples as a translation from head to tail entities with a distance-based scoring function. Semantic-based methods like DistMult [9] and ComplEx [8] use similarity-based scoring functions. Neural network-based methods [22, 23] employ neural networks to capture features from entities and relations and score the triples. Several KGE methods model triples with various mathematical structures, such as RotatE [10] and ConE [24]. Some recent methods [25, 26] combine rule learning / analogical inference and KGE together to enhance the interpretability of KGE models.
### _Multi-modal Knowledge Graph Embedding_
The KGE methods mentioned before are unimodal approaches as they only utilize the structure information from KGs. For multi-modal knowledge graphs (MMKGs), modal information such as images and text should also be taken into account as additional embeddings for each entity and relation. Existing methods usually extract modal information using pre-trained models and project it into the same representation space as the structural information. IKRL [11] applies VGG [27] to extract visual information from entities' images and scores a triple with both visual and structural information using TransE [7]. TransAE [13] also employs TransE as the score function and extracts modal information with a multi-modal auto-encoder. Mosselly et al. [28] and Pezeshkpour et al. [12] use VGG [27] and GloVe [29] to separately extract visual and textual information and then fuse them into multi-modal information. Recently, RSME [30] focused on preserving truly valuable images and discarding the useless ones with three gates.
### _Negative Sampling in Knowledge Graph Embedding_
Negative sampling (NS) aims to generate negative triples that do not appear in existing KGs. Those negative triples participate in the training process of KGE models by being contrasted with positive triples. Therefore, many NS methods have been proposed to generate high-quality negative samples. Normal NS [7] randomly replaces the head or tail entity with another entity with equal probability. KBGAN [31] and IGAN [14] apply Generative Adversarial Networks (GANs) [32] to select harder negative samples. NSCaching [15] stores high-quality negative triples in a cache during training to achieve efficient sampling. NS-KGE [17] employs a unified square loss to avoid NS during training; it is no-sampling in name but effectively all-sampling. SANS [16] utilizes the graph structure to sample high-quality negative samples. CAKE [18] constructs commonsense from KGs to guide NS. EANS [19] proposes a clustering-based negative sampling strategy with an auxiliary loss function. VBKGC [33] proposes a twins negative sampling method for different parts of the score function.
However, many of the NS methods have their shortcomings which leads to the dilemma of NS for MMKGE. On the one hand, they are not lightweight enough as extra modules are introduced in the models. On the other hand, they are designed for unimodal knowledge graph embedding. Such a strategy performs well in general KGE because each entity has only one structural embedding. As many MMKGE models define multiple embeddings for each entity, the alignment between different embeddings is also significant but ignored by existing methods.
## III Problem Formulation
In this section, we introduce the basic pipeline of multi-modal knowledge graph embedding (MMKGE) in three steps. We first formally describe what an MMKG is and the embeddings we design for the MMKGE task. Then we introduce the modules of the MMKGE model in detail. Finally, we present the training objective of the MMKGE model and emphasize the process of negative sampling.
### _Basic Definition_
A MMKG can be denoted as \(\mathcal{G}_{M}=(\mathcal{E},\mathcal{R},\mathcal{I},\mathcal{T})\), where \(\mathcal{E},\mathcal{R},\mathcal{I},\mathcal{T}\) are the entity set, relation set, image set and triple set. Entities in \(\mathcal{E}\) may have 0 to any number of images in \(\mathcal{I}\), and the image set of entity \(e\) is denoted as \(I_{e}\).
We denote \(\mathbf{e}_{s}\) and \(\mathbf{e}_{v}\) as the structural embedding and visual embedding for an entity \(e\), respectively. Therefore, the entity \(e\) can be represented by two embedding vectors \(\mathbf{e}_{s},\mathbf{e}_{v}\). Besides, we denote \(\mathbf{r}\) as the structural embedding of relation \(r\).
### _MMKGE Framework_
In this paper, we employ a general MMKGE framework as the backbone model. The model architecture is shown in Figure 1, which consists of a visual encoder and a score function.
#### III-B1 Visual Encoder
Visual encoder, which is denoted as \(E_{img}\), aims to capture the visual feature of entities and project them into the same representation space of structural embeddings. For those entities with more than one image, we use mean pooling to aggregate the visual feature. The visual embedding \(\mathbf{e}_{v}\) of entity \(e\) can be denoted as:
\[\mathbf{e}_{v}=\mathbf{W}\times\frac{1}{|I_{e}|}\sum_{I_{e}^{k}\in I_{e}}E_{img}( I_{e}^{k}) \tag{1}\]
where \(\mathbf{W}\in\mathbb{R}^{d\times d_{v}}\) is the projection matrix, \(d\) is the dimension of both structural and visual embedding and \(d_{v}\) is the dimension of the output dimension of the visual encoder. In this paper, we employ pre-trained VGG-16 [27] as the visual encoder.
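Assuming the VGG-16 features of all images have been pre-extracted, Eq. (1) amounts to a mean pooling followed by a linear projection. The NumPy sketch below illustrates this; the random initialization of \(\mathbf{W}\) is purely illustrative (in the model it is a trainable parameter).

```python
import numpy as np

d, d_v = 128, 4096                          # structural dim and VGG-16 feature dim
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(d, d_v))   # projection matrix of Eq. (1), trainable in practice

def visual_embedding(image_features: np.ndarray) -> np.ndarray:
    """Eq. (1): mean-pool the (|I_e|, d_v) VGG features of an entity's images
    and project them into the d-dimensional structural space."""
    pooled = image_features.mean(axis=0)    # (d_v,)
    return W @ pooled                       # (d,)

# e.g. an entity with 3 images, each already encoded by a pre-trained VGG-16
feats = rng.normal(size=(3, d_v))
print(visual_embedding(feats).shape)        # (128,)
```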
#### III-B2 Score Function
The score function is denoted as \(\mathcal{F}(h,r,t)\). Both the structural embeddings \(\mathbf{e}_{s}\) and visual embeddings \(\mathbf{e}_{v}\) will be considered in the score function. The overall score function consists of four parts, aiming to learn the embeddings in the same vector space, which can be denoted as: \(\mathcal{F}(h,r,t)=f(\mathbf{h_{s}},\mathbf{r},\mathbf{t_{s}})+f(\mathbf{h_{v}},\mathbf{r},\mathbf{t_{v}})+f(\mathbf{h_{s}},\mathbf{r},\mathbf{t_{v}})+f(\mathbf{h_{v}},\mathbf{r},\mathbf{t_{s}})\), where \(f\) is the TransE score [7].
Besides, the overall score function \(\mathcal{F}(h,r,t)\) can be divided into two parts, unimodal scores, and multi-modal scores. The unimodal scores only consider single-modal embedding of entities while multi-modal scores use both structural embeddings and visual embeddings. Under such criteria, \(f(\mathbf{h_{s}},\mathbf{r},\mathbf{t_{s}}),f(\mathbf{h_{v}},\mathbf{r}, \mathbf{t_{v}})\) are unimodal scores and \(f(\mathbf{h_{s}},\mathbf{r},\mathbf{t_{v}}),f(\mathbf{h_{v}},\mathbf{r}, \mathbf{t_{s}})\) are multi-modal scores. Such a distinction of scores will play an important role in adaptive NS.
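A sketch of the four-part score is given below. The framework only specifies that \(f\) is the TransE score; the L1 norm and the negative-distance sign convention (so that more plausible triples receive higher scores) are assumptions made for illustration.

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """TransE plausibility: the smaller ||h + r - t||, the higher the score."""
    return -np.linalg.norm(h + r - t, ord=1)

def overall_score(h_s, h_v, r, t_s, t_v) -> float:
    """Four-part score of Section III-B2: two unimodal + two multi-modal terms."""
    unimodal   = transe_score(h_s, r, t_s) + transe_score(h_v, r, t_v)
    multimodal = transe_score(h_s, r, t_v) + transe_score(h_v, r, t_s)
    return unimodal + multimodal
```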
### _Sampling and Training_
The general target of a MMKGE model is to give higher scores for the positive triples and lower scores for the negative triples. In another word, the MMKGE model would discriminate the plausibility of a given triple by its score, which is widely used in KGC to predict the missing triples. Margin-rank loss is a general training objective extensively used in the MMKGE model [11, 12]. It could be denoted as:
\[\mathcal{L}=\sum_{(h,r,t)\in\mathcal{T}}\sum_{(h^{\prime},r^{\prime},t^{\prime})\in\mathcal{T}^{\prime}}\max(0,\gamma-\mathcal{F}(h,r,t)+\mathcal{F}(h^{\prime},r^{\prime},t^{\prime})) \tag{2}\]
where \(\gamma\) is the margin, \((h,r,t)\) is a positive triple in the KG and \((h^{\prime},r^{\prime},t^{\prime})\) is a negative triple.
Besides, a given KG usually consists of the observed facts, which are all positive triples. We need to generate the negative triple \((h^{\prime},r^{\prime},t^{\prime})\) manually. Such a process is what we call negative sampling (NS). In normal NS, either the head or the tail entity is randomly replaced. In this setting, \(h^{\prime},t^{\prime}\) are still entities in \(\mathcal{E}\). This also means that normal NS is an entity-level sampling strategy, as it samples negative entities for a given positive triple. As we have analyzed in the previous section, normal NS is suitable for general KGE models but fails when it comes to MMKGE. In the next section, we will introduce our NS methods to sample better negative triples.
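For concreteness, normal NS together with a per-pair version of the loss in Eq. (2) can be sketched as follows (scores are assumed to be higher for more plausible triples, consistent with the convention above):

```python
import random

def normal_negative_sample(triple, entities):
    """Normal NS: corrupt the head or the tail with equal probability."""
    h, r, t = triple
    if random.random() < 0.5:
        return random.choice(entities), r, t
    return h, r, random.choice(entities)

def margin_rank_loss(pos_score: float, neg_score: float, gamma: float = 4.0) -> float:
    """Eq. (2) evaluated for a single positive/negative pair."""
    return max(0.0, gamma - pos_score + neg_score)
```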
## IV Methodology
Normal NS is an entity-level strategy, as all the embeddings of the selected entity are replaced by the negative ones. However, our approach differs. In this section, we briefly introduce our **M**odality-**A**ware **N**egative **S**ampling (MANS). MANS is based on visual negative sampling (MANS-V for short), which is a modal-level NS strategy that samples negative visual embeddings for a finer contrast. We further combine MANS-V and normal NS with a sampling proportion \(\beta\) and propose three more comprehensive NS settings. They are two-stage negative sampling (MANS-T), hybrid negative sampling (MANS-H), and adaptive negative sampling (MANS-A).
### _Visual Negative Sampling (MANS-V)_
MANS-V aims to sample the negative visual embeddings that do not belong to the current entity to teach the model to identify the visual features corresponding to each entity, which could achieve the modality alignment between structural and visual embeddings. In our context, modality alignment means that the model could identify the relations between the two modal embeddings, which we think is of great importance in MMKGE.
MANS-V is a modal-level method that would sample negative visual embeddings. The negative triple \((h^{\prime},r^{\prime},t^{\prime})\) generated by MANS-V preserves the original structural embeddings but the visual embedding of the replaced entity is changed. For example, if we replace head entity \(h\) with another entity \(h^{\prime}\), the embeddings of \(h^{\prime}\) used during training is \(\mathbf{h_{s}},\mathbf{h}^{\prime}_{v}\). For tail entity, the embeddings of \(t^{\prime}\) is \(\mathbf{t_{s}},\mathbf{t}^{\prime}_{v}\). In MANS-V, the replaced entity is a virtual negative entity that doesn't exist in \(\mathcal{E}\). An intuitive example of MANS-V is shown in Figure 2.
Thus, MANS-V is a more fine-grained strategy compared with normal sampling. It changes the granularity of NS from the whole entity to the single modal embedding of the entity. By sampling only negative visual embeddings, MANS-V could achieve alignment between different modal embeddings for an entity.
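Reusing the overall_score sketch above, the only change with respect to normal NS is which embedding is replaced. In the sketch below, emb_s, emb_v and rel are assumed to be dictionaries mapping entity/relation ids to vectors:

```python
import random

def mans_v_negative_score(h, r, t, emb_s, emb_v, rel, entities):
    """Score of a MANS-V negative triple (sketch).

    The corrupted side keeps its structural embedding but borrows the visual
    embedding of a randomly drawn entity, i.e. a 'virtual' entity outside the KG.
    """
    neg = random.choice(entities)
    if random.random() < 0.5:   # corrupt the head's visual feature only
        return overall_score(emb_s[h], emb_v[neg], rel[r], emb_s[t], emb_v[t])
    # otherwise corrupt the tail's visual feature only
    return overall_score(emb_s[h], emb_v[h], rel[r], emb_s[t], emb_v[neg])
```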
KGE models would learn to align the two embeddings for each entity by MANS-V. However, learning to discriminate the plausibility of triples is still significant, which could be achieved by normal NS. Hence, we consider that MANS-V could play an important role as the auxiliary to enhance the
Fig. 1: Our Multi-modal KGE model architecture
normal NS and we propose three combination strategies for comprehensive training.
### _Two-Stage Negative Sampling (MANS-T)_
MANS-T divides the training process into two different stages:
* In **Stage1**, MANS-V is applied to train the model. The model would learn to align different modal embeddings in this stage.
* In **Stage2**, we employ normal sampling and train the model to discriminate the plausibility. As the structural and visual embeddings are aligned inside each entity, the model would learn better in this stage.
We assume that the total training epoch is \(M\) and the proportion of MANS-V is \(\beta_{1}\), then the turning point for stage switching is:
\[M_{0}=\beta_{1}\times M \tag{3}\]
which means training epochs \([0,M_{0}]\) form **Stage1** and \([M_{0}+1,M]\) form **Stage2**. It is easy to see that normal NS and MANS-V are two special cases of MANS-T, obtained when \(\beta_{1}=0\) (normal NS) or \(\beta_{1}=1\) (MANS-V).
### _Hybrid Negative Sampling (MANS-H)_
As MANS-T divides the NS from the view of training epochs, MANS-H would apply two sampling strategies in each training epoch. Compared with the two-stage setting, MANS-H is more progressive.
In each mini-batch of one training epoch, we assume that the batch size is \(N\) and the MANS-V proportion is \(\beta_{2}\), for each triple, we sample \(k\) negative samples, then the total number of negative samples is \(kN\) and the total negative triples generated by MANS-V is:
\[N_{0}=\beta_{2}\times kN \tag{4}\]
which means that \(N_{0}\) negative samples in a mini-batch are randomly generated by MANS-V and the others are generated by normal NS. During the whole training process, MANS-H is applied and the negative samples are thus blended from multiple sampling strategies. In MANS-H, the sampling proportion \(\beta_{2}\) is a tunable hyper-parameter. As in the two-stage setting, MANS-H becomes normal NS when \(\beta_{2}=0\) and MANS-V when \(\beta_{2}=1\).
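Per mini-batch, Eq. (4) can be realized by randomly flagging \(N_{0}\) of the \(kN\) negatives for MANS-V, as in the sketch below (for MANS-T, the analogous decision is instead made once per epoch via Eq. (3)):

```python
import random

def hybrid_sample_flags(batch_size: int, k: int, beta2: float):
    """MANS-H: mark which of the k*N negatives in a mini-batch use MANS-V.

    Eq. (4): N0 = beta2 * k * N negatives are drawn with MANS-V, the rest
    with normal NS; the assignment within the batch is random.
    """
    total = batch_size * k
    n0 = int(round(beta2 * total))
    flags = [True] * n0 + [False] * (total - n0)
    random.shuffle(flags)
    return flags     # True -> MANS-V, False -> normal NS
```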
### _Adaptive Negative Sampling (MANS-A)_
MANS-A is an improved version of MANS-H that no longer requires tuning the sampling proportion. MANS-A changes the proportion \(\beta_{3}\) adaptively, and the adaptive sampling proportion \(\beta_{3}\) is determined by the different scores of the training data.
As mentioned before, the overall score function \(\mathcal{F}(h,r,t)\) can be divided into unimodal scores and multi-modal scores. We could denote the two parts as:
\[\mathcal{F}_{unimodal}(h,r,t)=f(\mathbf{h_{s}},\mathbf{r},\mathbf{t_{s}})+f( \mathbf{h_{v}},\mathbf{r},\mathbf{t_{v}}) \tag{5}\]
\[\mathcal{F}_{multimodal}(h,r,t)=f(\mathbf{h_{s}},\mathbf{r},\mathbf{t_{v}})+ f(\mathbf{h_{v}},\mathbf{r},\mathbf{t_{s}}) \tag{6}\]
We define a function \(\Phi(h_{i},r_{i},t_{i})\) to discriminate whether the triple \((h_{i},r_{i},t_{i})\) need MANS-V. The function \(\Phi(h_{i},r_{i},t_{i})\) is defined as:
\[\Phi(h_{i},r_{i},t_{i})=\left\{\begin{array}{ll}0&\mathcal{F}_{multimodal} \geq\mathcal{F}_{unimodal}\\ 1&\mathcal{F}_{multimodal}<\mathcal{F}_{unimodal}\end{array}\right. \tag{7}\]
which means that, when the multi-modal score \(\mathcal{F}_{multimodal}(h_{i},r_{i},t_{i})\) is lower than the unimodal score, MANS-V will be applied, since MANS-V aligns the different modal embeddings and thereby raises the multi-modal scores. Hence, the adaptive proportion \(\beta_{3}\) for each batch is defined as:
\[\beta_{3}=\frac{1}{N}\sum_{i=1}^{N}\Phi(h_{i},r_{i},t_{i}) \tag{8}\]
where \((h_{i},r_{i},t_{i})(i=1,2,\dots,N)\) are the batch data. With sampling proportion \(\beta_{3}\), MANS-H is then applied during the training of this batch. The biggest difference between MANS-A and MANS-H is that the sampling proportion \(\beta_{3}\) is determined adaptively and no longer needs to be tuned, which reduces the workload of searching for better hyper-parameters.
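The adaptive proportion is therefore just the fraction of triples in the batch whose multi-modal score falls below their unimodal score. A minimal sketch, assuming the two partial scores of Eqs. (5)-(6) have already been computed for every triple:

```python
def adaptive_proportion(batch_scores) -> float:
    """Eqs. (7)-(8): fraction of batch triples whose multi-modal score is
    below their unimodal score, used as the MANS-V proportion beta_3.

    `batch_scores` is an iterable of (unimodal, multimodal) score pairs.
    """
    flags = [1.0 if multi < uni else 0.0 for uni, multi in batch_scores]
    return sum(flags) / len(flags)
```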
## V Experiments
In this section, we will present the detailed experiment settings and the experimental results to show the advantages of MANS. We design several experiments to answer the following research questions (RQs):
* **RQ1:** Could MANS outperform the baseline methods and achieve new state-of-the-art (SOTA) results in various KGC tasks?
* **RQ2:** As a new hyper-parameter \(\beta\) is introduced in our method, how to select better sampling proportion \(\beta_{i}(i=1,2)\) for MANS-T and MANS-H?
* **RQ3:** Is MANS-A a reasonable and effective design? What is the trend of the sampling proportion \(\beta_{3}\) in MANS-A during training?
* **RQ4:** Is MANS efficient and lightweight compared with existing NS methods?
* **RQ5:** Could MANS learn better embeddings with more semantic information compared with normal NS?
Fig. 2: An example of MANS-V. Only negative visual feature is sampled compared with normal negative sampling. We further combine MANS-V with normal NS to get three more NS strategies.
### _Datasets_
In our experiments, we use two well-known MMKG datasets (FB15K, DB15K with extra images of entities) proposed in [34], the statistical information of the datasets is shown in Table I.
### _Evaluation and Implementation Details_
#### IV-B1 Tasks and Evaluation Protocol
We evaluate our method on two tasks, link prediction and triple classification [7]. The link prediction task aims to predict the missing entity for a given query \((h,r,?)\) or \((?,r,t)\) with the KGE model. We evaluate the link prediction task by mean rank (MR) [7], mean reciprocal rank (MRR) [10] and Hit@K (K=1,3,10) [7]. Besides, we follow the filter setting [7], which removes candidate triples that have already appeared in the datasets.
The triple classification task predicts whether a given triple \((h,r,t)\) is true or not. Thus, we evaluate this task with accuracy (Acc), precision (P), recall (R), and F1-score (F1), which are the common metrics for binary classification tasks.
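For reference, a compact sketch of the filtered ranking metrics is given below; it assumes that higher scores indicate more plausible triples and ignores tie-breaking details.

```python
import numpy as np

def filtered_rank(scores: np.ndarray, target: int, known: set) -> int:
    """Rank of the target entity after discarding other known-true candidates
    (the 'filter' setting of [7])."""
    target_score = scores[target]
    better = sum(1 for e, s in enumerate(scores)
                 if s > target_score and e != target and e not in known)
    return better + 1

def summarize(ranks):
    ranks = np.asarray(ranks, dtype=float)
    return {"MR": ranks.mean(),
            "MRR": (1.0 / ranks).mean(),
            **{f"Hit@{k}": (ranks <= k).mean() for k in (1, 3, 10)}}
```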
#### IV-B2 Baselines
For the link prediction task, we employ normal NS [7] and several recent SOTA NS methods as the baselines. They are No-Samp [17], NSCaching [15], SANS [16], CAKE [18], and EANS [19], which enhance normal NS from different perspectives. We use their official code to obtain the baseline results. For the triple classification task, we compare the performance of MANS with normal NS only, as the other NS methods neither focus on this task nor provide corresponding implementations.
#### IV-B3 Experiments Settings
For experiments, we set both structural embedding and visual embedding size \(d_{e}=128\) for each model. The dimension of visual features captured by a pre-trained VGG-16 model is \(d_{v}=4096\). For those entities which have no image, we employ Xavier initialization [35] for their visual features. We set the number of negative triples to 1 and train each model with 1000 epochs.
During training, we divide each dataset into 400 batches and apply IKRL [11] as the MMKGE model. We use the default Adam optimizer for optimization and tune the hyperparameters of our model with grid search. The margin \(\gamma\) is tuned in \(\{4.0,6.0,8.0,12.0\}\) and learning rate \(\eta\) is tuned in \(\{0.001,0.01,0.1,1\}\). Besides, for two-stage and MANS-H, we tuned the sampling proportion \(\beta_{1},\beta_{2}\) from \(0.1\) to \(1.0\).
For baselines, we have taken full account of the parameter settings in the original paper [15, 17, 18, 19]. All the experiments are conducted on one Nvidia GeForce 3090 GPU. Our code of MANS is released in [https://github.com/zjukg/MANS](https://github.com/zjukg/MANS).
### _RQ1: Main Results_
To answer RQ1, we conduct experiments on two KGC tasks. The evaluation results of the link prediction task are shown in Table II and the triple classification results are in Table III. From the experimental results, We can conclude the following points:
**Poor performance of the baselines.** We could observe that existing NS methods have poor performance and they are even worse than the normal NS. According to our previous analysis, these NS methods are designed for general KGE models and are unsuitable for the multi-modal scenario where modal information is carefully considered. They could not align different embeddings of each entity and get bad performance in MMKGE.
**The outperformance of MANS.** MANS could achieve better link prediction results compared with baselines. For example, MANS-A achieves much better Hit@1 on FB15K compared with baselines (from 0.318 to 0.353, a relative improvement of 9.9%). Besides, MANS performs particularly well in Hit@1 and MRR, which are sensitive to high-rank results [15]. This means that MANS can largely improve the accurate discriminatory ability of the model by aligning structural and visual embeddings.
**Necessity and effectiveness of MANS-V.** According to the previous section, MANS-V is designed to align different modal information. Though it does not perform better than baseline methods, MANS-V is the fundamental component of the other three settings of MANS. Besides, we could prove with such a result that both modal alignment and positive-negative discrimination are important for MMKGE, which could be achieved by MANS-V and normal NS respectively. MANS-T, MANS-H, and MANS-A could perform better because they combine the advantages of both. In summary, MANS-V is a necessary design for MMKGE.
**Comparison of different MANS settings.** As we propose three different settings of MANS, we could observe from Table II that all of the three settings (MANS-T, MANS-H, MANS-A) outperform the baseline methods. Experiment results demonstrate that MANS-H and MANS-A would perform better than MANS-T. Meanwhile, MANS-H and MANS-A have their advantages on different datasets and metrics, but the overall difference of link prediction performance between MANS-H and MANS-A is not notable. Nevertheless, the proportion \(\beta_{2}\) of MANS-H needs to be tuned several times to find the best choice while MANS-A could adaptively change the proportion \(\beta_{3}\) during training and get good performance without hyperparameter tuning. For the mentioned reasons, we believe that the overall performance of MANS-A is better than MANS-T and MANS-H. MANS-A is free of proportion tuning and could achieve outstanding results.
**Universality of MANS.** From Table III, we could see that three settings of MANS could achieve better triple classification results on four metrics compared with normal NS. Besides, MANS-A outperforms MANS-T and MANS-H on accuracy and F1-score. In summary, the results show that our
design of MANS could benefit the MMKGE model in various KGC tasks such as link prediction and triple classification, which means that MANS is a universal approach for better KGC.
### _RQ2: Proportion Selection_
Though MANS achieves good performance on link prediction and other tasks, a fact that cannot be ignored is that MANS might require more effort to tune the sampling proportion (\(\beta_{1},\beta_{2}\) for MANS-T and MANS-H respectively). The optimal proportions for MANS-T and MANS-H are shown in Table IV, and we further explore the impact of the sampling proportion on the link prediction task. This answers RQ2 and guides us in choosing the best sampling proportion.
It is worth mentioning that when \(\beta_{i}=0.0\)\((i=1,2)\), both MANS-T and MANS-H degrade to normal negative sampling. When \(\beta_{i}=1.0\)\((i=1,2)\), both of them become MANS-V. Thus, these two settings can serve as baselines for comparison.
We can observe that the trends of MANS-T and MANS-H are almost identical. For MANS-T, the best proportion is \(\beta_{1}=0.3\), and for MANS-H the best proportion is \(\beta_{2}=0.4\). In the range of 0.1 to 0.4, MMKGE models trained with MANS-T and MANS-H perform better. Meanwhile, we find that as the proportion of visual negative sampling increases (when \(\beta_{1},\beta_{2}\geq 0.5\)), the model performance degrades and can become worse than that of normal negative sampling. In the range of 0.1 to 0.4, the performance of each strategy changes only slightly most of the time. Therefore, the best choice of sampling proportion is most likely in this range.
### _RQ3: Adaptive Setting_
From the previous experiments, we could observe that the performance of MANS-A is close to and slightly better than MANS-H most of the time. In this section, we will dive into MANS-A and make further exploration to illustrate the rationality of its design and answer RQ3.
We record the adaptive proportion \(\beta_{3}\) for each batch of data in each training epoch and then calculate the average
Fig. 3: Impact of sampling proportion \(\beta_{1},\beta_{2}\) for two-stage (MANS-T, the red line) and hybrid (MANS-H, the blue line) negative sampling. The experiments are based on FB15K dataset and TransE base score function.
adaptive proportion of all the batches for each epoch. The trends of adaptive sampling proportion \(\beta_{3}\) for different models and datasets in each training epoch are shown in Figure 4.
According to Figure 4, the adaptive proportion \(\beta_{3}\) usually becomes stable during the training process, so we focus on the stable part of each curve. Compared with the optimal proportions for MANS-H (reported in the previous section), we find that the design of MANS-A is reasonable, as the adaptive proportion \(\beta_{3}\) in MANS-A is close to the optimal settings in MANS-H. For example, the stable sampling proportions on FB15K and DB15K are nearly 0.4 and 0.3, which are close to the optimal or sub-optimal \(\beta_{2}\) of MANS-H. This suggests that the adaptive setting MANS-A finds a suitable proportion \(\beta_{3}\) that is consistent with MANS-H but free of tuning. In summary, the design of MANS-A is reasonable and effective.
### _RQ4: Efficiency_
As we mentioned earlier, MANS is more lightweight and efficient than existing methods because it is free of over-design. Therefore, we evaluate the training speed of each NS method and list the results in Table V, aiming to answer RQ4. The experiments are conducted on a single Nvidia GeForce RTX 3090 GPU.
From the table, we can find that the training speed of MANS is close to that of normal NS. Even the most complicated setting, MANS-A, is more efficient than several baselines. Though No-Samp [17] is very fast, it fails to perform well in MMKGE according to the link prediction results in Table II.
We also list the extra modules introduced by each method. Unlike the random walks in SANS [16] and the entity clustering in EANS [19], our visual NS is not computationally intensive, which is the reason why MANS is lightweight. Besides, we have found in practice that NSCaching [15] and No-Samp [17] consume large amounts of memory and GPU resources, 1.13\(\times\) (NSCaching [15]) and 6.65\(\times\) (No-Samp [17]) that of MANS-A. In summary, MANS is lightweight and efficient and makes the training process faster compared with other NS methods. We have achieved a significant improvement on two KGC tasks with this lightweight design.
We conclude that MANS could guide the MMKGE model to learn meaningful and semantically rich embeddings.
## VI Conclusion
In this paper, we propose MANS, a modality-aware negative sampling method for MMKGE, which focuses on the alignment between different modal embeddings of a MMKGE model. MANS is the first NS method designed especially for MMKGE while achieving efficiency and effectiveness to solve the problems of existing NS methods. We first propose visual negative sampling (MANS-V) and extend MANS-V to three different settings called MANS-T, MANS-H, and MANS-A. Besides, we conduct comprehensive experiments on two public benchmarks and two classic tasks to demonstrate the performance of MANS compared with several state-of-the-art NS methods. In the future, we plan to conduct more in-depth research about MMKGE from two perspectives: (1) developing more robust solutions to achieve modal alignment and fusion of MMKG, (2) attempting to make co-design of the MMKGE model and NS method for better performance.
## Acknowledgement
This work is funded by Zhejiang Provincial Natural Science Foundation of China (No. LQ23F020017) and Yongjiang Talent Introduction Programme (No. 2022A-238-G).
|
2306.07603 | Numerical Simulation of Power-Law Fluid Flow in a Trapezoidal Cavity
using the Incompressible Finite-Difference Lattice Boltzmann Method | In this paper, a numerical investigation of power-law fluid flow in the
trapezoidal cavity has been conducted by incompressible finite-difference
lattice Boltzmann method (IFDLBM). By designing the equilibrium distribution
function, the Navier-Stokes equations (NSEs) can be recovered exactly. Through
the coordinate transformation method, the body-fitted grid in physical region
is transformed into a uniform grid in computational region. The effect of
Reynolds (Re) number, the power-law index $n$ and the vertical angle {\theta}
on the trapezoidal cavity are investigated. According to the numerical results,
we come to some conclusions. For low Re number Re=100, it can be found that the
behavior of power-law fluid flow becomes more complicated with the increase of
n. And as vertical angle {\theta} decreases, the flow becomes smooth and the
number of vortices decreases. For high Re numbers, the flow development becomes
more complex, the number and strength of vortices increase. If the Reynolds
number increases further, the power-law fluid will changes from steady flow to
periodic flow and then to turbulent flow. For the steady flow, the lager the
{\theta}, the more complicated the vortices. And the critical Re number from
steady to periodic state decreases with the decrease of power-law index n. | Xinmeng Chen, Zhenhua Chai, Yong Zhao, Baochang Shi | 2023-06-13T08:00:00Z | http://arxiv.org/abs/2306.07603v1 | Numerical Simulation of Power-Law Fluid Flow in a Trapezoidal Cavity using the Incompressible Finite-Difference Lattice Boltzmann Method
###### Abstract
In this paper, a numerical investigation of power-law fluid flow in the trapezoidal cavity has been conducted by incompressible finite-difference lattice Boltzmann method (IFDLBM). By designing the equilibrium distribution function, the Navier-Stokes equations (NSEs) can be recovered exactly. Through the coordinate transformation method, the body-fitted grid in physical region is transformed into a uniform grid in computational region. The effect of Reynolds (\(Re\)) number, the power-law index \(n\) and the vertical angle \(\theta\) on the trapezoidal cavity are investigated. According to the numerical results, we come to some conclusions. For low \(Re\) number \(Re=100\), it can be found that the behavior of power-law fluid flow becomes more complicated with the increase of n. And as vertical angle \(\theta\) decreases, the flow becomes smooth and the number of vortices decreases. For high Re numbers, the flow development becomes more complex, the number and strength of vortices increase. If the Reynolds number increases further, the power-law fluid will changes from steady flow to periodic flow and then to turbulent flow. For the steady flow, the lager the \(\theta\), the more complicated the vortices. And the critical Re number from steady to periodic state decreases with the decrease of power-law index \(n\).
keywords: Finite difference lattice Boltzmann method, Coordinate transformation, Power-law fluid, Trapezoidal cavity +
Footnote β : journal: Elsevier
## 1 Introduction
Over the last couple of decades, a tremendous amount of research has been carried out on solving the NSEs, such as the finite difference method [1], the finite element method [2], and the finite volume method [3]. In particular, the lattice Boltzmann method (LBM), as a novel alternative method, has attracted wide attention [4; 5; 6; 7]. As a classic benchmark problem described by the NSEs, the two-dimensional lid-driven flow in a square cavity has also been widely investigated [8; 9; 10; 11], including the study of high-Reynolds-number flow [12] and the three-dimensional lid-driven cavity flow [13]. On this basis, lid-driven flows in cavities of different shapes were also simulated. In 2006, Patil et al. [14] applied the lattice Boltzmann equation to simulate the lid-driven flow in a two-dimensional rectangular deep cavity. They studied several features of the flow, such as the location and strength of the primary vortex and the corner-eddy dynamics. Then, Cheng et al. [15] investigated the vortex structure in a lid-driven rectangular cavity at different depth-to-width ratios and Reynolds numbers by a lattice Boltzmann method. Zhang et al. [16] used the lattice BGK model to simulate lid-driven flow in a two-dimensional trapezoidal cavity. In addition, Li et al. [17] presented accurate and efficient calculations of the flow inside a triangular cavity for high Reynolds numbers. And Erturk et al. [18] studied the numerical solutions of 2-D steady incompressible flow in a driven skewed cavity.
However, all the above works only studied Newtonian fluids. Non-Newtonian fluids are widely observed in nature and industrial production, such as petroleum, food, geophysics, lubricants, chemistry, and hydrogeology, to name but a few [19]. Unlike that of a Newtonian fluid, the relationship between shear stress and shear strain rate of a non-Newtonian fluid is nonlinear. As a result, a non-Newtonian fluid will show shear-thickening and shear-thinning characteristics. Due to the complicated constitutive equation of non-Newtonian fluids, it is a challenge to investigate non-Newtonian fluid behavior by numerical methods. Recently, many efforts have been made to simulate non-Newtonian fluid flows through the LBM in various computational geometries [20, 21, 22, 23, 24, 25, 26], such as non-Newtonian flow through porous media [20], the filling of expanding cavities by Bingham fluids [21] and the non-Newtonian pseudo-plastic fluid in a micro-channel [25]. In addition, Gabbanelli et al. [22] studied shear-thinning and shear-thickening fluids in parallel and reentrant geometries by the LBM. Boy et al. [23] presented a second-order accurate LBM for simulations of power-law fluid in a two-dimensional rigid pipe flow. Yoshino et al. [24] developed an LBM to investigate the power-law model in a reentrant corner geometry and flows inside a three-dimensional porous structure. Mendu and Das [27] applied the LBM to study power-law fluids inside a two-dimensional enclosure driven by the motion of the two facing lids. Psihogios et al. [28] investigated non-Newtonian shear-thinning fluid flow in a three-dimensional digitally reconstructed porous domain. Hamedi and Rahimian [25] simulated the power-law model for pseudo-plastic fluids in a micro-channel by using the LBM. Wang and Ho [26] investigated shear-thinning non-Newtonian blood flows through the LBM. Chai et al. used the multi-relaxation-time lattice Boltzmann method (MRT-LBM) to simulate generalized Newtonian fluid flow. Qi et al. [29] investigated the wake effect on the interaction between a particle and power-law fluid flow by a parallel three-dimensional LBM. They also investigated the interaction between fluid rheology and bed properties through the LBM [30].
For non-Newtonian fluids in a two-dimensional cavity, Li et al. [31] used the MRT-LBM to study power-law fluid flows in a square cavity. Besides, the MRT-LBM has been applied to simulate power-law fluid in square enclosures with undulation in Ref. [32]. At present, the non-Newtonian fluid flow problem in a trapezoidal cavity has not been investigated. Obviously, this problem is more complex than that in a square cavity. The angle of the trapezoid, the power-law index and the \(Re\) number are the key factors affecting the flow. On the one hand, a more stable model is required for the simulation due to the shear-thickening and shear-thinning characteristics of non-Newtonian fluids. On the other hand, a curved boundary treatment method would have to be applied for the boundary of the trapezoid. However, the stability of the standard LBM is lower than that of the finite-difference LBM (FDLBM) [33], and it is also difficult to implement a body-fitted grid for the trapezoidal cavity [34]. Using a curved boundary treatment means that the computational domain is expanded into a rectangle, which increases the computation. In view of the above problems, we note that some works have been conducted on the FDLBM to simulate complex flows in order to improve numerical accuracy and geometric flexibility, including three-dimensional incompressible flows [35], two-phase liquid-vapor flows [36], natural convection in some special geometries [37, 38] and blood flow [39]. Hence, the FDLBM is more suitable for simulating the power-law fluid in the trapezoidal cavity. Compared to conventional numerical methods, one of the characteristics of the IFDLBM is that the shear tensor can be computed locally without taking space derivatives of the velocity field [24, 40]. For transport phenomena in complex geometries, the LBM is more efficient than the finite difference method [41] and the finite volume method [42], and the IFDLBM, as a mesoscopic numerical method, also possesses this characteristic. Besides, compared to the LBM, the FDLBM is more stable, and the flow details of non-Newtonian fluids can be better captured even at high Re numbers through the FDLBM [33, 34]. Moreover, since space and time are decoupled in the FDLBM, it is convenient to use a body-fitted mesh to simulate the trapezoidal cavity problem.
In this paper, the incompressible FDLBM (IFDLBM) is proposed as the core solver to simulate power-law flow in a two-dimensional trapezoidal cavity. The rest of the paper is organized as follows. The physical model and the governing equations are presented in Sec. 2. In Sec. 3, we give the IFDLBM and describe the calculation process. Through coordinate transformation, the general formula of the body-fitted mesh transformation is given in Sec. 4. Then, the code validation and the grid independence testing are performed in Sec. 5. In Sec. 6, we show the numerical results and discuss the fluid behavior. Finally, a brief summary is given in Sec. 7.
## 3 The incompressible finite-difference lattice Boltzmann method
In this section, the incompressible FDLBM is presented, where the collision term is discretized by the BGK model [34]. For the power-law fluid flow, we consider the BGK model combined with the FDLBM. First, we begin with the discrete velocity Boltzmann equation (DVBE) without a force term,
\[\partial_{t}f_{i}+\mathbf{c}_{i}\cdot\nabla f_{i}=-\frac{1}{\lambda}(f_{i}-f_{i}^{ eq}), \tag{11}\]
where \(f_{i}(\mathbf{x},t)\) is the density distribution function for particles moving with velocity \(\mathbf{c}_{i}\) at position \(\mathbf{x}\) and time \(t\), \(\lambda\) is the relaxation time, and \(f_{i}^{eq}=f_{i}^{eq}(P,\mathbf{u})\) is the equilibrium distribution function. Based on the previous FDLBM [33; 34], the two evolution equations of the IFDLBM can be written as
\[\hat{f}_{i}(\mathbf{x},t+\Delta t)=\hat{f}_{i}^{+}(\mathbf{x},t)-\Delta t\mathbf{c}_{i} \cdot\nabla f_{i}(\mathbf{x},t+\frac{1}{2}\Delta t), \tag{12}\]
and
\[\bar{f}_{i}(\mathbf{x},t+\frac{1}{2}\Delta t)=\bar{f}_{i}^{+}(\mathbf{x},t)-\frac{1}{ 2}\Delta t\mathbf{c}_{i}\cdot\nabla\bar{f}_{i}^{+}(\mathbf{x},t), \tag{13}\]
where
\[\hat{f}_{i}(\mathbf{x},t)=f_{i}(\mathbf{x},t)-\frac{1}{2}\Delta t\left(-\frac{1}{\lambda}(f_{i}(\mathbf{x},t)-f_{i}^{eq}(\mathbf{x},t))\right), \tag{14}\] \[\hat{f}_{i}^{+}(\mathbf{x},t)=f_{i}(\mathbf{x},t)+\frac{1}{2}\Delta t\left(-\frac{1}{\lambda}(f_{i}(\mathbf{x},t)-f_{i}^{eq}(\mathbf{x},t))\right), \tag{15}\]
and
\[\bar{f}_{i}(\mathbf{x},t)=f_{i}(\mathbf{x},t)-\frac{1}{4}\Delta t\left(-\frac{1}{\lambda}(f_{i}(\mathbf{x},t)-f_{i}^{eq}(\mathbf{x},t))\right), \tag{16}\] \[\bar{f}_{i}^{+}(\mathbf{x},t)=f_{i}(\mathbf{x},t)+\frac{1}{4}\Delta t\left(-\frac{1}{\lambda}(f_{i}(\mathbf{x},t)-f_{i}^{eq}(\mathbf{x},t))\right). \tag{17}\]
The gradient terms \(\nabla f_{i}\) and \(\nabla\bar{f}_{i}^{+}\) can be discretized by a mixed difference scheme,
\[\nabla\Pi_{j}^{*}=\frac{\partial\Pi_{j}^{*}}{\partial\chi_{\alpha}}\bigg{|}_ {m}=\eta\frac{\partial\Pi_{j}^{*}}{\partial\chi_{\alpha}}\bigg{|}_{c}+(1- \eta)\frac{\partial\Pi_{j}^{*}}{\partial\chi_{\alpha}}\bigg{|}_{u}, \tag{18}\]
where \(\Pi_{j}^{*}\) represents \(f_{j}\) or \(\bar{f}_{j}^{+}\), and the parameter \(\eta\in[0,1]\). The terms \(\frac{\partial\Pi_{j}^{*}}{\partial\chi_{\alpha}}\bigg{|}_{u}\) and \(\frac{\partial\Pi_{j}^{*}}{\partial\chi_{\alpha}}\bigg{|}_{c}\) denote the second-order upwind and central-difference schemes, respectively, which can be expressed as
\[\frac{\partial\Pi_{j}^{*}}{\partial\chi_{\alpha}}\bigg{|}_{c}= \frac{\Pi_{j}^{*}(\chi_{\alpha}+\Delta\chi_{\alpha},t)-\Pi_{j}^{*}(\chi_{\alpha}-\Delta\chi_{\alpha},t)}{2\Delta\chi_{\alpha}}, \tag{19a}\] \[\frac{\partial\Pi_{j}^{*}}{\partial\chi_{\alpha}}\bigg{|}_{u}= \begin{cases}\frac{3\Pi_{j}^{*}(\chi_{\alpha},t)-4\Pi_{j}^{*}(\chi_{\alpha}-\Delta\chi_{\alpha},t)+\Pi_{j}^{*}(\chi_{\alpha}-2\Delta\chi_{\alpha},t)}{2\Delta\chi_{\alpha}},&\text{if}\quad c_{i\alpha}\geq 0,\\ -\frac{3\Pi_{j}^{*}(\chi_{\alpha},t)-4\Pi_{j}^{*}(\chi_{\alpha}+\Delta\chi_{\alpha},t)+\Pi_{j}^{*}(\chi_{\alpha}+2\Delta\chi_{\alpha},t)}{2\Delta\chi_{\alpha}},&\text{if}\quad c_{i\alpha}<0.\end{cases} \tag{19b}\]
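As an illustration of Eqs. (18)-(19), the mixed derivative along one coordinate direction can be written as in the following sketch (Python, assuming a uniform grid along \(\chi_{\alpha}\); the function name and array layout are our own and are not taken from the authors' code):

```python
import numpy as np

def mixed_gradient(Pi, dchi, c_ia, eta=0.5):
    """Mixed central / second-order upwind derivative of Pi_j^* along chi_alpha.

    Pi   : 1D array of Pi_j^* sampled on a uniform grid along chi_alpha
    dchi : grid spacing Delta chi_alpha
    c_ia : discrete-velocity component along chi_alpha (its sign picks the upwind side)
    eta  : blending parameter in [0, 1], as in Eq. (18)
    Only interior nodes are filled; boundary nodes come from the boundary scheme.
    """
    dPi_c = np.zeros_like(Pi)
    dPi_u = np.zeros_like(Pi)
    # central difference, Eq. (19a)
    dPi_c[1:-1] = (Pi[2:] - Pi[:-2]) / (2.0 * dchi)
    # second-order upwind difference, Eq. (19b)
    if c_ia >= 0:
        dPi_u[2:] = (3.0 * Pi[2:] - 4.0 * Pi[1:-1] + Pi[:-2]) / (2.0 * dchi)
    else:
        dPi_u[:-2] = -(3.0 * Pi[:-2] - 4.0 * Pi[1:-1] + Pi[2:]) / (2.0 * dchi)
    return eta * dPi_c + (1.0 - eta) * dPi_u
```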
Through the Chapman-Enskog (CE) analysis in the Appendix, the equilibrium distribution function \(f_{i}^{eq}(\mathbf{x},t)\) can be designed as
\[f_{i}^{eq}=\bar{\omega}_{i}\frac{P}{c_{s}^{2}}+\omega_{i}\rho_{0}+\omega_{i}\left[\frac{\mathbf{c}_{i}\cdot\mathbf{u}}{c_{s}^{2}}+\frac{\mathbf{uu}:(\mathbf{c}_{i}\mathbf{c}_{i}-c_{s}^{2}\mathbf{I})}{2c_{s}^{4}}\right], \tag{20}\]
where \(\bar{\omega}_{i}\) and \(\omega_{i}\) are the weight coefficients determined by the discrete velocity model. The macroscopic quantities \(\mathbf{u}\) and \(P\) are calculated as
\[\mathbf{u}=\sum_{i}\mathbf{c}_{i}f_{i}=\sum_{i}\mathbf{c}_{i}\bar{f}_{i}=\sum_{i}\mathbf{c}_{i} \hat{f}_{i}, \tag{21}\]
Figure 1: Geometry of the trapezoidal cavity
\[P=\frac{c_{s}^{2}}{\bar{\omega}_{0}}\left(-\sum_{i\neq 0}\hat{f_{i}}+(1-\omega_{0})\rho_{0}+\frac{\omega_{0}}{2c_{s}^{2}}\mathbf{u}\cdot\mathbf{u}\right)=\frac{c_{s}^{2}}{\bar{\omega}_{0}}\left(-\sum_{i\neq 0}\bar{f_{i}}+(1-\omega_{0})\rho_{0}+\frac{\omega_{0}}{2c_{s}^{2}}\mathbf{u}\cdot\mathbf{u}\right) \tag{22}\]
In the present work, we take the D2Q9 lattice model for simulation. The discrete velocities in D2Q9 model can be expressed as
\[\mathbf{c}_{j}=\begin{pmatrix}0&1&0&-1&0&1&-1&-1&1\\ 0&0&1&0&-1&1&1&-1&-1\end{pmatrix}c, \tag{23a}\] \[\omega_{0}=4/9,\quad\omega_{j=1-4}=1/9,\quad\omega_{j=5-8}=1/36,\] (23b) \[\bar{\omega}_{0}=-5/9,\quad\bar{\omega}_{j=1-4}=1/9,\quad\bar{\omega}_{j=5-8}=1/36. \tag{23c}\]
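For concreteness, the D2Q9 lattice of Eq. (23), the equilibrium of Eq. (20), and the moments of Eqs. (21)-(22) can be assembled as in the sketch below (Python; we assume \(c=1\) and the standard D2Q9 value \(c_{s}^{2}=1/3\), and the helper names are ours):

```python
import numpy as np

# D2Q9 discrete velocities and weights of Eq. (23), with c = 1 assumed
c = np.array([[0, 1, 0, -1, 0, 1, -1, -1, 1],
              [0, 0, 1, 0, -1, 1, 1, -1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
wbar = np.array([-5/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0        # lattice sound speed squared for D2Q9 (assumption: c = 1)
rho0 = 1.0             # reference density

def feq(P, u):
    """Equilibrium distribution of Eq. (20) at one grid point; u is a 2-vector."""
    cu = c[0]*u[0] + c[1]*u[1]
    uu = u[0]**2 + u[1]**2
    return wbar*P/cs2 + w*rho0 + w*(cu/cs2 + (cu**2 - cs2*uu)/(2*cs2**2))

def macroscopic(fhat):
    """Velocity and pressure from Eqs. (21)-(22) at one grid point."""
    u = c @ fhat
    P = cs2/wbar[0] * (-fhat[1:].sum() + (1 - w[0])*rho0 + w[0]/(2*cs2)*(u @ u))
    return u, P
```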
For the power-law fluid, the viscosity depends on the strain rate tensor. Different from traditional methods, the local nature of the FDLBM makes it possible to calculate the strain rate tensor locally at each grid point, rather than estimating it from velocity gradients. According to Eq. (A.9) in the Appendix, the strain rate tensor can be calculated from the second moments of \(f^{ne}\),
\[\nabla\mathbf{u}+\nabla\mathbf{u}^{T}=-\frac{1}{\lambda c_{s}^{2}}\sum_{i}\mathbf{c}_{i}\mathbf{c}_{i}f_{i}^{ne}, \tag{24}\]
where
\[f_{i}^{ne}=f_{i}-f_{i}^{eq}. \tag{25}\]
To simulate power-law fluids, the relaxation time of the FDLBM is related to the viscosity. Because the apparent viscosity of a power-law fluid varies with position, the modified relaxation time \(\lambda\) can be rewritten as
\[\lambda=\mu/c_{s}^{2}, \tag{26}\]
where \(\mu\) can be obtained from Eqs. (24), (5) and (6). Combining Eq. (5), Eq. (24) and Eq. (26), it can be concluded that \(\lambda=\mu(\lambda)/c_{s}^{2}\), which means that the calculation of \(\lambda\) is implicit. It cannot be solved analytically, so \(\mu\) is approximated using the \(\lambda\) of the previous time step in the simulation. For steady-state problems or practical applications, this approximation makes the equation explicit and efficient to apply without sacrificing accuracy. For power-law fluid flows in the lid-driven cavity, Eq. (5) can be non-dimensionalized to produce the following dimensionless number analogous to the \(Re\) number:
\[Re=\frac{U^{2-n}L^{n}}{\mu}, \tag{27}\]
where \(U\) is the lid velocity (the maximum velocity in the cavity) and \(L\) is the cavity height.
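The local viscosity update described above can be sketched as follows (Python). Since Eqs. (5)-(6) are not repeated here, we assume the usual power-law constitutive relation \(\mu=m\dot{\gamma}^{n-1}\), with the consistency \(m\) fixed from Eq. (27) as \(m=U^{2-n}L^{n}/Re\); the shear-rate definition and the fallback for \(\dot{\gamma}=0\) are our own choices:

```python
import numpy as np

def update_viscosity(f, f_eq, lam_old, cvec, cs2, m, n):
    """Apparent viscosity and relaxation time at one grid point (sketch).

    f, f_eq : length-9 arrays of distributions and equilibria
    lam_old : relaxation time from the previous time step (closes the implicit
              relation lambda = mu(lambda)/cs2 mentioned in the text)
    cvec    : 2x9 array of discrete velocities; m, n : power-law parameters.
    """
    fne = f - f_eq                                        # Eq. (25)
    # strain-rate tensor from the second moment of f^ne, Eq. (24)
    S = np.empty((2, 2))
    for a in range(2):
        for b in range(2):
            S[a, b] = -np.sum(cvec[a] * cvec[b] * fne) / (2.0 * lam_old * cs2)
    gamma_dot = np.sqrt(2.0 * np.sum(S * S))              # shear-rate magnitude (our choice)
    mu = m * gamma_dot**(n - 1.0) if gamma_dot > 0.0 else m
    return mu, mu / cs2                                   # Eq. (26): lambda = mu / cs2
```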
## 4 Coordinate conversion and boundary processing
One feature of the IFDLBM is its geometric flexibility. First, the physical and computational domains are denoted by \((x,y)\) and \((\xi,\eta)\), respectively, and they satisfy the following relationship,
\[\xi=\xi(x,y),\quad\eta=\eta(x,y). \tag{28}\]
Then Eqs. (12) and (13) can be transformed into generalized curvilinear coordinates. For the physical and computational domains, the following condition should be satisfied
\[\begin{bmatrix}\xi_{x}&\xi_{y}\\ \eta_{x}&\eta_{y}\end{bmatrix}=\frac{1}{J}\begin{bmatrix}y_{\eta}&-x_{\eta}\\ -y_{\xi}&x_{\xi}\end{bmatrix}, \tag{29}\]
where \(J\) is the Jacobian of the transformation, which can be obtained by
\[J=x_{\xi}y_{\eta}-x_{\eta}y_{\xi}. \tag{30}\]
According to the chain rule, the gradient term of Eqs. (12) and (13) can be rewritten as
\[\mathbf{c}_{i}\cdot\nabla f_{i} =c_{ix}\frac{\partial f_{i}}{\partial x}+c_{iy}\frac{\partial f_{i }}{\partial y} \tag{31}\] \[=c_{ix}\left(\frac{\partial f_{i}}{\partial\xi}\xi_{x}+\frac{ \partial f_{i}}{\partial\eta}\eta_{x}\right)+c_{iy}\left(\frac{\partial f_{i}} {\partial\xi}\xi_{y}+\frac{\partial f_{i}}{\partial\eta}\eta_{y}\right)\] \[=(c_{ix}\xi_{x}+c_{iy}\xi_{y})\frac{\partial f_{i}}{\partial\xi} +(c_{ix}\eta_{x}+c_{iy}\eta_{y})\frac{\partial f_{i}}{\partial\eta}\] \[=c_{i\xi}\frac{\partial f_{i}}{\partial\xi}+c_{iy}\frac{\partial f _{i}}{\partial\eta},\]
\[\mathbf{c}_{i}\cdot\nabla\bar{f}_{i}^{+}=c_{i\xi}\frac{\partial\bar{f}_{i}^{+}}{ \partial\xi}+c_{i\eta}\frac{\partial\bar{f}_{i}^{+}}{\partial\eta}. \tag{32}\]
In the computational domain, \(\mathbf{c}_{i}=(c_{i\xi},c_{i\eta})=(c_{ix}\xi_{x}+c_{iy}\xi_{y},c_{ix}\eta_{x}+c_ {iy}\eta_{y})\) is the microscopic contravariant velocity vectors. Then, the evolution equations (12) and (13) can be rewritten as
\[\hat{f}_{i}(\mathbf{x},t+\Delta t)=\hat{f}_{i}^{+}(\mathbf{x},t)-\Delta t(c_{i\xi} \frac{\partial f_{i}}{\partial\xi}+c_{i\eta}\frac{\partial f_{i}}{\partial \eta})(\mathbf{x},t+\frac{1}{2}\Delta t), \tag{33}\]
\[\bar{f}_{i}(\mathbf{x},t+\frac{1}{2}\Delta t)=\bar{f}_{i}^{+}(\mathbf{x},t)-\frac{1}{ 2}\Delta t(c_{i\xi}\frac{\partial\bar{f}_{i}^{+}}{\partial\xi}+c_{i\eta}\frac {\partial\bar{f}_{i}^{+}}{\partial\eta})(\mathbf{x},t). \tag{34}\]
It should be noted that the velocity vectors \((c_{ix},c_{iy})\) are constant in the physical domain; however, the transformed velocities \((c_{i\xi},c_{i\eta})\) are not constant in the computational plane and change with position.
For the isosceles trapezoidal cavity, the general relationship between \((\xi,\eta)\) and \((x,y)\) can be given by
\[x=\frac{2}{\tan\theta}\xi\eta+\xi-\frac{1}{\tan\theta}\eta+\frac{1}{\tan\theta },\quad y=\eta, \tag{35}\]
and the derivative can be expressed as,
\[x_{\xi}=\frac{2}{\tan\theta}\eta+1,\quad x_{\eta}=\frac{2}{\tan \theta}\xi-\frac{1}{\tan\theta}.\] \[y_{\xi}=0,\quad y_{\eta}=1. \tag{36}\]
In the simulation, three physical configurations are taken into account. The bottom length and the height of the three trapezoidal cavities are equal to 1, and the angles of the trapezoid are (i) case 1: \(\theta=75^{\circ}\), \(\tan\theta=2+\sqrt{3}\), (ii) case 2: \(\theta=60^{\circ}\), \(\tan\theta=\sqrt{3}\), (iii) case 3: \(\theta=45^{\circ}\), \(\tan\theta=1\). The three isosceles trapezoidal physical domains can be transformed into the square computational domain by Eq. (35), as shown in Fig. 2.
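A minimal sketch of the mapping of Eqs. (35)-(36) and the metric terms of Eqs. (29)-(30) is given below (Python; the grid layout and the choice of a unit square \([0,1]^{2}\) for the computational domain are our own conventions):

```python
import numpy as np

def trapezoid_mesh(N, theta_deg):
    """Map a uniform (xi, eta) grid on [0,1]^2 to the isosceles trapezoid of Fig. 1."""
    t = np.tan(np.radians(theta_deg))
    xi, eta = np.meshgrid(np.linspace(0.0, 1.0, N), np.linspace(0.0, 1.0, N),
                          indexing='ij')
    x = 2.0 / t * xi * eta + xi - eta / t + 1.0 / t            # Eq. (35)
    y = eta
    x_xi, x_eta = 2.0 / t * eta + 1.0, 2.0 / t * xi - 1.0 / t  # Eq. (36)
    y_xi, y_eta = np.zeros_like(xi), np.ones_like(eta)
    J = x_xi * y_eta - x_eta * y_xi                            # Eq. (30)
    xi_x, xi_y = y_eta / J, -x_eta / J                         # Eq. (29)
    eta_x, eta_y = -y_xi / J, x_xi / J
    return x, y, (xi_x, xi_y, eta_x, eta_y, J)

# the transformed velocities then follow as c_xi = c_x*xi_x + c_y*xi_y and
# c_eta = c_x*eta_x + c_y*eta_y, which vary with position as noted above
```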
Figure 2: The physical domain transport to the computational domain.
In any numerical technique, the boundary information plays an important role. Thanks to the coordinate conversion, standard boundary schemes can be applied in the computational domain. In this work, the boundary conditions are treated by the non-equilibrium extrapolation scheme, which preserves the accuracy of the IFDLBM.
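As we understand the non-equilibrium extrapolation idea, the unknown boundary populations are built from the wall equilibrium plus the non-equilibrium part copied from the adjacent interior node; a sketch (Python, reusing the `feq` helper defined above) is:

```python
def noneq_extrapolation(P_b, u_b, f_nb, P_nb, u_nb):
    """Boundary populations at a wall node with prescribed (P_b, u_b),
    given the distributions f_nb and the macroscopic state (P_nb, u_nb) of the
    neighbouring interior node (a sketch of the non-equilibrium extrapolation idea,
    not the authors' exact implementation)."""
    return feq(P_b, u_b) + (f_nb - feq(P_nb, u_nb))
```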
Now, we present the evolution process in Fig. 3 and list the computational procedure as follows:
Step(1): initialize the fluid velocity and pressure, initialize the distribution functions \(f_{i}(\mathbf{x},t)\) and \(\hat{f}_{i}(\mathbf{x},t)\) by \(f_{i}^{eq}(\mathbf{x},t)\), and calculate the transformed velocity \((c_{i\xi},c_{i\eta})\).
Step(2): calculate the relaxation time \(\lambda\) by \(f_{i}^{(1)}(\mathbf{x},t)\).
Step(3): calculate \(\hat{f}_{i}^{+}(\mathbf{x},t)\) by Eq. (37),
\[\hat{f}_{i}^{+}=\frac{2\lambda-\Delta t}{2\lambda+\Delta t}\hat{f}_{i}+\frac{2 \Delta t}{2\lambda+\Delta t}f_{i}^{eq}. \tag{37}\]
Step(4): calculate \(\bar{f}_{i}^{+}(\mathbf{x},t)\) by Eq. (38), and obtain \(\bar{f}_{i}(\mathbf{x},t+\frac{\Delta t}{2})\) through the evolution equation (13). Then calculate the distribution function \(f_{i}(\mathbf{x},t+\frac{\Delta t}{2})\) by Eq. (39), and evaluate the spatial term \(\mathbf{c}_{i}\cdot\nabla f_{i}(\mathbf{x},t+\frac{\Delta t}{2})\) by Eq. (18),
\[\bar{f}_{i}^{+}=\frac{4\lambda-\Delta t}{4\lambda+2\Delta t}\hat{f}_{i}+\frac{ 3\Delta t}{4\lambda+2\Delta t}f_{i}^{eq}, \tag{38}\]
\[f_{i}=\frac{4\lambda}{4\lambda+\Delta t}\bar{f}_{i}+\frac{\Delta t}{4\lambda+ \Delta t}f_{i}^{eq}, \tag{39}\]
Step(5): update the distribution function \(\hat{f}_{i}(\mathbf{x},t+\Delta t)\) by Eq. (12).
Step(6): calculate the macroscopic quantities \(\mathbf{u},P\) of fluid, and compute the boundary populations.
Step(7): calculate the distribution function \(f_{i}(\mathbf{x},t+\Delta t)\) by Eq. (40).
\[f_{i}=\frac{2\lambda}{2\lambda+\Delta t}\hat{f}_{i}+\frac{\Delta t}{2\lambda+ \Delta t}f_{i}^{eq}. \tag{40}\]
Step(8): calculate the global relative error (GRE) of \(\mathbf{u}\) and \(P\) by Eq. (41); if the GRE is larger than \(10^{-9}\), advance the time step and go back to Step(2).
\[E(\mathbf{u})=\frac{\sqrt{\sum_{ij}|u_{i,j}(t_{n})-u_{i,j}(t_{n-1})|^{2}}}{\sqrt {\sum_{ij}|u_{i,j}(t_{n})|^{2}}},\quad E(P)=\frac{\sqrt{\sum_{ij}(P_{i,j}(t_{ n})-P_{i,j}(t_{n-1}))^{2}}}{\sqrt{\sum_{ij}P_{i,j}^{2}(t_{n})}}. \tag{41}\]
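The stopping criterion of Step (8) can be implemented as in the short sketch below (Python; the outer-loop comments only paraphrase Steps (2)-(7) and do not reproduce the authors' code):

```python
import numpy as np

def global_relative_error(q_new, q_old):
    """Global relative error of Eq. (41) for a field stored as an array over the grid."""
    return np.sqrt(np.sum((q_new - q_old) ** 2)) / np.sqrt(np.sum(q_new ** 2))

# outer loop (schematic):
#   while max(global_relative_error(u, u_prev), global_relative_error(P, P_prev)) > 1e-9:
#       store u_prev, P_prev; perform Steps (2)-(7); recompute u, P
```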
## 5 Code validation and grid independence
### Lid-driven flow of power-law fluid in the square enclosure
To verify the accuracy of the IFDLBM, we simulate the lid-driven flow of a power-law fluid in a square enclosure. In the simulation, we take a \(128\times 128\) grid for \(Re=100\), and \(c=1\). The Courant-Friedrichs-Lewy (CFL) number is set to \(0.5\). The lid velocity is set to \(0.1\) in the initial and boundary conditions. Besides, the power-law index \(n\) is taken as \(0.5,0.75,1.5\). The numerical results are shown in Fig. 4. It is obvious that the results of the IFDLBM are in good agreement with those of the LBM [43], which indicates that the local computation scheme for the viscosity is effective.
Figure 3: The evolution process of FDLBM
### Lid-driven flow in the trapezoidal enclosure
In this subsection, we adopt the coordinate conversion method to simulate the lid-driven flow in the trapezoidal enclosure. In order to verify the correctness of the IFDLBM, the trapezoid size is consistent with that in Ref. [16]. Different from Ref. [16], we take a \(128\times 128\) grid for \(Re=100\) and a \(256\times 256\) grid for \(Re=500\), with \(u_{0}=0.1\), \(c=1.0\) and \(CFL=0.5\). The power-law index \(n\) is taken as \(1\), because only the Newtonian fluid was studied in Ref. [16]. As shown in Figs. 8 and 9, the numerical results of the IFDLBM agree well with previous work [16; 44]. This indicates that the coordinate conversion method is valid and that the IFDLBM can accurately simulate the flow in the trapezoidal enclosure.
In order to verify the numerical stability of the IFDLBM, some additional simulations are carried out. Considering the case of \(\theta=45^{o}\) in Fig. 2 and \(Re=1200\), the grid is fixed as \(128\times 128\) for the IFDLBM and \(128\times 384\) for the LBM [16]. The centerline velocity along the \(y\)-axis is shown in Fig. 5. The results of the two numerical methods are in good agreement. However, as the \(Re\) number increases, the LBM diverges when \(Re>9000\), whereas the IFDLBM still works well even at \(Re=50000\). The simulation results for \(Re=50000\) are shown in Figs. 6 and 7. The change of the velocity \(u\) at point \((1,0.5)\) is displayed in Fig. 6, and the streamlines of the fluid are shown in Fig. 7. It can be observed that the flow transforms into turbulence. There are \(20\) vortices in the trapezoidal cavity, many more than in the case of \(Re=500\), and the top vortex is severely squeezed by the main vortex. These results indicate that the IFDLBM has better numerical stability.
Figure 4: Comparison between present results and those reported in some previous works for Power-law fluid (Re = 100); (a) the velocity component u along the vertical centreline of the cavity, (b) the component velocity v along the horizontal centreline of the cavity.
Figure 5: Vertical component of velocity along \(y\)-axis for \(Re=1200\) and \(\theta=45^{o}\).
Figure 8: Comparison between present results and those reported in some previous works for Power-law fluid (Re = 100); (a) the component velocity v along the horizontal centreline of the cavity, (b)the velocity component u along the vertical centreline of the cavity.
Figure 6: The velocity \(u\) at point \((1,0.5)\) for \(Re=50000\) and \(\theta=45^{o}\).
Figure 7: Streamline plots at \(Re=50000\) and \(\theta=45^{o}\).
### Grid independence test
It is important to carry out a grid-independence analysis to confirm that the numerical results do not depend on the grid. To assess the effect of the grid size, we adopt four grid systems for \(Re=100,500,1000\), i.e. \(64\times 64\), \(128\times 128\), \(256\times 256\), and \(512\times 512\). The effects of the different grids are displayed in Fig. 10. To select a suitable grid size, we magnify the local velocity \(u\) in Fig. 11. It can be seen that the numerical results converge rapidly toward those of the \(512\times 512\) grid as the number of grid nodes increases, and the results with \(256\times 256\) are the closest to those with \(512\times 512\). Considering both efficiency and accuracy, the \(256\times 256\) grid is adequate to simulate the problem.
## 6 Results and discussion
In this section, we analyze the relationship between the behavior of the power-law fluid and the \(Re\) number, the angle \(\theta\), and the power-law index \(n\). In order to study the rheological behavior effectively, two cases are considered: a low \(Re\) number condition (\(Re=100\)) and a high \(Re\) number condition (\(Re\geq 500\)). To explore the behavior of the trapezoidal cavity (TC) flow, the power-law index \(n\) is changed from 0.5 to 1.5 and the angle \(\theta\) is varied from \(45^{o}\) to \(75^{o}\) for the two cases.
### Low \(Re\) number
(i) Effect of power-law index \(n\) on the development of flow for low \(Re\) number
It is expected that the power-law rheology will affect the flow field, because the power-law index \(n\) determines the viscosity of the fluid. In this simulation, we fix \(\theta=75^{o}\) and \(Re=100\). The power-law
Figure 10: Vertical component of velocity along \(y\)-axis for different grid sizes; (a) Re=100, (b)Re=500, (c)Re=1000.
Figure 9: Comparison between present results and those reported in some previous works for Power-law fluid (Re = 500); (a) the component velocity v along the horizontal centreline of the cavity, (b)the velocity component u along the vertical centreline of the cavity.
Figure 11: Vertical component of velocity along local \(y\)-axis for different grid sizes; (a) Re=100, (b)Re=500, (c)Re=1000.
Figure 12: Streamline plots at \(Re=100\) and \(\theta=75^{o}\); (a) n=1.5, (b) n=1.0, (c) n=0.75, (d) n=0.5.
index \(n\) is varied from \(0.5\) to \(1.5\) to investigate the behavior of the power-law fluid. Some numerical results are presented in Fig. 12.
According to the simulation results, the TC flow eventually reaches a steady state when \(Re=100\). However, the structure of the vortices differs for various \(n\). A first-order vortex appears in the central region of the cavity. At the same time, two secondary vortices appear in the lower left and right corners of the cavity, and the secondary vortex in the lower right corner is obviously larger than that in the lower left corner. As the power-law index \(n\) increases, the center of the first-order vortex gradually moves closer to the center of the cavity. In addition, the extent of the secondary vortices increases gradually as the power-law index \(n\) increases.
The velocity along the centerlines parallel to the \(x\)-axis and \(y\)-axis is presented in Fig. 13. As can be seen, the velocity changes more dramatically as \(n\) increases, and both the maximum and minimum values of the velocity on the centerline increase.
(ii) Effect of angle \(\theta\) on the development of flow for low \(Re\) number
We also study the rheological behavior of the power-law fluid with \(\theta=60^{o}\) and \(\theta=45^{o}\). As \(\theta\) decreases, the area of the cavity increases correspondingly. We show some numerical results for \(\theta=60^{o}\) in Fig. 14. It can be seen that the secondary vortices in the lower left and right corners of the cavity are smaller than those at \(\theta=75^{o}\). As \(n\) decreases, the secondary vortices fade away and the first-order vortex gradually moves closer to the upper right corner of the trapezoidal cavity. When \(\theta=45^{o}\), there are no secondary vortices in the lower left and right corners of the cavity, and the first-order vortex gradually moves closer to the center of the trapezoidal cavity as the power-law index \(n\) increases.
Figs. 15 and 17 show the velocity along the centerlines parallel to the \(x\)-axis and \(y\)-axis. It is clear that the maximum and minimum values of the velocity on the centerline increase as the power-law index \(n\) increases, and the velocity profiles are similar to the results with \(\theta=75^{o}\).
Figure 14: Streamline plots at \(Re=100\) and \(\theta=60^{o}\); (a) n=1.5, (b) n=1.0, (c) n=0.75, (d) n=0.5.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline n & \multicolumn{2}{c}{First-primary eddy} & \multicolumn{2}{c}{Second-primary eddy (left)} & \multicolumn{2}{c}{Second-primary eddy (right)} \\ \cline{2-7} & x & y & x & y & x & y \\ \hline n=1.5 & 1.2264 & 0.6465 & 0.5852 & 0.0120 & 1.5709 & 0.0119 \\ n=1.0 & 1.3441 & 0.6684 & 0.5831 & 0.0083 & 1.5725 & 0.0095 \\ n=0.75 & 1.4292 & 0.6872 & -- & -- & -- & -- \\ n=0.5 & 1.5544 & 0.7150 & -- & -- & -- & -- \\ \hline \hline \end{tabular}
\end{table}
Table 2: The location of the eddies at different \(n\) for the isosceles trapezoidal cavity flow (\(\theta=60^{o}\)).
Figure 15: Vertical component of velocity for different power-law index \(n\) with \(Re=100\) and \(\theta=60^{o}\); (a) velocity \(v\) through \(y/H=0.5\) along \(x\)-axis, (b)velocity \(u\) through \(x/L=0.5\) along \(y\)-axis.
The locations of the vortex centers are summarized in Tables 2 and 3 and in Fig. 18. As the angle decreases, the vortex center moves toward the right side of the trapezoidal cavity. This behavior is consistent with the above results.
### High Re number
(i) Effect of power-law index \(n\) on the development of flow for high \(Re\) number
The discussion in the previous subsection focused on the low \(Re\) number case, where the trapezoidal cavity flows are in a steady state at \(Re=100\). As is well known, the flow state changes as the \(Re\) number increases. To study the influence of the \(Re\) number on the behavior of the power-law fluid, we consider the three trapezoidal cavities with different \(Re\) numbers and power-law indices \(n\).
First of all, we consider \(\theta=60^{o}\) and \(n=1.5\), and the computational grid is \(256\times 256\). Typical results are reported in Fig. 19 for \(Re\) numbers ranging from \(500\) to \(3000\). As can be seen, the TC flow eventually reaches a steady state for all \(Re\) numbers in \([500,1000,2000,3000]\). The vortex structure in the cavity changes greatly as the \(Re\) number changes. With the increase of the \(Re\) number, more and more vortices appear in the cavity, and the center of the first-order large vortex moves farther away from the centerline of the cavity, toward the upper right of the cavity.
Figure 16: Streamline plots at \(Re=100\) and \(\theta=45^{o}\); (a) n=1.5, (b) n=1.0, (c) n=0.75, (d) n=0.5.
Figure 17: Vertical component of velocity for different power-law index \(n\) with \(Re=100\) and \(\theta=45^{o}\); (a) velocity \(v\) through \(y/H=0.5\) along \(x\)-axis, (b)velocity \(u\) through \(x/L=0.5\) along \(y\)-axis.
A large vortex and two small corner vortices appear in the cavity when \(Re\) reaches \(1000\). The vortex in the lower right corner is obviously larger than that in the lower left corner, and the streamline pattern in the trapezoidal cavity is similar to that in the square cavity. When the \(Re\) number reaches \(2000\), the flow in the cavity is quite different from that in the square cavity. Four vortices appear in the trapezoidal cavity, located at the top, middle and bottom of the cavity. The corner vortex that appears at the lower left corner gradually spreads to the middle layer of the cavity as \(Re\) increases to \(2000\), and its range and intensity increase significantly. The first-order vortex moves to the upper right of the cavity due to the squeezing of the vortex at the lower left corner, and the extent of the first-order vortex decreases significantly when \(Re=2000\). As \(Re\) increases to \(3000\), the range and intensity of the second-order vortex continue to increase, squeezing the upper first-order vortex and reducing its range and intensity, and a new third-order vortex appears at the lower right corner of the trapezoidal cavity. With its appearance, the range of the third-order vortex in the lower left corner decreases. All these phenomena indicate that the flow behavior in the cavity becomes more and more complicated with the increase of \(Re\).
We also present the centerline velocity results in Fig. 20. It can be seen that the velocity profile changes significantly when \(Re=2000\) and \(3000\). This is principally because the vortex shapes in the trapezoidal cavity change significantly with the increase of \(Re\).
In addition, as the \(Re\) number further increases, the flow in the cavity becomes periodic. For \(Re=4000\) and \(5000\), we select a point in the cavity with coordinates \([1.07735,0.5]\), located at the center of the cavity, and track its velocity \((u,v)\). Figs. 21(a) and 21(b) show the phase diagrams of this point. As shown in Figs. 21(c), 21(e), 21(d) and 21(f), the velocity changes periodically with time. We also track the kinetic energy of the flow, which is defined as:
\[E(t)=\frac{1}{2}\int_{\Omega}\|\mathbf{u}(\mathbf{x},t)\|^{2}\,d\mathbf{x}, \tag{42}\]
where \(\Omega\) represents the area of the trapezoidal cavity. Then, by spectral analysis, we can obtain the principal frequency of the periodic flow. According to Fig. 21, it can be concluded that when \(Re=4000\) and \(5000\), the flow is in a periodic state.
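The spectral analysis used to extract the principal frequency can be sketched as follows (Python; the sampling interval and the removal of the mean before the FFT are our own implementation choices):

```python
import numpy as np

def principal_frequency(E, dt_sample):
    """Dominant frequency of the kinetic-energy signal E(t) of Eq. (42),
    sampled every dt_sample time units."""
    E = np.asarray(E, dtype=float) - np.mean(E)   # remove the mean component
    power = np.abs(np.fft.rfft(E)) ** 2           # one-sided power spectrum
    freqs = np.fft.rfftfreq(E.size, d=dt_sample)
    return freqs[1:][np.argmax(power[1:])]        # skip the zero-frequency bin
```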
\begin{table}
\begin{tabular}{c c c} \hline \hline n & \multicolumn{2}{c}{First-primary eddy} \\ \cline{2-3} & x & y \\ \hline n=1.5 & 1.7049 & 0.6354 \\ n=1.0 & 1.8698 & 0.6651 \\ n=0.75 & 1.9983 & 0.6925 \\ n=0.5 & 2.2039 & 0.7372 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The location of eddies at different \(n\) for isosceles trapezoidal cavity flow (\(\theta=45^{o}\)).
Figure 18: Variation of the central location of the first-order vortex with \(\theta\) and \(n\).
Figure 19: Streamline plots at \(n=1.5\) and \(\theta=60^{o}\); (a) Re=500, (b) Re=1000, (c) Re=2000, (d) Re=3000.
Figure 20: Vertical component of velocity for \(\theta=60^{o}\) and \(n=1.5\); (a) velocity v through y/H = 0.5 along x-axis, (b)velocity u through x/L = 0.5 along y-axis.
Figure 21: The information of TC flow with \(n=1.5\) and \(\theta=60^{o}\), the first column represents \(Re=4000\) and the second column represents \(Re=5000\); (a)(b) Phase-space trajectories of velocity, (c)(d) The evolution of velocity \(u\), (e)(f) The evolution of velocity \(v\), (g)(h) The Fourier power spectrum of kinetic energy.
Figure 23: Vertical component of velocity for \(\theta=60^{o}\) and \(n=1.0\); (a) velocity v through y/H = 0.5 along x-axis, (b)velocity u through x/L = 0.5 along y-axis.
Figure 22: Streamline plots at \(n=1.0\) and \(\theta=60^{o}\); (a) Re=500, (b) Re=1000, (c) Re=2000.
Now, we consider the situation of \(n=1.0\) and \(\theta=60^{o}\). Fig. 22 presents the streamline plots with \(Re\) changing from \(500\) to \(2000\). It is clear that the TC flow reaches a steady state when the \(Re\) number ranges from \(500\) to \(2000\). When \(Re\) increases to \(1000\), the range of the secondary vortices in the lower left and right corners begins to increase. Compared with \(n=1.5\), the range of the second-order vortex grows larger and the squeezing of the first-order vortex is more obvious. As the \(Re\) number increases to \(2000\), the shape of the vortices in the trapezoidal cavity changes obviously, and the vortices are distributed in three layers, similar to the case of \(n=1.5\). However, the difference is that a third-order vortex appears in the lower right corner rather than the lower left. Compared with the case \(n=1.5\), due to the squeezing of the second-order vortex in the lower left corner, the range of the first-order vortex decreases more obviously, and its center is closer to the upper right corner of the cavity. Meanwhile, the range of the second-order vortex grows bigger, the vortex in the upper left corner becomes more flattened after being squeezed, and the center of the third-order vortex in the lower right corner is closer to the right side of the trapezoidal cavity. This is consistent with the change of the vortex shapes in Fig. 22.
The centerline velocity results for different \(Re\) numbers are shown in Fig. 23. As can be seen, the velocity profile changes drastically when \(Re=2000\). This is because the TC flow becomes more complex and the vortex morphology changes as \(Re\) increases.
When the \(Re\) number increases to \(2500\), the velocity \((u,v)\) of the point \([1.07735,0.5]\) at the center of the cavity is tracked. Figs. 24(a), 24(d), 24(g) and 24(j) show the phase diagram, the evolution of the velocity components \((u,v)\), and the Fourier power spectrum of the kinetic energy, respectively. They indicate that the TC flow is periodic when \(Re=2500\).
Furthermore, the results for the TC flow with \(Re=3000\) are presented in Figs. 24(b), 24(e), 24(h) and 24(k). The flow is close to a periodic state, but differs somewhat from it: the energy spectrum has more than one principal frequency and the phase diagram is not a simple closed ring. We define this state as quasi-periodic. Then, the results for the TC flow with \(Re=4000\) are shown in Figs. 24(c), 24(f), 24(i) and 24(l). It is clear that the phase diagrams and velocity evolution become more complex and irregular, and the energy spectrum exhibits multiple local peaks, so the TC flow is turbulent when \(Re=4000\). According to these flow phenomena, the steady flow at low Reynolds numbers develops into periodic flow and then turns into turbulent flow as the \(Re\) number increases.
Next, we discuss the situation of \(\theta=60^{\circ}\) and \(n=0.5\). The streamline plots are presented in Fig. 25. When \(Re=500\) and \(Re=750\), the TC flow is steady. There is a secondary vortex in the upper left corner of the trapezoidal cavity when \(Re=500\), while this vortex is absent in the other two cases (\(n=1.5\) and \(n=1.0\)). In addition, as the \(Re\) number increases to \(750\), the two second-order vortices on the left fuse into a larger vortex and squeeze the first-order vortex. The centerline velocity results are shown in Fig. 26. The velocity trend with \(Re=500\) is similar to that with \(Re=750\), because the structure of the first-order vortex does not change; however, when \(Re=750\), the peak velocity is slightly larger.
As \(Re\) continues to increase, the TC flow exhibits a periodic state. The relevant results with \(Re=1000\) and \(Re=2000\) are displayed in Fig. 27. In conclusion, as the power-law index \(n\) decreases, the critical Reynolds number of TC flow from steady state to periodic state also decreases.
In addition, for \(Re=500\) and \(\theta=60^{\circ}\), we also compare the centerline velocity for different power-law indices (Fig. 28). It can be seen that as \(n\) decreases, the velocity changes more dramatically. Generally speaking, several reasons account for this phenomenon. When \(n=1.5\), the fluid is shear-thickening: the higher the shear rate, the more viscous the fluid. This characteristic hinders the flow and flattens the velocity variation. When \(n=0.5\), the fluid is shear-thinning: the higher the shear rate, the lower the viscosity. Hence the flow is promoted, and the change of velocity is more drastic.
(ii) Effect of the angle \(\theta\) on the development of flow for high \(Re\) number
In this section, we fix the power-law index \(n=1.5\) and adjust \(\theta\) and \(Re\) to observe the development of the TC flow. First, we consider the case of \(\theta=75^{\circ}\). Some streamline plots are presented in Fig. 29. As \(Re\) varies between \(1000\) and \(5000\), the TC flow remains steady. When \(Re=1000\), the first-order vortex occupies the central position, and there are two second-order vortices in the lower left and right corners, respectively. With the increase of the \(Re\) number, the range of the two secondary vortices in the lower left and right corners gradually increases. In addition, when the \(Re\) number increases to \(4000\), a third-order vortex appears in the upper left corner, and this vortex also becomes larger with the increase of \(Re\). These phenomena are very similar to those of the lid-driven flow in a square cavity, but slightly different from those with \(\theta=60^{\circ}\). The main reason is that with the increase of \(\theta\), the length of the top lid decreases, the physical area is closer to a square, and the flow is more similar to
Figure 24: The information of TC flow with \(n=1.0\) and \(\theta=60^{o}\), the first column represents \(Re=2500\), the second column represents \(Re=3000\) and the third column represents \(Re=4000\); (a)(b)(c) Phase-space trajectories of velocity, (d)(e)(f) The evolution of velocity \(u\), (g)(h)(i) The evolution of velocity \(v\), (j)(k)(l) The Fourier power spectrum of kinetic energy.
Figure 26: Vertical component of velocity for \(\theta=60^{\circ}\) and \(n=0.5\); (a) velocity v through y/H = 0.5 along x-axis, (b)velocity u through x/L = 0.5 along y-axis.
Figure 25: Streamline plots at \(n=0.5\) and \(\theta=60^{\circ}\); (a) Re=500, (b) Re=750.
Figure 27: The information of TC flow with \(n=0.5\) and \(\theta=60^{\circ}\), the first column represents \(Re=1000\) and the second column represents \(Re=2000\); (a)(b) Phase-space trajectories of velocity, (c)(d) The evolution of velocity \(u\), (e)(f) The evolution of velocity \(v\), (g)(h) The Fourier power spectrum of kinetic energy.
the flow of the square cavity. These results show that the TC flow tends to flatten as \(\theta\) increases.
Some results with \(Re=6000\) and \(Re=7000\) are also shown in Fig. 30. It can be observed that the TC flow is periodic. Compared with the cases of \(\theta=60^{o}\), the closed loop in the phase diagram is simpler. This is mainly because the change of the physical area makes the flow more gentle, so the vortex structure in the periodic flow is simpler and the shape of the closed ring naturally becomes simpler.
Then, we study the development of the TC flow with \(\theta=45^{o}\) and \(n=1.5\). The streamline plots with \(Re\) ranging from 1000 to 5000 are displayed in Fig. 31. As \(\theta\) decreases, the shapes of the vortices in the trapezoidal cavity become more complex.
When \(Re=1000\), a new vortex has split away from the first-order vortex and is located at the upper left corner of the cavity. The two secondary vortices at the bottom are partially fused, and the secondary vortex at the lower left corner is obviously larger than that at the lower right corner.
Figure 28: Vertical component of velocity for \(\theta=60^{o}\) and \(Re=500\); (a) velocity v through y/H = 0.5 along x-axis, (b) velocity u through x/L = 0.5 along y-axis.
Figure 29: Streamline plots at \(n=1.5\) and \(\theta=75^{o}\); (a) Re=1000, (b) Re=2000, (c) Re=4000, (d) Re=5000.
Figure 30: The information of TC flow with \(n=1.5\) and \(\theta=75^{o}\), the first column represent \(Re=6000\) and the second column represent \(Re=7000\); (a)(b) Phase-space trajectories of velocity, (c)(d) The evolution of velocity \(u\), (e)(f) The evolution of velocity \(v\), (g)(h) The Fourier power spectrum of kinetic energy.
As \(Re\) increases to 2000, the range of the second-order vortices in the upper left and lower left corners gradually increases. Squeezed by these two vortices, the range of the first-order vortex becomes smaller and its center moves to the upper right corner. At the same time, the two second-order vortices at the bottom separate completely, and the second-order vortex at the lower right corner moves towards the bottom of the cavity. On the whole, however, the number of vortices has not changed and remains four.
As \(Re\) increases to 4000, the number of vortices increases to 6, and the vortex structure is divided into three layers. The first layer consists of the first-order vortex, a second-order vortex, and a third-order vortex separated from the first-order vortex. The second layer is made up of the second-order vortex on the left and a new third-order vortex separated from it. The third layer is the secondary vortex in the lower right corner. Compared with the vortices at \(Re=2000\), the extent of the secondary vortex in the third layer increases.
When \(Re=5000\), the shapes of the vortices are similar to those at \(Re=4000\). However, the range of the first-order vortex is squeezed smaller, and the two vortices of the second layer are squeezed to the upper left of the cavity by the vortex in the third layer. In addition, a fourth-order vortex appears at the bottom of the cavity and squeezes the third-order vortex.
According to the above results, we conclude that the TC flow becomes more intense as \(\theta\) decreases. The main reason is that when \(\theta\) decreases, the length of the top lid increases and the driving distance lengthens, which makes the flow more complex and generates more small vortices. The mutual squeezing of the vortices also makes their shapes more complicated.
As we continue to increase the \(Re\) number, the TC flow changes from a steady state to a periodic state. The relevant results are shown in Fig. 32. When \(Re=6000\), the phase diagram and velocity curves present an approximately periodic state, but the velocity trajectory in the phase diagram does not form a simple closed ring. Meanwhile, the energy spectrum shows multiple extreme points. Therefore, the TC flow is quasi-periodic when \(Re=6000\). When \(Re\) increases to 7000, the TC flow becomes a standard periodic flow.
Combining the above results, we find that when \(\theta=75^{o}\) and \(45^{o}\), the TC flow reaches a periodic state at \(Re=6000\), but when \(\theta=60^{o}\), the TC flow reaches a periodic state at \(Re=4000\). Therefore, the relationship between \(\theta\) and the critical \(Re\) number from the steady to the periodic state is not monotonic.
We also present the centerline velocity for different \(\theta\) in Fig. 33. The velocity curve with \(\theta=75^{o}\) is similar to that with \(\theta=60^{o}\), but different from that with \(\theta=45^{o}\). This indicates that the smaller \(\theta\) is, the greater the impact on the velocity: the smaller the angle, the greater the change of the vortex
Figure 31: Streamline plots at \(n=1.5\) and \(\theta=45^{o}\); (a) Re=1000, (b) Re=2000, (c) Re=4000, (d) Re=5000.
Figure 32: The information of ITC flow with \(n=1.5\) and \(\theta=45^{o}\), the first column represent \(Re=6000\) and the second column represents \(Re=7000\); (a)(b) Phase-space trajectories of velocity, (c)(d) The evolution of velocity \(u\), (e)(f) The evolution of velocity \(v\), (g)(h) The Fourier power spectrum of kinetic energy.
shape in the trapezoidal cavity, and hence the greater the influence on the velocity.
Generally speaking, the flow state of the lid-driven flow can be divided into three modes: steady flow, periodic flow, and turbulent flow. According to the above simulation results, we summarize the flow states for different power-law indices \(n\) and different angles \(\theta\) in Fig. 34. As shown in the figure, with the decrease of \(n\), the critical \(Re\) number from the steady to the periodic state decreases, while the relationship between \(\theta\) and the critical \(Re\) number is not monotonic.
## 7 Conclusions
The main work of this paper is to develop the IFDLBM and apply it to simulate power-law fluid flow in a two-dimensional trapezoidal cavity. Due to the complex boundary of the trapezoidal cavity, we use a coordinate transformation to map the body-fitted grid of the physical region onto a uniform grid in the computational region. The effects of the \(Re\) number, the power-law index \(n\), and the angle \(\theta\) on the TC flow are studied.
It is found that when \(Re\) is fixed at 100, the cavity flow becomes more complicated with the increase of the power-law index \(n\). As \(\theta\) decreases, the flow becomes gentler and the number of vortices decreases. When \(\theta\) and \(n\) are fixed, with the increase of \(Re\), the development of the flow becomes more complex, the number and strength of the vortices increase, and the TC flow gradually changes from steady flow to periodic flow and then to turbulent flow. In addition, the critical \(Re\) number from the steady to the periodic state decreases with the decrease of \(n\). Finally, we study the effect of \(\theta\) on the TC flow at high Reynolds numbers. It is found that the smaller \(\theta\) is, the more complicated the flow is, which is contrary to the conclusion at low \(Re\) numbers.
## Acknowledgments
This work was financially supported by the National Natural Science Foundation of China (Grants No. 12072127 and No. 51836003) and the Fundamental Research Funds for the Central Universities, HUST (No. 2021JYCXJJ010). The computation was completed on the HPC Platform of Huazhong University of Science and Technology.
## Appendix A The Chapman Enskog analysis of the IFDLBM
Inspired by the CE analysis of the discrete unified gas kinetic scheme in Ref. [45], we recover the NSE from the DVBE (11). The CE analysis is used to recover the incompressible NSE. The moments of the equilibrium distribution function \(f_{i}^{eq}\) are designed as
\[\sum_{i}f_{i}^{eq}=\rho_{0}\quad\sum_{i}\mathbf{c}_{i}f_{i}^{eq}=\mathbf{u}\quad\sum_{ i}\mathbf{c}_{i}\mathbf{c}_{i}f_{i}^{eq}=(P+c_{s}^{2}\rho_{0})\mathbf{I}+\mathbf{uu}\quad\sum_{ i}\mathbf{c}_{i}\mathbf{c}_{i}\mathbf{c}_{i}f_{i}^{eq}=c_{s}^{2}\Delta\cdot\mathbf{u}, \tag{12}\]
Figure 33: Vertical component of velocity for \(n=1.5\) and \(Re=500\) through x/L = 0.5 along y-axis.
Figure 34: The distributions of flow state with different \(\theta,n,Re\); (a) \(\theta=75^{o}\), (c) \(\theta=45^{o}\), (e) \(\theta=60^{o}\), (b) \(n=1.5\), (d) \(n=1.0\), (f) \(n=0.5\).
From the multi-scale technique, we can get
\[f_{i}=f_{i}^{(0)}+\epsilon f_{i}^{(1)}+\epsilon^{2}f_{i}^{(2)},\quad \partial_{t}=\epsilon\partial_{t_{1}}+\epsilon^{2}\partial_{t_{2}},\quad\nabla= \epsilon\nabla_{1}.\] (A.2)
If we substitute Eq. (A.2) to discrete Boltzmann equation, we can get
\[O(\epsilon^{0}):-\frac{1}{\lambda}(f_{i}^{(0)}-f_{i}^{eq})=0 \Leftrightarrow f_{i}^{(0)}=f_{i}^{eq},\] (A.3a) \[O(\epsilon^{1}):\partial_{t_{1}}f_{i}^{(0)}+\mathbf{c}_{i} \cdot\nabla_{1}f_{i}^{(0)}=-\frac{1}{\lambda}f_{i}^{(1)},\] (A.3b) \[O(\epsilon^{2}):\partial_{t_{2}}f_{i}^{(0)}+\partial_{t_{1}}f_{i}^{(1)}+\mathbf{c}_{i}\cdot\nabla_{1}f_{i}^{(1)}=-\frac{1}{\lambda}f_{i}^{(2)}.\] (A.3c)
Summing Eq. (A.3b) yields
\[\nabla_{1}\cdot\mathbf{u}=0.\] (A.4)
Multiplying Eqs. (A.3b) and (A.3c) by \(\mathbf{c}_{i}\), summing over \(i\), and substituting the moments (A.1), one can obtain
\[\partial_{t_{1}}\mathbf{u}+\nabla_{1}\cdot[(P+c_{s}^{2}\rho_{0})\mathbf{ I}+\mathbf{uu}]=0,\] (A.5)
\[\partial_{t_{2}}\mathbf{u}+\nabla_{1}\cdot\sum_{i}\mathbf{c}_{i}\mathbf{c}_{i}f_{i}^{(1)} =0,\] (A.6)
Multiplying Eq. (A.3b) by \(\mathbf{c}_{i}\mathbf{c}_{i}\) and summing over \(i\), one can deduce
\[\sum_{i}\mathbf{c}_{i}\mathbf{c}_{i}f_{i}^{(1)}=-\lambda\left\{\partial_{t_{1}}[(P+c_ {s}^{2}\rho_{0})\mathbf{I}+\mathbf{uu}]+\nabla_{1}\cdot c_{s}^{2}\Delta\cdot\mathbf{u} \right\},\] (A.7)
where under the low-Mach-number assumption, the term \(\partial_{t_{1}}[(P+c_{s}^{2}\rho_{0})\mathbf{I}+\mathbf{uu}]\) can be ignored, then Eq. (A.7) can be simplified as
\[\sum_{i}\mathbf{c}_{i}\mathbf{c}_{i}f_{i}^{(1)} = -\lambda c_{s}^{2}[(\nabla_{1}\cdot\mathbf{u})\mathbf{I}+(\nabla_{1}\bm {u}+\nabla_{1}\mathbf{u}^{T})],\] (A.8)
if Eq. (A.4) is substituted into Eq. (A.8), we can obtain
\[\sum_{i}\mathbf{c}_{i}\mathbf{c}_{i}f_{i}^{(1)} = -\lambda c_{s}^{2}(\nabla_{1}\mathbf{u}+\nabla_{1}\mathbf{u}^{T}).\] (A.9)
with the help of Eq. (A.9), Eq. (A.6) can be rewritten as
\[\partial_{t_{2}}\mathbf{u}=\nabla_{1}\cdot[\lambda c_{s}^{2}(\nabla_{1}\mathbf{u}+ \nabla_{1}\mathbf{u}^{T})].\] (A.10)
Through combining the results at \(\varepsilon\) and \(\varepsilon^{2}\) scales, i.e., Eqs. (A.4), (A.5) and (A.10), the NSE (1) and (2) can be recovered correctly.
|
2302.04825 | Boundary conditions in hydrodynamic simulations of isolated galaxies and
their impact on the gas-loss processes | Three-dimensional hydrodynamic simulations are commonly used to study the
evolution of the gaseous content in isolated galaxies, besides its connection
with galactic star formation histories. Stellar winds, supernova blasts, and
black hole feedback are mechanisms usually invoked to drive galactic outflows
and decrease the initial galactic gas reservoir. However, any simulation
imposes the need of choosing the limits of the simulated volume, which depends,
for instance, on the size of the galaxy and the required numerical resolution,
besides the available computational capability to perform it. In this work, we
discuss the effects of boundary conditions on the evolution of the gas fraction
in a small-sized galaxy (tidal radius of about 1 kpc), like classical
spheroidal galaxies in the Local Group. We found that open boundaries with
sizes smaller than approximately 10 times the characteristic radius of the
galactic dark-matter halo become unappropriated for this kind of simulation
after about 0.6 Gyr of evolution, since they act as an infinity reservoir of
gas due to dark-matter gravity. We also tested two different boundary
conditions that avoid gas accretion from numerical frontiers: closed and
selective boundary conditions. Our results indicate that the later condition
(that uses a velocity threshold criterion to open or close frontiers) is
preferable since minimizes the number of reversed shocks due to closed
boundaries. Although the strategy of putting computational frontiers as far as
possible from the galaxy itself is always desirable, simulations with selective
boundary condition can lead to similar results at lower computational costs. | Anderson Caproni, Gustavo A. Lanfranchi, AmΓ’ncio C. S. FriaΓ§a, Jennifer F. Soares | 2023-02-09T18:30:58Z | http://arxiv.org/abs/2302.04825v1 | Boundary conditions in hydrodynamic simulations of isolated galaxies and their impact on the gas-loss processes
###### Abstract
Three-dimensional hydrodynamic simulations are commonly used to study the evolution of the gaseous content in isolated galaxies, besides its connection with galactic star formation histories. Stellar winds, supernova blasts, and black hole feedback are mechanisms usually invoked to drive galactic outflows and decrease the initial galactic gas reservoir. However, any simulation imposes the need of choosing the limits of the simulated volume, which depends, for instance, on the size of the galaxy and the required numerical resolution, besides the available computational capability to perform it. In this work, we discuss the effects of boundary conditions on the evolution of the gas fraction in a small-sized galaxy (tidal radius of \(\sim\)1 kpc), like classical spheroidal galaxies in the Local Group. We found that open boundaries with sizes smaller than approximately 10 times the characteristic radius of the galactic dark-matter halo become unappropriated for this kind of simulation after \(\sim\)0.6 Gyr of evolution, since they act as an infinity reservoir of gas due to dark-matter gravity. We also tested two different boundary conditions that avoid gas accretion from numerical frontiers: closed and selective boundary conditions. Our results indicate that the later condition (that uses a velocity threshold criterion to open or close frontiers) is preferable since minimizes the number of reversed shocks due to closed boundaries. Although the strategy of putting computational frontiers as far as possible from the galaxy itself is always desirable, simulations with selective boundary condition can lead to similar results at lower computational costs.
galaxies: dwarf -- galaxies: evolution -- hydrodynamics -- methods: numerical
## 1 Introduction
Differential equations are widely used in different astrophysical contexts, from physical phenomena involving our solar system to the large-scale structures in the universe, such as clusters and superclusters of galaxies. In particular, fluid-dynamic problems are formulated through differential equations that represent the conservation of mass, momentum, and energy of a fluid with a certain equation of state (e.g., Landau & Lifshitz, 1987).
To find a particular solution of any differential equation, it is necessary to provide an initial condition and/or boundary conditions (BCs; e.g., Arfken & Weber, 2005). However, because of the high complexity involved in hydrodynamic (HD) problems in astrophysical systems, analytical solutions for the temporal behavior of a fluid are rare, leading to the application of numerical methods to solve the HD equations (e.g., Toro, 2009). There are several numerical codes dedicated to dealing with astrophysical gas/particle dynamics (e.g., Stone & Norman, 1992; Fryxell et al., 2000; Raga et al., 2000; Teyssier, 2002; Gammie et al., 2003; Anninos et al., 2005; Springel, 2005; Mignone et al., 2007; Bryan et al., 2014), adopting different strategies to numerically evolve
gas/particle flows. Distinct BCs are usually available in those codes, which are chosen according to the specific physical situation to be studied.
In this work, we focus on HD simulations planned to study the time evolution of the gas content inside an isolated galaxy under the influence of a dark-matter distribution and supernova feedback (e.g., Silich & Tenorio-Tagle, 1998; Mac Low & Ferrara, 1999; Fragile et al., 2003; Wada & Venkatesan, 2003; Marcolini et al., 2006; Stinson et al., 2007; Revaz et al., 2009; Ruiz et al., 2013; Recchi, 2014; Caproni et al., 2015; Emerick et al., 2016; Caproni et al., 2017; Emerick et al., 2020), aiming to verify the influence of the BCs on the gas removal efficiency.
This paper is structured as follows. In Section 2, we describe the three BCs analyzed in this work. Initial setup and the general results from the HD simulations performed in this work are presented in Section 3 and discussed in Section 4. The main conclusions obtained in this work are listed in Section 5.
## 2 The Selective Boundary Condition
Before introducing our selective boundary condition (SBC), it is useful to present the main characteristics of the additional boundary conditions (BCs) used in this work.
Let \(\rho\), \(P\), and \(\boldsymbol{v}\) be the mass density, the thermal pressure, and the velocity of a fluid element at a position \(\boldsymbol{r}\) measured in a given reference frame. The former three quantities are usually referred to as primitive variables in HD problems. For grid numerical simulations, the region of interest is discretized on computational cells, where the HD equations are evolved in space and time. The region of interest, or simply the computational domain, is enclosed by numerical boundaries. Boundary conditions are implemented numerically by the usage of guard or ghost cells adjacent to the boundaries of the computational domain.
Let us also define \(\hat{\boldsymbol{n}}\) as the unit vector orthogonal to the boundaries of a computational domain, always pointing outwards by convention. In the case of the open boundary condition (OBC), also known as outflow BC, the gradient of any primitive variable across the boundary along \(\hat{\boldsymbol{n}}\) is set equal to zero (e.g., Mignone et al., 2007).
Caproni et al. (2015) and Caproni et al. (2017) adopted closed boundary conditions (CBC) in their HD simulations. The CBC differs from open boundaries only in terms of the values of \(\boldsymbol{v}\) at the boundaries: all velocity components were set to zero in Caproni et al. (2015), while the null value was set only for the velocity component parallel to \(\hat{\boldsymbol{n}}\), \(\boldsymbol{v}_{n}=\boldsymbol{v}\cdot\hat{\boldsymbol{n}}\), in Caproni et al. (2017). Those authors adopted such boundary conditions to prevent the frontiers of the computational domains in their simulations from behaving as an infinite reservoir of matter due to the dark-matter gravitational potential (see Section 3 for further discussion). A similar BC was also adopted by Fragile et al. (2003), where the density and temperature at static boundaries (\(\boldsymbol{v}=0\)) are kept fixed at their initial values.
The SBC (also known as diode BC; e.g., Fryxell et al., 2000; Zingale et al., 2002) is a variant of the CBC adopted in Caproni et al. (2017), in the sense that if a fluid element reaching the boundary is moving outwards and has a speed higher than a predefined threshold value, \(v_{\rm th}\), the CBC is switched to OBC at that location. In other words, the selective boundaries allow fluid elements that are moving fast enough to leave the computational domain; otherwise the SBC blocks their passage, keeping them inside the domain. Thus, the SBC can be defined as follows
\[SBC\!=\!\left\{\begin{aligned} & OBC,\quad\text{if}\;\;( \boldsymbol{v}\cdot\hat{\boldsymbol{n}}>0\;\text{and}\;|\boldsymbol{v}\cdot \hat{\boldsymbol{n}}|>v_{\rm th})\\ & CBC,\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{otherwise} \end{aligned}\right. \tag{1}\]
where \(SBC\) is the boundary condition to be used at a given position on the boundaries in a given time step, and \(|\boldsymbol{v}\cdot\hat{\boldsymbol{n}}|\) (\(=|\boldsymbol{v}_{n}|\)) is the absolute value of \(\boldsymbol{v}_{n}\) in a given cell adjacent to the boundary.
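As an illustration, Equation (1) can be applied to the ghost cells of one face of a Cartesian grid as in the sketch below (Python; the array layout, the reflective realization of the closed branch, and the function name are our own choices and do not reproduce the PLUTO user-defined boundary interface):

```python
import numpy as np

def apply_sbc_x_max(rho, P, v, v_th):
    """Selective boundary condition on the +x face of a 3D grid with one ghost layer.

    rho, P : arrays of shape (nx+2, ny+2, nz+2); v : array of shape (3, nx+2, ny+2, nz+2).
    Where v.n > v_th the face behaves as an open (zero-gradient) boundary; otherwise the
    normal velocity in the ghost cell is reflected, so the face acts as a closed wall.
    """
    last = -2                            # last active cell along x; -1 is the ghost layer
    vn = v[0, last, :, :]                # velocity component orthogonal to the +x face
    outflow = vn > v_th                  # Eq. (1): outgoing and faster than the threshold
    rho[-1, :, :] = rho[last, :, :]      # zero-gradient copy of the primitive variables
    P[-1, :, :] = P[last, :, :]
    v[:, -1, :, :] = v[:, last, :, :]
    v[0, -1, :, :] = np.where(outflow, v[0, last, :, :], -v[0, last, :, :])
    return rho, P, v
```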
## 3 HD Numerical Simulations
### Initial setup
Aiming to test the impact of the boundaries on the evolution of the gas content inside an isolated (dwarf spheroidal) galaxy, we decided to use in our simulations an initial gas configuration similar to that found in Caproni et al. (2017). In a few words, an isothermal gas is put in hydrostatic equilibrium with a cored, static dark-matter gravitational potential (e.g., Equation 6 in Caproni et al., 2017), so that its density distribution peaks at the center of the gravitational potential well and decreases radially as the galactocentric distance increases.
Adopting a dwarf galaxy as a proxy for an isolated galaxy in our simulations avoids working with large computational domains, since dwarf galaxies are relatively small in size, with tidal radii roughly below a few thousand parsecs (e.g., Mateo, 1998). Consequently, it allows high numerical resolution experiments without a large number of computational cells, substantially decreasing the execution times involved.
We show in Table 1 the main physical characteristics of the isolated galaxy used in our numerical simulations. These values are compatible with those inferred for the classical dwarf spheroidal galaxy Ursa Minor.
### Perturbing the galactic gas: types Ia and II supernovae feedback
The initial gas distribution is perturbed by supernova (SN) blasts in our simulations. We basically followed Caproni et al. (2017) for the SN feedback recipe, although the new version of our code used in this work treats type Ia and type II supernovae independently1. In a few words, the rates of type Ia and type II SNe in our simulations were constrained by the chemical evolution model for the Ursa Minor galaxy (Lanfranchi & Matteucci, 2004, 2007), a typical classical dwarf spheroidal galaxy in the Local Group. The imposed type Ia and type II SNe rates are strictly respected throughout the simulations, telling the code when an SN event must occur. On the other hand, where an SN event takes place depends on its type: denser regions are more prone to be selected for harboring a type II SN blast, while type Ia SNe are distributed randomly inside the galaxy. Independent of the type of supernova, an internal energy of \(10^{51}\) erg is added to the computational cell selected as the SN site.
Footnote 1: Further details of this new approach will be provided in a future paper in preparation (Lanfranchi et al., 2023).
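The site-selection rule described above can be sketched as follows (Python; weighting type II sites by the local gas density is our own reading of "denser regions are more prone to be selected", not the exact prescription of the code):

```python
import numpy as np

def choose_sn_site(rho, sn_type, rng=None):
    """Pick the computational cell hosting the next SN event inside the galaxy.

    rho     : gas density array restricted to the galactic region
    sn_type : "II" (density-weighted choice) or "Ia" (uniform random choice)
    """
    rng = rng or np.random.default_rng()
    flat = rho.ravel()
    if sn_type == "II":
        prob = flat / flat.sum()         # denser cells are more likely to be chosen
    else:
        prob = np.full(flat.size, 1.0 / flat.size)
    idx = rng.choice(flat.size, p=prob)
    return np.unravel_index(idx, rho.shape)
```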
The SN feedback injects momentum into the interstellar medium, producing a net motion of the gas directed outward from the galaxy. These galactic winds drive the gas losses that the simulated galaxy experiences as time evolves. A portion of this galactic wind can reach the boundaries after a given interval, so we must be concerned about the influence of the chosen boundaries on it.
### Boundary conditions and instantaneous gas-loss rates
All the numerical HD simulations performed in this work made use of the PLUTO code2 (Mignone et al., 2007) in its version 4.2. The classical hydrodynamic differential equations are evolved in time by a third-order Runge-Kutta algorithm, while the primitive variable reconstruction is done by a piecewise parabolic method (Colella & Woodward, 1984). The flux computation among numerical cells was done by the advection upstream splitting method (AUSM+; Liou, 1996). We also assumed that the gas obeys the ideal equation of state and is under the influence of a cored dark-matter gravitational potential and of cooling processes (see Caproni et al., 2017 for additional details).
Footnote 2: [http://plutocode.ph.unito.it/](http://plutocode.ph.unito.it/)
We performed 16 HD numerical simulations to study the impact of the boundaries on the gas-loss rates. The main characteristics of these simulations are listed in Table 2. They include two simulations adopting OBC (OBL60N170 and OBL3N100), one simulation with CBC (CBL3N100), and the remaining 13 simulations used to test the behavior of the SBC.
Except for the simulations OBL60N170, V64L12N200, V64L6N200, and V64L6N250, the computational domain consisted of a cubic box of size \(L\) equal to 3 kpc. Numerical resolution in our simulations, \(l\), was set from 15 to 60 pc per cell, which implied a num
\begin{table}
\begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ Parameter} & Value \\ \hline Dark-matter halo mass inside \({R_{200}}\)\({}^{a}\) (M\({}_{\odot}\)) & 3.1\(\times 10^{9}\) \\ \(R_{200}\) (kpc) & 30.5 \\ Characteristic radius of the dark matter halo (kpc) & 0.3 \\ Maximum circular velocity due to dark matter (km s\({}^{-1}\)) & 21.1 \\ Escape velocity at \(R_{200}\) (km s\({}^{-1}\)) & 64 \\ Initial gas mass inside 950 pc\({}^{b}\) (M\({}_{\odot}\)) & 6.1\(\times 10^{8}\) \\ Initial gas temperature (K) & 5214 \\ Initial gas number density\({}^{c}\) (cm\({}^{-3}\)) & 187 \\ \hline \end{tabular} \({}^{a}\)The radius enclosing overdensity of 200 in relation to the critical density of the universe.
\({}^{b}\)The tidal radius of the galaxy.
\({}^{c}\)At the center of the galaxy.
\end{table}
Table 1: Some physical parameters of the isolated galaxy used in our simulations.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Name & \(N_{\rm cell}\) & \(L\) & \(l\) & \(v_{\rm th}\) & Uniform Grid? \\ & & (kpc) & (pc cell\({}^{-1}\)) & (km s\({}^{-1}\)) & \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline OBL3N100 & 100 & 3 & 30 & \(-\)\({}^{a}\) & yes \\ OBL60N170 & 170 & 60 & \(-\)\({}^{b}\) & \(-\)\({}^{a}\) & no\({}^{b}\) \\ CBL3N100 & 100 & 3 & 30 & \(-\)\({}^{c}\) & yes \\ V64L12N200 & 200 & 12 & \(-\)\({}^{d}\) & 64.0 & no\({}^{d}\) \\ V64L6N250 & 250 & 6 & \(-\)\({}^{e}\) & 64.0 & no\({}^{e}\) \\ V64L6N200 & 200 & 6 & 30 & 64.0 & yes \\ V64L3N100 & 100 & 3 & 30 & 64.0 & yes \\ V32L3N100 & 100 & 3 & 30 & 32.0 & yes \\ V16L3N100 & 100 & 3 & 30 & 16.0 & yes \\ V8L3N100 & 100 & 3 & 30 & 8.0 & yes \\ V4L3N100 & 100 & 3 & 30 & 4.0 & yes \\ V2L3N200 & 200 & 3 & 15 & 2.0 & yes \\ V2L3N100 & 100 & 3 & 30 & 2.0 & yes \\ V2L3N50 & 50 & 3 & 60 & 2.0 & yes \\ V1.5L3N100 & 100 & 3 & 30 & 1.5 & yes \\ V1L3N100 & 100 & 3 & 30 & 1.0 & yes \\ \hline \end{tabular} Note. β(1) Label of the simulation. (2) Total number of computational cells per Cartesian direction. (3) Linear size of the cubic computational domain in each Cartesian direction. (4) Numerical resolution. (5) Threshold value of the orthogonal velocity component at the boundaries used in the SBC simulations. (6) Whether the domain was divided uniformly in each direction.
\end{table}
Table 2: Main characteristics of the three-dimensional HD simulations performed in this work.
ber of cells per axis, \(N_{\rm cell}\), between 50 and 250. Except for OBL60N170, V64L12N200, and V64L6N250, all numerical experiments were performed using a uniform grid. All simulations were conducted on the Brazilian supercomputers SDumont 3 and LAI 4. A total of \(\sim\)3.4\(\times\)10\({}^{6}\) processor hours were required to run all the simulations presented in this work.
Footnote 3: [http://sdumont.lncc.br](http://sdumont.lncc.br)
Footnote 4: lai.iag.usp.br
#### 3.3.1 Comparing open, closed and selective boundary condition simulations
Following Caproni et al. (2015), we estimated the instantaneous gas mass inside a galactocentric radius \(R_{\rm gal}=950\) pc (compatible with the tidal radius of the Ursa Minor dSph galaxy; e.g., Irwin & Hatzidimitriou, 1995), after numerically integrating the mass density distribution obtained in the simulations
\[M_{\rm gas}(t)=\int\int_{V_{\rm gal}}\int\rho(x,y,z,t)dxdydz, \tag{2}\]
where \(M_{\rm gas}\) is the total gas mass at a time \(t\) inside a spherical volume \(V_{\rm gal}\) with a radius of \(R_{\rm gal}\).
The instantaneous mass fraction of the gas inside \(R_{\rm gal}\), \(f_{\rm gas}\), is calculated from
\[f_{\rm gas}(t)=\frac{M_{\rm gas}(t)}{M_{\rm gas,0}}, \tag{3}\]
where \(M_{\rm gas,0}\) is the initial gas mass inside \(R_{\rm gal}\) (see Table 1).
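On a uniform Cartesian grid, the integrals in Equations (2) and (3) reduce to a masked sum over the cells whose centers lie inside \(R_{\rm gal}\). A minimal sketch (assuming cell-center coordinate arrays in the galactocentric frame and cgs units) is given below.

```
import numpy as np

PC_CM = 3.086e18   # parsec in centimeters

def gas_mass_fraction(rho, x, y, z, cell_volume, m_gas_0, r_gal=950.0 * PC_CM):
    """Discrete version of Eqs. (2)-(3): gas mass fraction inside R_gal.

    rho         : 3D density array (g cm^-3) at the current time step.
    x, y, z     : 3D arrays of cell-center galactocentric coordinates (cm).
    cell_volume : volume of one uniform cell (cm^3).
    m_gas_0     : initial gas mass inside R_gal (g).
    """
    inside = x**2 + y**2 + z**2 <= r_gal**2
    m_gas = rho[inside].sum() * cell_volume   # Eq. (2) as a sum over cells
    return m_gas / m_gas_0                    # Eq. (3)
```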
We show in Figure 1 the behavior of \(f_{\rm gas}\) for three distinct simulations: OBL3N100, CBL3N100, and V2L3N100 (see Table 2 for further details). They show similar decreasing rates in the gas mass fraction due to SN feedback during the first 600 Myr of evolution. After this interval, the situation changes dramatically: the OBC induces the breaking of the previous monotonic trend due to the rise of strong inflows of matter, which leads to extremely high (and nonphysical!) masses in comparison with what would be expected for a dwarf galaxy. This issue was already found by Caproni et al. (2015) in their simulations: the OBC acts as an infinite reservoir of matter, which provides gas whenever the pressure equilibrium within the computational domain is broken due to the domain discretization (e.g., Zingale et al., 2002).
On the other hand, the decrease of the amount of gas still remains after 600 Myr for both CBL3N100 and V2L3N100 runs, even though the loss rates lower gradually until they become almost null after \(\sim\)1.6 Gyr. There is also a systematic offset between the gas mass fractions from CBL3N100 and V2L3N100 runs, the former presenting a substantially higher value after an elapsed time of 3 Gyr (\(\sim\)0.405 against \(\sim\)0.18 for V2L3N100). The reason is that gas flows reaching boundaries with speeds higher than \(v_{\rm th}=2\) km s\({}^{-1}\) are allowed to leave the computational domain, while the CBC retains them, independent of how fast the gas flow is.
Footnote 5: This value is in good agreement with that found by Caproni et al. (2017) using a higher numerical resolution (\(\sim\)12 pc) in comparison with the simulations presented in this work.
The quasi-saturation in the gas removal is detected in both CBL3N100 and V2L3N100 runs. The same result was also found by Caproni et al. (2017) (see their Figure 4). Caproni et al. (2017) also pointed out that reverse shocks generated when the SN-driven gas reaches the computational boundaries could decrease the inferred gas losses due to the gas retention. From a simple analytical calculation using the escape velocity associated with the dark-matter (DM) halo, these authors estimated that the CBC decreased the gas losses by a factor of \(\sim\)2.5 after 3 Gyr of evolution.
An alternative way to estimate the influence of the computational frontiers on the results is to put them farther and compare the amount of gas left inside the galaxy after a certain elapsed time. Thus, we run two additional simulations, V64L6N200 and V64L12N200, in which the computational domain was extended, respectively, by a factor of 2 and 4 in relation to the original box size. These two simulations were compared with the V64L3N100 run, in which \(v_{\rm th}\) was set to 64 km s\({}^{-1}\) (the DM escape velocity used by Caproni et al. (2017) in their analytical calculations). The time evolution of the gas mass fraction inside \(R_{\rm gal}\) for these three simulations is shown in Figure 2.
Until \(\sim\)500 Myr, these three simulations produced the same instantaneous gas mass fractions, as can be seen in Figure 2. After that, the discrepancy between V64L3N100 and the other two runs increases monotonically until approximately 1.5 Gyr, when such differences stabilize around a factor of about 2.7. Note this factor is compatible with the value estimated previously by Caproni et al. (2017) from arguments based on the comparison between the escape velocity of the dark-matter halo and the velocities in the adjacent cells at the computational boundaries.
Simulations V64L6N200 and V64L12N200 agree with each other, indicating that a computational domain with a size of 6 kpc (\(\sim\)6 times the tidal radius of our fiducial galaxy) is enough to minimize boundary effects on the gas losses in similar simulations performed in this
work. Besides, a low-amplitude oscillatory behavior in the curves concerning these two simulations is seen in Figure 2. We have interpreted this as the result of the competition between outward galactic winds driven by SNe and the DM halo's gravity, which tries to push gas back into the galaxy. In other words, these oscillations could be the numerical realization of the "keeping the gas spread and heated" suggested previously in the literature (e.g., Read et al., 2006; Caproni et al., 2017).
Even though Figure 2 indicates some convergence in the gas mass fraction derived from the simulations V64L6N200 and V64L12N200, their domain dimensions are small in comparison to \(R_{200}\) (\(\sim\)30 kpc; see Table 1) of the dark-matter halo of our fiducial galaxy. To verify whether these results are indeed representative of the gas losses driven by supernovae, we run an additional OBC simulation, OBL60N170, with a length of 60 kpc (\(\sim\)2\(R_{200}\)) in each Cartesian direction. Its derived instantaneous mass fraction is shown by a dashed black line in Figure 2, indicating a behavior quite similar to that inferred from the simulations V64L6N200 and V64L12N200. Thus, a domain size of about 6 kpc seems to be enough for HD simulations of isolated galaxies with properties similar to those listed in Table 1.
#### 3.3.2 The impact of the threshold velocity on the selective boundary condition simulations
As discussed in the previous section, the usage of \(v_{\rm th}\) = 64 km s\({}^{-1}\) in SBC simulation V64L3N100
Figure 1: Instantaneous gas mass fraction inside a galactocentric radius of 950 pc (tidal radius of the galaxy) for the simulations using OBC (OBL3N100, blue circles), CBC (CBL3N100, green triangles), and SBC with \(v_{\rm th}\) = 2 km s\({}^{-1}\) (V2L3N100, red squares). These three simulations were made considering a cubic domain of \(3^{3}\) kpc\({}^{3}\).
Figure 2: Instantaneous gas mass fraction inside a galactocentric radius of 950 pc (tidal radius of the galaxy) for SBC simulations with different sizes of the computational domain: \(L=12\) kpc (V64L12N200, blue circles), \(L=6\) kpc (V64L6N200, green triangles), and \(L=3\) kpc (V64L3N100, red squares). All these three runs adopt \(v_{\rm th}\) = 64 km s\({}^{-1}\). Dashed black line represents the results from an OBC simulation with \(L=60\) kpc (OBL60N170).
increased the amount of gas left after 3 Gyr of evolution by a factor of 2.7 in comparison to the simulations V64L6N200 and V64L12N200, which made use of larger computational domains. A question that arises is whether it is possible to recover the results found in larger box simulations just by varying the value of \(v_{\rm th}\) in the SBC simulations. Thus, we run seven additional simulations with the same initial setup and resolution as V64L3N100, but decreasing the value of \(v_{\rm th}\) mostly by multiples of 2. A comparison among these simulations, as well as with the larger box simulation V64L12N200, can be seen in Figure 3.
Again, no apparent differences among all simulations in Figure 3 are noted until approximately 500 Myr of evolution. After this interval, the instantaneous amount of gas left inside \(R_{\rm gal}\) decreases systematically as \(v_{\rm th}\) is lowered from 64 to 1 km s\({}^{-1}\). The largest differences occur when \(v_{\rm th}\lesssim 4\) km s\({}^{-1}\), indicating that only a small portion of the gas that is pushed away by the SNe reaches the boundaries with speeds higher than \(\sim\)4 km s\({}^{-1}\).
It can be seen in Figure 3 that simulations V2L3N100 and V1.5L3N100 led to gas mass fractions closest to that obtained in the simulation V64L12N200, indicating that the appropriate value of \(v_{\rm th}\) must be roughly between 1.5 km s\({}^{-1}\) and 2.0 km s\({}^{-1}\) for simulations with a cubic domain of 3\(\times\)3\(\times\)3 kpc\({}^{3}\).
#### 3.3.3 Effects of the numerical resolution
Adopting simulation V2L3N100 as a reference, we multiplied (divided) the number of computational cells by a factor of 2, keeping all other parameters fixed, and generated simulation V2L3N200 (V2L3N50). It
Figure 4: Instantaneous gas mass fraction inside a galactocentric radius of 950 pc for the SBC simulations V2L3N200 (blue circles), V2L3N100 (green triangles), and V2L3N50 (red squares). All these simulations use the same value for \(v_{\rm th}\) (= 2 km s\({}^{-1}\)), but differing in terms of numerical resolution. The solid black line shows the results from the simulation V64L6N250.
Figure 3: Instantaneous gas mass fraction inside a galactocentric radius of 950 pc (tidal radius of the galaxy) for SBC simulations with different threshold speeds (from 64 to 1 km s\({}^{-1}\)). Simulation V64L12N200 is also plotted (purple circles).
means a change in the numerical resolution from 30 pc cell\({}^{-1}\) to 15 pc cell\({}^{-1}\) in the case of V2L3N200, while a resolution of 60 pc cell\({}^{-1}\) is attained for the simulation V2L3N50. We show in Figure 4 the influence of the numerical resolution on the instantaneous amount of gas left inside the galaxy. No difference in the mass fraction among these three simulations is seen during the first 200 Myr, when V2L3N50 begins to show a higher mass-loss rate in comparison to the other ones. This trend is inverted after about 1 Gyr and remains so until the end of the simulations.
Concerning simulations V2L3N100 and V2L3N200, there is no significant difference between them until \(\sim\)1 Gyr, but after 1.5 Gyr, \(f_{\rm gas}\) decreases slowly in V2L3N200, in contrast with simulation V2L3N100, which presents small-amplitude oscillations around \(f_{\rm gas}\sim 0.185\). The increment in the numerical resolution from 30 to 15 pc cell\({}^{-1}\) allowed the snowplow transition radius to be resolved for number densities as low as 1 cm\({}^{-3}\) (e.g., Cioffi et al., 1988; Ostriker and McKee, 1988), avoiding overcooling issues that weaken the kinetic feedback from supernovae (e.g., Creasey et al., 2011; Simpson et al., 2015; Caproni et al., 2017). Thus, a larger fraction of the gas reached speeds higher than the threshold speed of 2 km s\({}^{-1}\) in simulation V2L3N200, definitively leaving the computational domain.
At this point, it is interesting to verify whether the monotonic decrease of \(f_{\rm gas}\) in V2L3N200 is due to a rather low value of \(v_{\rm th}\) in SBC. For this aim, we run an additional simulation, V64L6N250, where we doubled the size of the computational domain but kept the numerical resolution of 15 pc per cell inside a cubic subdomain of 3 kpc in size (see Table 2 for further details). The behavior of \(f_{\rm gas}\) as a function of time is shown in Figure 4 by the black solid line. No monotonic decrease of \(f_{\rm gas}\) after 1.5 Gyr is seen, but there are small-amplitude variations around \(f_{\rm gas}\sim 0.185\) instead, as in simulation V2L3N100. It suggests that the value \(v_{\rm th}=2\) km s\({}^{-1}\) adopted in V2L3N200 is somewhat underestimated. Based on the results shown in Figure 3, an increment of about 0.5-1.0 km s\({}^{-1}\) in \(v_{\rm th}\) might be enough to reconcile simulations V2L3N200 and V64L6N250.
Finally, we can also note that the increment of numerical resolution led to a lower amount of gas left inside the galaxy after 3 Gyr of evolution, differing by a factor of \(\sim\)2.5 between the lowest- and highest-resolution simulations in Figure 4 (V2L3N50 and V2L3N200, respectively). This difference is not too large if we consider the usual uncertainties regarding the estimates of the mass in stars, gas, and dark matter in galaxies, as well as the relatively poor knowledge concerning the individual efficiencies of the feedback mechanisms in removing gas from those systems. As already mentioned, a small fine-tuning of the value of \(v_{\rm th}\) can diminish or even eliminate those discrepancies.
## 4 Discussion
Our results showed that no influence of the BCs on the instantaneous gas-loss rates is observed until \(\sim\)600 Myr in the simulations discussed in Section 3.3. It suggests that noncosmological grid-based HD simulations involving isolated galaxies will not be substantially influenced by the choice of a particular BC if the simulated time is less than or of the order of a few hundred Myr, as is the case in several previous works involving different types of galaxies (e.g., Mac Low and Ferrara, 1999; Fragile et al., 2003; Wada and Venkatesan, 2003; Fragile et al., 2004; Melioli et al., 2015; Melioli and de Gouveia Dal Pino, 2015; Emerick et al., 2019, 2020). For analogous HD simulations involving longer timescales of evolution, the usage of SBC may be a useful alternative that avoids sacrificing the numerical resolution and/or increasing the computational costs that would come from putting the numerical frontiers far from the galaxy (e.g., Marcolini et al., 2006; Mori and Burkert, 2000; Emerick et al., 2016). To provide a sense of the gain in CPU time, we run four additional simulations evolved during 200 Myr on a workstation equipped with 128 2.2 GHz processors. Two of these simulations adopt SBC with \(v_{\rm th}=2\) km s\({}^{-1}\), while the two complementary ones use OBC. Besides, domain volumes of \(3\times 3\times 3\) and \(60\times 60\times 60\) kpc\({}^{3}\) were built for both SBC and OBC. In the case of the small volume domain (\(3\times 3\times 3\) kpc\({}^{3}\)), 60 computational cells per Cartesian axis were generated, implying a numerical resolution of 50 pc per cell. For the larger volume simulations, we kept the same numerical resolution of 50 pc per cell between -1.5 and 1.5 kpc, but decreased this resolution nonmonotonically until it reaches the numerical boundaries at -30 and 30 kpc, leading to 102 cells per Cartesian direction. The results can be summarized as follows:
* When the size of the computational domain is fixed, the elapsed time to complete a simulation does not depend strongly on the assumed BC: in the case of a domain size of 3 kpc, \(\sim\)12.8 and 13.1 hours for SBC and OBC, respectively; for a 60-kpc box, the elapsed times were \(\sim\)48.7 and 48.6 hr for SBC and OBC, respectively;
* However, a larger domain implies a substantially longer time for the completion of the simulation. For instance, a larger computational domain with OBC led to a longer execution time by a factor of \(\sim\)3.7. Even though this factor is smaller than the ratio between the total number of cells used
in the simulations, \((102/60)^{3}\sim 4.9\), it shows that larger domains imply higher computational costs that could become prohibitive if high-performance computational resources are not accessible in practice.
Besides preventing the frontiers of the computational domain from behaving as an infinite reservoir of matter in simulations with gravity, the SBC can decrease (or even eliminate) the occurrence of reversing flows of matter caused by a pure CBC. As any flow colliding with a CBC has its normal velocity reversed, it induces spurious backflows of matter that can modify the previous gas motions at the interacting zones, as well as the physical conditions (density and temperature) of the gas (mainly if the created backflows become strong shocks). Note also that these reversing flows are expected to occur even in the absence of gravity forces.
Figure 5: Effects of the SBC (upper panels) and OBC (bottom panels) on the stability of the initial gas configuration under hydrostatic equilibrium with a cored DM gravitational potential after 500 Myr of evolution and considering a cubic domain of 3 kpc\(\times\)3 kpc\(\times\)3 kpc. Left panels display the number density distribution on \(X=0\) plane, while the right panels show the histogram of the absolute velocity of the gas in the whole computational domain for both BCs.
The choice of a particular BC may also influence the stability of the initial gas configuration under hydrostatic equilibrium with a gravitational potential. To quantify this effect on grid-based simulations of isolated galaxies, we rerun simulations OBL3N100 and V2L3N100 for 500 Myr, turning off the SN feedback during the whole simulation. The results are shown in Figure 5. We note in the case of SBC (upper panels in Figure 5) that the initial gas distribution is well preserved during the whole simulation, with spurious speeds lower than 0.25 km s\({}^{-1}\) (\(\sim\)50 percent of the cells have speeds lower than 20 m s\({}^{-1}\)). On the other hand, the usage of OBC in a relatively small computational domain (lower panels in Figure 5) induces catastrophic inflows of gas that destroy the initial spherically symmetric cored distribution of the gas, producing spurious speeds as high as some tens of kilometers per second. To reduce such spurious motions using OBC, larger computational domains are needed.
The magnitude of the spurious accretion also depends on the numerical resolution, as pointed out previously by Zingale et al. (2002). We analyzed the impact of the numerical resolution on the time stability of the initial gas configuration by rerunning simulations V2L3N50, V2L3N100, and V2L3N200 without SN feedback, also including an extra simulation with a lower numerical resolution in comparison with the previous ones (\(l=150\) pc cell\({}^{-1}\)). We show in Figure 6 the behavior of the spurious speeds in terms of numerical resolution after 500 Myr of evolution considering SBC. Trends of increasing mean and maximum spurious speeds with decreasing numerical resolution are clearly seen in Figure 6, even though their values are always very small (\(\lesssim 1\) km s\({}^{-1}\)) in comparison to the OBC simulation shown in Figure 5. These results suggest that SBC may also be useful in numerical problems somehow involving the hydrostatic equilibrium condition.
## 5 Conclusions
In this work, we studied the influence of the computational frontiers on the gas removal process in (small) galaxies. The option for using an initial configuration compatible with a typical dwarf galaxy (tidal radius of about 1 kpc) is justified by the need to keep the computational domain as small as possible without substantially sacrificing the numerical resolution, while also keeping the computational costs relatively low. Three different boundary conditions were employed in this work: open (or outflow), closed, and selective boundary conditions. The 16 hydrodynamic simulations with types Ia and II supernovae feedback performed in this work adopted a cubic domain where the galactic center coincides with the center of the computational box. The majority of these simulations have frontiers put at a galactocentric distance corresponding to \(\sim 1.6R_{\rm gal}\). Our main results are summarized as follows.
* No difference in the gas mass fraction left inside the galaxy is noted until about 600 Myr of evolution, independently of which of the three boundary conditions analyzed in this work is adopted. It suggests that similar simulations involving short periods of time can adopt open boundary conditions without any loss of integrity of the results;
* After 600 Myr of evolution, open boundary conditions for a relatively small computational box (sizes smaller than \(\sim 3R_{\rm gal}\) or about 10 times the characteristic radius of the galactic dark-matter halo) act as an infinite reservoir of gas due to dark-matter gravity whenever the pressure equilibrium within the computational domain is broken due to the domain discretization (e.g., Zingale et al., 2002). In this case, closed or selective boundary conditions are preferable if enlarging the computational domain is somehow unfeasible;
* As already expected (e.g., Caproni et al., 2015), closed frontiers tend to retain more gas in comparison to the selective boundary condition, impacting the amount of mass left inside the galaxy by a factor of approximately 2 (see Figure 1);
* Concerning the influence of the value of \(v_{\rm th}\) used in the selective boundary condition simulations, no difference in \(f_{\rm gas}\) is seen until approximately 500 Myr of evolution. It remains true until 3 Gyr for the simulations using \(v_{\rm th}\gtrsim 8\) km s\({}^{-1}\), coinciding
Figure 6: Maximum and mean spurious speeds (blue and orange curves, respectively) as a function of the numerical resolution of non-SN feedback simulations with SBC after 500 Myr of evolution.
with the results from the closed boundary simulation. For the simulations with \(v_{\rm th}\lesssim 4\) km s\({}^{-1}\), the instantaneous amount of gas left inside the galaxy decreases systematically as \(v_{\rm th}\) is lowered;
* For \(v_{\rm th}\lesssim 1.5\) km s\({}^{-1}\), \(f_{\rm gas}\) decreases with time, in contrast with SBC simulations with higher \(v_{\rm th}\) that present a plateau-like behavior after \(\sim\)1.5 Gyr of evolution. Numerical simulations with larger computational domains (\(\gtrsim 6R_{\rm gal}\)) show similar plateau-like behavior, but showing also a small-amplitude oscillation around \(f_{\rm gas}\sim 0.185\) possibly produced by the competition between the pull from the dark-matter gravitational potential and the push due to the supernova feedback;
* In terms of numerical resolution, our results show no difference in the mass fraction during the first 200 Myr when \(l\) is varied from 60 to 15 pc cell\({}^{-1}\). This interval is extended to about 1 Gyr considering simulations with 30 and 15 pc cell\({}^{-1}\) only. The monotonic decrease of \(f_{\rm gas}\) seen in V2L3N200 is not present in V64L6N250 with a larger computational domain, indicating that \(v_{\rm th}=2\) km s\({}^{-1}\) adopted in V2L3N200 is somewhat underestimated. Based on the results shown in Figure 3, a small increment of about 0.5-1.0 km s\({}^{-1}\) in \(v_{\rm th}\) might be enough to reconcile simulations V2L3N200 and V64L6N250.
* Although the strategy of putting computational frontiers as far as possible from the galaxy is always desirable, our simulations with a selective boundary condition can lead to similar results at a considerably lower computational cost.
As a final remark, even though we have analyzed the influence of the boundary conditions on the gas-loss rates using a dwarf spheroidal galaxy, the SBC strategy can be adopted for any type of galaxy or astrophysical system that demands closed numerical frontiers (e.g., see Lanfranchi et al., 2021 for an application involving SBC in the context of intermediate-mass black hole feedback in dwarf spheroidal galaxies).
## Acknowledgments
A.C., G.A.L., and J.F.S. thank the Brazilian agency FAPESP (grants 2014/11156-4, 2017/25651-5, 2017/25799-2, 2019/21615-0, and 2022/16883-8). The authors acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the SDumont supercomputer ([http://sdumont.lncc.br](http://sdumont.lncc.br)), which have contributed to the research results reported within this paper. This work has made use of the computing facilities of the Laboratory of Astroinformatics (IAG/USP, NAT/UCS), whose purchase was made possible by the Brazilian agency FAPESP (grant 2009/54006-4) and the INCT-A. We would like to thank the anonymous referee for the constructive report. This work also made use of ParaView (Ahrens et al., 2005; Ayachit, 2015) and IDL (Interactive Data Language, [https://www.harrisgeospatial.com/Software-Technology/IDL](https://www.harrisgeospatial.com/Software-Technology/IDL)).
|
2306.10419 | Multilingual Multiword Expression Identification Using Lateral
Inhibition and Domain Adaptation | Correctly identifying multiword expressions (MWEs) is an important task for
most natural language processing systems since their misidentification can
result in ambiguity and misunderstanding of the underlying text. In this work,
we evaluate the performance of the mBERT model for MWE identification in a
multilingual context by training it on all 14 languages available in version
1.2 of the PARSEME corpus. We also incorporate lateral inhibition and language
adversarial training into our methodology to create language-independent
embeddings and improve its capabilities in identifying multiword expressions.
The evaluation of our models shows that the approach employed in this work
achieves better results compared to the best system of the PARSEME 1.2
competition, MTLB-STRUCT, on 11 out of 14 languages for global MWE
identification and on 12 out of 14 languages for unseen MWE identification.
Additionally, averaged across all languages, our best approach outperforms the
MTLB-STRUCT system by 1.23% on global MWE identification and by 4.73% on unseen
global MWE identification. | Andrei-Marius Avram, Verginica Barbu Mititelu, Vasile Păiș, Dumitru-Clementin Cercel, Ștefan Trăușan-Matu | 2023-06-17T20:28:32Z | http://arxiv.org/abs/2306.10419v1 | # Multilingual Multiword Expression Identification Using Lateral Inhibition and Domain Adaptation
###### Abstract
Correctly identifying multiword expressions (MWEs) is an important task for most natural language processing systems since their misidentification can result in ambiguity and misunderstanding of the underlying text. In this work, we evaluate the performance of the mBERT model for MWE identification in a multilingual context by training it on all 14 languages available in version 1.2 of the PARSEME corpus. We also incorporate lateral inhibition and language adversarial training into our methodology to create language-independent embeddings and improve its capabilities in identifying multiword expressions. The evaluation of our models shows that the approach employed in this work achieves better results compared to the best system of the PARSEME 1.2 competition, MTLB-STRUCT, on 11 out of 14 languages for global MWE identification and on 12 out of 14 languages for unseen MWE identification. Additionally, averaged across all languages, our best approach outperforms the MTLB-STRUCT system by 1.23% on global MWE identification and by 4.73% on unseen global MWE identification.
multiword expression identification; multilingual; lateral inhibition; domain adaptation; PARSEME corpus
## 1 Introduction
Natural language processing (NLP) is a significant domain of artificial intelligence, with applications ranging from language translation to text classification and information retrieval. NLP allows computers to interpret and process human language, enabling them to perform tasks such as understanding and responding to questions, summarizing texts, and detecting sentiments. Some phenomena present in language can preclude its correct understanding by machines (and even humans sometimes). Such a phenomenon is represented by multiword expressions (MWEs), which are groups of words that function as a unit and convey a specific meaning that is not the sum of the meanings of the component words (i.e., the expression lacks compositionality). Examples of MWEs include idioms (e.g., "break a leg" is used to wish someone good luck), collocations (e.g., "take an exam"), or compounds (e.g., "ice cream"), different authors assuming a more comprehensive or a narrower meaning of this term. The number of MWEs in a language is relatively high. The authors of [1] synthesized papers reporting the number or proportion of MWEs in different languages: English--with an almost equal number of MWEs and single words; French--with 3.3 times greater number of MWE adverbs than that of single adverbs and 1.7 times greater number of MWE verbs than that of single verbs; and Japanese--in which 44% of the verbs are MWEs. Despite being so numerous in the dictionary, MWEs' frequency in corpora is low [2].
Identifying and processing MWEs is crucial for various NLP tasks [3]. In machine translation, for instance, the correct translation of an MWE often depends on the specific context in which it appears. Suppose an MWE is translated rather than appropriately localized for the target language. In that case, the resulting translation may be difficult to understand for native speakers or may convey a wrong meaning [4]. In text classification tasks, MWEs are considered essential clues regarding the sentiment or topic of a text [5]. Additionally, to improve the accuracy of search engines in information retrieval, MWEs can help disambiguate the meaning of a query [6].
Acknowledged recent progress in the field has been made by the PARSEME community [7], which evolved from the COST action with the same name, where the topics of interest were parsing and MWEs ([https://typo.uni-konstanz.de/parseme/](https://typo.uni-konstanz.de/parseme/) last accessed on 21 April 2023). There are two significant outcomes of their activity, (i) a multilingual corpus annotated for verbal MWEs (VMWEs) in 26 languages by more than 160 native annotators, with three versions so far ([https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2282](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2282), [https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2842](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2842), [https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3367](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3367) last accessed on 21 April 2023) [8; 9; 10]; and (ii) a series of shared tasks (also three editions so far) dedicated to the automatic and semi-supervised identification of VMWEs in texts [11; 12; 13], in which the previously mentioned corpora were used for training and testing the participating systems.
Developing systems that can handle multiple languages is another important NLP area. In particular, the ability to accurately process and analyze text in various languages is becoming increasingly important as the world becomes more globalized and interconnected. For example, multilingual NLP systems can improve machine translation, allowing computers to translate text from one language to another accurately. This can be particularly useful in situations where there is a need to communicate with speakers of different languages, such as in global business or international relations. In addition to its practical applications, multilingual NLP is an important area of study from a theoretical perspective. Research in this field can help shed light on the underlying principles of language processing and how these principles differ across languages [14; 15].
Multilingual Transformer models have become a popular choice for multilingual NLP tasks due to their ability to handle multiple languages and achieve strong performance on a wide range of tasks. Based on the Transformer architecture [16], these models are pre-trained on large amounts of multilingual data and can be fine-tuned for specific NLP tasks, such as language translation or text classification. Some models that have become influential in this area include the multilingual bidirectional encoder from transformers (mBERT) [17], cross-lingual language model (XLM) [18], XLM-RoBERTa (XLM-R) [19], and multilingual bidirectional auto-regressive transformers (mBART) [20]. One of the essential benefits of multilingual Transformer models is their ability to transfer knowledge between languages. These models can learn common representations of different languages, allowing them to perform well on tasks in languages that they have yet to be specifically trained on. Thus, multilingual Transformer models are a good choice for NLP tasks that involve multiple languages, such as machine translation or cross-lingual information retrieval [21].
In this work, we leverage the knowledge developed in the two research areas (i.e., MWEs and multilingual NLP) to improve the results obtained at the PARSEME 1.2 shared task [13]. We explore the benefits of combining them in a singular system by jointly fine-tuning the mBERT model on all languages simultaneously and evaluating it separately. In addition, we try to improve the performance of the overall system by employing two mechanisms, (i) the newly introduced lateral inhibition layer [22] on top of the language model and (ii) adversarial training [23] between languages. For the last mechanism, other researchers have experimented with this algorithm and have shown that it can provide better results with the right setting [24]; however, we are the first to experiment with and show the advantages of lateral inhibition in multilingual adversarial training.
Our results demonstrate that by employing lateral inhibition and multilingual adversarial training, we improve the results obtained by MTLB-STRUCT [25], the best system
in edition 1.2 of the PARSEME competition, on 11 out of 14 languages for global MWE identification and 12 out of 14 languages for unseen MWE identification. Furthermore, averaged across all languages, our highest-performing methodology achieves F1-scores of 71.37% and 43.26% for global and unseen MWE identification, respectively. Thus, we obtain an improvement of 1.23% for the former category and a gain of 4.73% for the latter category compared to the MTLB-STRUCT system.
The rest of the paper is structured as follows. Section 2 summarises the contributions of the PARSEME 1.2 competition and the main multilingual Transformer models. The following section, Section 3, outlines the methodology employed in this work, including data representation, lateral inhibition, adversarial training, and how they were employed in our system. Section 4 describes the setup (i.e., dataset and training parameters) used to evaluate our models. Section 5 presents the results, and Section 6 details our interpretation of their significance. Finally, our work is concluded in Section 7 with potential future research directions.
## 2 Related Work
### Multilingual Transformers
This subsection presents the three most influential multilingual language models (MLLMs): mBERT, XLM, and XLM-R. The mBERT model, similar to the original BERT model [17], is a Transformer model [16] with 12 hidden layers. However, while BERT was trained solely on monolingual English data with an English-specific vocabulary, mBERT is trained on the Wikipedia pages of 104 languages and uses a shared word-piece vocabulary. mBERT has no explicit markers indicating the input language and no mechanism specifically designed to encourage translation-equivalent pairs to have similar representations within the model. Although simple in its architecture, mBERT is often surprisingly robust at generalizing across languages thanks to its multilingual representations, despite not being explicitly trained for cross-lingual generalization. The central hypothesis is that using word pieces common to all languages, which must be mapped to a shared space, may lead to other co-occurring word pieces being mapped to this shared space [26].
XLM resulted from various investigations by its authors into cross-lingual pre-training. They introduce the translation language modeling objective (TLM), which extends the masked language modeling (MLM) objective to pairs of parallel sentences. The reason for doing that is sound and straightforward. Suppose the model needs to predict a masked word within a sentence from a given language. In that case, it can consider that sentence and its translation into a different language. Thus, the model is motivated to align the representations of both languages in a shared space. Using this approach, XLM obtained state-of-the-art (SOTA) results on supervised and unsupervised machine translation using the WMT'16 German-English and WMT'16 Romanian-English datasets [27], respectively. In addition, the model also obtained SOTA results on the Cross-lingual Natural Language Inference (XNLI) corpus [28].
In contrast to XLM, XLM-R does not use the TLM objective and instead trains RoBERTa [29] on a large multilingual dataset extracted from CommonCrawl ([http://commoncrawl.org/](http://commoncrawl.org/) last accessed on 21 April 2023) in 100 languages, totaling 2.5 TB of text. It is trained using only the MLM objective, similar to RoBERTa, the main difference between the two being the vocabulary size, with XLM-R using 250,000 tokens compared to RoBERTa's 50,000 tokens. Therefore, XLM-R is significantly larger, with 550 million parameters, compared to RoBERTa's 355 million parameters. The main distinction between XLM and XLM-R is that XLM-R is fully self-supervised, whereas XLM requires parallel examples that may be difficult to obtain in large quantities. In addition, this work demonstrated for the first time that it is possible to develop multilingual models that do not compromise performance in individual languages. XLM-R obtained similar results to monolingual models on the GLUE [30] and XNLI benchmarks.
### Parseme 1.2 Competition
We present the results obtained by the systems participating in edition 1.2 of the PARSEME shared task [13] on discovering VMWEs that were not present (i.e., were not seen) in the training corpus. We will not focus on the previous editions of this shared task for two reasons, (i) the corpora were different, on the one hand, concerning the distribution of seen and unseen VMWEs in the train/dev/test sets, and, on the other hand, smaller for some languages; and (ii) the focus in the last edition, unlike the first two, was on the systems' ability to identify VMWEs unseen in the train and dev corpora, exploring alternative ways of discovering them. Thus, in a supervised machine learning approach, the systems were supposed to learn some characteristics of seen VMWEs and, based on those, find others in the test dataset.
The competing systems used recurrent neural networks [25; 31; 32; 33], but also exploited the syntactic annotation of the corpus [34; 35], or association measures [34; 35]. The shared task was organized on two tracks, closed and open. The former allowed only for the use of the train and dev sets provided by the organizers, as well as of the raw corpora provided for each language, with sizes between 12 and 2474 million tokens. The latter track allowed for the use of any existing resource for training the system, and examples of such resources are as follows, VMWEs lexicons in the target language or another language (exploited due to their translation in the target language) or language models (monolingual or multilingual BERT [25; 33], XLM-RoBERTa [32]). Only two systems participated in the closed track, while seven participated in the open one.
The best-performing system in the open track is MTLB-STRUCT [25]. It is a neural language model relying on pre-trained multilingual BERT and learning both MWEs and syntactic dependency parsing, using a tree CRF network [36]. The authors explain that the joint training of the tree CRF and a Transformer-based MWE detection system improves the results for many languages.
The second and third places in the same track are occupied by the model called TRAVIS [33], which came in two variants: TRAVISmulti (ranked second), which employs multilingual contextual embeddings, and TRAVISmono (ranked third), which employs monolingual ones. These systems rely solely on embeddings, and no other feature is used. The author claims that the monolingual contextual embeddings are much better at generalization than the multilingual ones, especially concerning unseen MWEs.
## 3 Methodology
In this work, we perform two kinds of experiments, (i) train a model using only the data for a specific language (referred to as monolingual training) and (ii) put multiple corpora from different languages in one place, train the multilingual model on it and then evaluate the trained model on the test set of each language (referred to as multilingual training). For the latter, we also perform additional experiments to improve the results by employing lateral inhibition and adversarial training mechanisms, as depicted in Figure 1.
### Data Representation
BERT has significantly impacted the field of NLP and has achieved SOTA performance on various tasks. Its success can be attributed to the training process, which involves learning from large amounts of textual data using a Transformer model and then fine-tuning it on a smaller amount of task-specific data. The masked language modeling objective used during pre-training allows the model to learn effective sentence representations, which can be fine-tuned for improved performance on downstream tasks with minimal task-specific training data. The success of BERT has led to the creation of language-specific versions of the model for various languages, such as CamemBERT (French) [37], AfriBERT (Afrikaans) [38], FinBERT (Finnish) [39], and RoBERT (Romanian) [40].
The scarceness of data and resources has resulted in recent advances in NLP being limited to English and a few high-resource languages rather than being more widely applicable across languages. To address this issue, MLLMs have been developed and trained using large amounts of unlabeled textual data collected from multiple languages. These models are designed to benefit lower resource languages by leveraging their shared vocabulary, genetic relatedness, or contact relatedness with higher resource languages [41; 42]. Many different MLLMs are available, which vary in terms of their architecture, training objective, data used for pre-training, and the number of languages covered. However, in our experiments, we employ only the mBERT model because it allows us to provide a cleaner comparison with the monolingual BERT models and thus emphasizes the strengths of our approach.
### Lateral Inhibition
The biological process of lateral inhibition represents the capacity of excited neurons to reduce the activity of their neighbors [43]. In the visual cortex, this process is associated with an increased perception under challenging environments, such as low-lighting conditions. Previously, we proposed implementing the lateral inhibition mechanism in artificial neural networks (ANN) to improve the named entity recognition task [22; 44]. The intuition behind introducing this mechanism is that it reduces noise associated with word representations in some instances, such as less frequent words or contexts.
The implementation uses an additional ANN layer that filters the values of a neuron from a previous layer (the word embedding representation) based on values from other adjacent neurons in the previous layer. Equation (1) describes the new layer's forward pass. Here, \(X\) is the layer's input vector (a token embedding representation), \(Diag\) is a matrix with the diagonal set to the vector given as a parameter, \(ZeroDiag\) produces a matrix with the value zero on the main diagonal, and \(W\) and \(B\) represent the weights and bias. \(\Theta\) is the Heaviside function, described in Equation (2). The derivative of the Heaviside function in the backward pass is approximated with the sigmoid function using a scaling parameter \(k\)[45] (see Equation (3)), a method known as surrogate gradient learning [46].
Figure 1: Domain adversarial training algorithm. We have the mBERT feature extractor \(F\) with green, whose role is to generate the token embeddings, the MWE label classifier \(C\) with blue, and the language classifier \(LD\) with orange, whose gradient is reversed and scaled by \(\lambda\) before it is fed into the feature extractor. Additionally, \(C\) has incorporated in its architecture the lateral inhibition mechanism.
\[F(X)=X*Diag(\Theta(X*ZeroDiag(W^{T})+B)) \tag{1}\]
\[\Theta(x)=\begin{cases}1,x>0\\ 0,x\leq 0\end{cases} \tag{2}\]
\[\sigma(x)=\frac{1}{1+e^{-kx}} \tag{3}\]
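A PyTorch sketch of Equations (1)-(3) is given below for illustration; the parameter initialization and the exact surrogate used in the backward pass (here, the derivative of the scaled sigmoid in Equation (3)) are our assumptions and may differ in detail from the released implementation.

```
import torch
import torch.nn as nn

class HeavisideSurrogate(torch.autograd.Function):
    """Hard step in the forward pass; sigmoid-based surrogate gradient backwards."""

    @staticmethod
    def forward(ctx, x, k):
        ctx.save_for_backward(x)
        ctx.k = k
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        sig = torch.sigmoid(ctx.k * x)
        return grad_output * ctx.k * sig * (1.0 - sig), None

class LateralInhibition(nn.Module):
    """Equation (1): gate each embedding dimension by the activity of its neighbors."""

    def __init__(self, hidden_size, k=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(hidden_size, hidden_size))
        self.bias = nn.Parameter(torch.zeros(hidden_size))
        nn.init.xavier_uniform_(self.weight)
        self.k = k

    def forward(self, x):                       # x: (batch, seq_len, hidden_size)
        w = self.weight.t()
        w = w - torch.diag(torch.diagonal(w))   # ZeroDiag(W^T): no self-inhibition
        gate = HeavisideSurrogate.apply(x @ w + self.bias, self.k)
        return x * gate                         # X * Diag(Theta(...))
```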
### Adversarial Training
In recent years, adversarial training of neural networks has had a significant influence, particularly in computer vision, where generative unsupervised models have demonstrated the ability to generate new images [47]. A crucial challenge in adversarial training is finding the proper balance between the generator and the adversarial discriminator. As a result, several methods have been proposed in recent times to stabilize the training process [48; 49; 50]. Building on these ideas, Joty et al. [51] introduced cross-lingual adversarial neural networks designed to learn discriminative yet language-invariant representations. In this work, we use the same methodology to learn task-specific representations in a cross-lingual setting and improve the predictive capabilities of a multilingual BERT model.
Our approach is rooted in the Domain Adversarial Neural Network (DANN) algorithm, initially designed for domain adaptation [52]. DANN consists of a deep feature extractor \(F\), responsible for extracting relevant features \(f\) from the input data, and a deep label classifier \(C\), which uses those features to make predictions about the label of the input \(x\). Together, these two components form a standard feed-forward architecture. In order to improve the performance of the model on a target domain where labeled data are scarce, an additional component is added to the architecture, called a domain classifier \(D\), which is responsible for distinguishing between samples from the source and target domains \(d\). This domain classifier is connected to the feature extractor via a gradient reversal layer, which multiplies the gradient by a negative constant during training. The gradient reversal layer helps ensure that the feature distributions over the two domains are as similar as possible, resulting in domain-invariant features that can better generalize to the target domain. The overall training process minimizes the label prediction loss on the source examples and the domain classification loss on all samples. Thus, we have the following equations that are used to update the parameters of each of the three components:
\[\begin{split}\theta_{C}&=\theta_{C}-\alpha\frac{\partial L_{y}}{\partial\theta_{C}}\\ \theta_{D}&=\theta_{D}-\alpha\frac{\partial L_{d}}{\partial\theta_{D}}\\ \theta_{F}&=\theta_{F}-\alpha\left(\frac{\partial L_{y}}{\partial\theta_{F}}-\lambda\frac{\partial L_{d}}{\partial\theta_{F}}\right)\end{split} \tag{4}\]
where \(\theta_{C}\) are the parameters of the label classifier, \(L_{y}\) is the loss obtained by the label classifier when predicting the class labels \(y\), \(\theta_{D}\) are the parameters of the domain classifier, \(L_{d}\) is the loss obtained by the domain classifier when predicting the domain labels \(d\), \(\theta_{F}\) are the parameters of the feature extractor, \(\lambda\) is the hyperparameter used to scale the reverse gradients, and \(\alpha\) is the learning rate.
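In practice, the update rules in Equation (4) are obtained by inserting a gradient reversal layer between the feature extractor and the domain classifier; a minimal PyTorch sketch is shown below, with the usage comment purely illustrative (the classifier architectures and the scheduling of \(\lambda\) are not specified here).

```
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backwards."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=0.01):
    return GradReverse.apply(x, lambd)

# Illustrative usage: the features flow unchanged into the label classifier C and
# through the reversal layer into the domain classifier D, so a single backward
# call over (label loss + domain loss) reproduces the three updates of Eq. (4):
#   logits_y = C(features)
#   logits_d = D(grad_reverse(features, lambd=0.01))
#   (ce(logits_y, y) + ce(logits_d, d)).backward()
```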
### Monolingual Training
In the monolingual training experiments, we treat the MWE task as sequence tagging, so we try to predict a label for each input token. To attain that, we employ a feed-forward layer that maps the embeddings produced by a BERT model into the specific MWE class logits and then apply the softmax activation function to obtain the probabilities. This mechanism is succinctly described in the following equation:
\[p_{i}=softmax(e_{i}W^{T}+b) \tag{5}\]
where \(p_{i}\) are the class MWE probabilities for the token \(i\), \(e_{i}\) are the embeddings produced by the language model, \(W^{T}\) is the transpose of the feed-forward layer, and \(b\) is its bias. We use the same BERT models for each language as in [25].
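In code, Equation (5) is simply a linear layer applied to every token embedding produced by the language-specific BERT, as in the hedged sketch below (the model name and the number of labels are placeholders, and the softmax is folded into the cross-entropy loss during training).

```
import torch.nn as nn
from transformers import AutoModel

class MonolingualMWETagger(nn.Module):
    def __init__(self, model_name="bert-base-cased", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)   # language-specific BERT
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        emb = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(emb)   # Equation (5); softmax applied inside the loss
```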
### Multilingual Training
We fine-tune the mBERT model for multilingual training using the same methodology as in the monolingual case. However, we improve the predictions by first employing the lateral inhibition layer on top of the embeddings. The lateral inhibition layer has been shown to improve the performance of language models in named entity recognition tasks [22; 44; 53], and we believe that it would do the same for MWE identification since the methodology is similar for the two tasks. Therefore, the equation that describes the resulting system becomes:
\[p_{i}=softmax(LI(e_{i})W^{T}+b) \tag{6}\]
where \(LI\) is the lateral inhibition layer and the rest of the terms are the same as in Equation (5).
We also adapt the multilingual training by employing the DANN algorithm with a language discriminator instead of the domain discriminator. Thus, we create language-independent features out of the mBERT model by reversing the gradient that comes out of the language discriminator when backpropagating through the language model. The gradient reversal mechanism in our system is described using the following equations
\[\begin{array}{l}\theta_{C}=\theta_{C}-\alpha\frac{\partial L_{y}}{\partial\theta_{C}}\\ \theta_{LD}=\theta_{LD}-\alpha\frac{\partial L_{ld}}{\partial\theta_{LD}}\\ \theta_{F}=\theta_{F}-\alpha\left(\frac{\partial L_{y}}{\partial\theta_{F}}-\lambda\frac{\partial L_{ld}}{\partial\theta_{F}}\right)\end{array} \tag{7}\]
where \(\theta_{C}\) are the parameters of the MWE classifier, \(L_{y}\) is the loss obtained by the MWE classifier when predicting the MWE labels \(y\), \(\theta_{LD}\) are the parameters of the language discriminator, \(L_{ld}\) is the loss obtained by the language discriminator when predicting the language labels \(ld\), \(\theta_{F}\) are the parameters of the mBERT model (i.e., the feature extractor in DANN), \(\lambda\) is the hyperparameter used to scale the reversed gradients, and \(\alpha\) is the learning rate.
Finally, we employ the lateral inhibition layer and the DANN methodology with a language discriminator on the mBERT model for multilingual training. The forward procedure of this approach, which is used to compute the loss between the predicted MWE probabilities for a given text and the corresponding ground truths, and the loss between the predicted language probabilities and the corresponding ground truths of the given text, is described in Algorithm 1 as follows:
* Tokenize the \(text\) using the mBERT tokenizer, obtaining the tokens \(tok_{i}\) (Line 1).
* Generate the multilingual embeddings \(emb_{i}\) for each of the above tokens \(tok_{i}\) using the mBERT model (Line 2).
* Apply the lateral inhibition layer on each of the embeddings \(emb_{i}\) (Line 3).
* Use the MWE classifier on the output of the lateral inhibition layer to produce the probabilities \(\hat{y}_{i}\) of a token belonging to a certain MWE class (Line 4).
* Use the language discriminator on the embedding \(emb_{[CLS]}\) corresponding to the token [CLS] to produce the probabilities \(\hat{ld}_{i}\) of the text to belong to a certain language (Line 5).
* Compute the loss \(L_{y}\) between the predicted MWE probabilities and the ground truth MWE labels (Line 6) and the loss \(L_{ld}\) between the predicted language probabilities and the ground truth language labels (Line 7).
In Algorithm 2, we outline the backward procedure used to update the parameters of our models as follows:
* Compute the gradients \(\nabla_{C}\) for the MWE classifier using the MWE loss \(L_{y}\) (Line 1).
* Compute the gradients \(\nabla_{LD}\) for the language discriminator using the language discriminator loss \(L_{ld}\) (Line 2).
* Compute the gradients \(\nabla_{F}\) of the mBERT model using \(\nabla_{C}\) and \(-\nabla_{LD}\) multiplied by \(\lambda\) (Line 3).
* Update the model parameters (i.e., \(\theta_{C}\), \(\theta_{LD}\), and \(\theta_{F}\)) using the gradient descent algorithm (Lines 4-6).
```
Input: text, ground truth MWE labels \(y_{i}\), and ground truth language labels \(ld_{i}\)
Output: MWE identification loss \(L_{y}\) and language discrimination loss \(L_{ld}\)
1: \(tok_{i}\leftarrow\) tokenize(\(text\))
2: \(emb_{i}\leftarrow\) mbert(\(tok_{i}\))
3: \(h_{i}\leftarrow\) lateral_inhibition(\(emb_{i}\))
4: \(\hat{y}_{i}\leftarrow\) mwe_classifier(\(h_{i}\))
5: \(\hat{ld}\leftarrow\) language_discriminator(\(emb_{[CLS]}\))
6: \(L_{y}\leftarrow\) cross_entropy_loss(\(y_{i}\), \(\hat{y}_{i}\))
7: \(L_{ld}\leftarrow\) cross_entropy_loss(\(ld_{i}\), \(\hat{ld}\))
```
**Algorithm 1** Algorithm describing the forward pass of the multilingual training with lateral inhibition and language adversarial training.
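Putting the pieces together, a hedged PyTorch sketch of one training step following Algorithms 1 and 2 is given below. It reuses the grad_reverse helper and the LateralInhibition module from the earlier sketches, and the module names, the batch layout, and the use of the [CLS] embedding for the language discriminator follow the description above rather than the released code.

```
import torch.nn as nn

ce = nn.CrossEntropyLoss(ignore_index=-100)   # -100 marks padding / special tokens

def training_step(mbert, lateral_inhibition, mwe_classifier, language_discriminator,
                  optimizer, batch, lambd=0.01):
    emb = mbert(batch["input_ids"],
                attention_mask=batch["attention_mask"]).last_hidden_state
    # Forward pass (Algorithm 1): MWE logits from the laterally inhibited token
    # embeddings, language logits from the [CLS] embedding via gradient reversal.
    y_hat = mwe_classifier(lateral_inhibition(emb))                 # (B, S, num_labels)
    ld_hat = language_discriminator(grad_reverse(emb[:, 0, :], lambd))
    loss_y = ce(y_hat.transpose(1, 2), batch["mwe_labels"])
    loss_ld = ce(ld_hat, batch["language_labels"])
    # Backward pass (Algorithm 2): the reversal layer already negates and scales the
    # language gradient flowing into mBERT, so one backward call realizes Eq. (7).
    optimizer.zero_grad()
    (loss_y + loss_ld).backward()
    optimizer.step()
    return loss_y.item(), loss_ld.item()
```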
## 4 Experimental Settings
### Dataset
The corpus used to evaluate our models is the PARSEME dataset version 1.2. The corpus was manually annotated with VMWEs of several types. Some are universal because they exist and were annotated in all languages in the project. These universal types are verbal idioms (e.g., the Romanian "a face din tantar armasar"--eng. "to make a mountain out of a molehill") and light verb constructions (e.g., the Romanian "a face o vizita"--eng. "to pay a visit") in which their verb is light in the sense that its semantic contribution to the meaning of the whole expression is almost null, its role being rather only that of carrying the verb specific morphological information, such as tense, number, or person. There are also light verb constructions in which the verb carries a causative meaning (e.g., the Romanian "a da batai de cap"--eng. "to give a hard time"), and they are also annotated in all languages. The types of VMWEs that apply only to some of the languages in the project are called quasi-universal: inherently reflexive verbs (e.g., the Romanian "a-si imagina"--eng. "to imagine (oneself)"), verb-particle constructions (e.g., "to give up"), multi-verb constructions (e.g., "make do"), and inherently adpositional verbs (e.g., "to rely on"). For Italian, a language-specific type was defined, namely inherently clitic verbs (e.g., "prendersela"--eng. "to be angry").
The dataset used in the PARSEME shared task edition 1.2 contains 14 languages, including German (DE), Basque (EU), Greek (EL), French (FR), Irish (GA), Hebrew (HE),
Hindi (HI), Italian (IT), Polish (PL), Brazilian Portuguese (PT), Romanian (RO), Swedish (SV), Turkish (TR), and Chinese (ZH). The number of tokens ranges from 35 k tokens (HI) to 1015 k tokens (RO), while the number of annotated VMWEs ranges from 662 (GA) to 9164 (ZH). The dataset split was made to ensure a higher number of unseen VMWEs in the dev (100 unseen VMWEs with respect to the train set) and test (300 unseen VMWEs with respect to the train + dev files) sets. More statistics regarding the PARSEME 1.2 dataset are depicted in Table 1.
In addition to the annotation with VMWEs, the multilingual PARSEME corpus is also tokenized and annotated morphologically and syntactically, mostly with UDPipe [54]. Thus, the syntactic analysis follows the principles of Universal Dependencies ([https://universaldependencies.org/](https://universaldependencies.org/) last accessed on 21 April 2023) [55].
### Fine-Tuning
We followed the fine-tuning methodology employed by MTLB-STRUCT (the corresponding configuration files for each language are available at [https://github.com/shivaat/MTLB-STRUCT/tree/master/code/configs](https://github.com/shivaat/MTLB-STRUCT/tree/master/code/configs) last accessed on 21 April 2023) with the tree conditional random fields [56] disabled. Thus, we trained our models for 10 epochs using a batch size of 32 and the Adam optimizer [57] with a learning rate of 3 \(\times\) 10\({}^{-5}\). We set the maximum input sequence length to 150, the scaling parameter \(k\), used in the gradient approximation of the lateral inhibition Heaviside function, to 10, which was empirically shown to create a good enough surrogate gradient [22], and the hyperparameter \(\lambda\) to 0.01 in the DANN algorithm for scaling the reversed gradient. We did not employ k-fold cross-validation in our experiments, and we measured the model performance in terms of precision, recall, and F1-score at the token level using the following equations:
\[\text{Precision}=\frac{TP}{TP+FP} \tag{8}\]
\[\text{Recall}=\frac{TP}{TP+FN} \tag{9}\]
\[\text{F1-score}=\frac{2\cdot Precision\cdot Recall}{Precision+Recall} \tag{10}\]
where \(TP\) is the number of true positives, \(FP\) is the number of false positives, and \(FN\) is the number of false negatives. As suggested by the PARSEME 1.2 competition evaluation
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Lang.**} & \multicolumn{3}{c}{**Training**} & \multicolumn{3}{c}{**Validation**} & \multicolumn{3}{c}{**Test**} \\ & **\#Sent.** & **\#Tok.** & **Len.** & **\#Sent.** & **\#Tok.** & **Len.** & **\#Sent.** & **\#Tok.** & **Len.** \\ \hline DE & 6.5 k & 126.8 k & 19.3 & 602 & 11.7 k & 19.5 & 1.8 k & 34.9 k & 19.1 \\ EL & 17.7 k & 479.6 k & 27.0 & 909 & 23.9 k & 26.3 & 2.8 k & 75.4 k & 26.7 \\ EU & 4.4 k & 61.8 k & 13.9 & 1.4 k & 20.5 k & 14.4 & 5.3 k & 75.4 k & 14.2 \\ FR & 14.3 k & 360.0 k & 25.0 & 1.5 k & 39.5 k & 25.1 & 5.0 k & 126.4 k & 25.2 \\ GA & 257 & 6.2 k & 24.2 & 322 & 7.0 k & 21.8 & 1.1 k & 25.9 k & 23.1 \\ HE & 14.1 k & 286.2 k & 20.2 & 1.2 k & 25.3 k & 20.2 & 3.7 k & 76.8 k & 20.2 \\ HI & 282 & 5.7 k & 20.4 & 289 & 6.2 k & 21.7 & 1.1 k & 23.3 k & 21.0 \\ IT & 10.6 k & 282.0 k & 27.4 & 1.2 k & 32.6 k & 27.1 & 3.8 k & 106.0 k & 27.3 \\ PL & 17.7 k & 298.4 k & 16.8 & 1.4 k & 23.9 k & 16.8 & 4.3 k & 73.7 k & 16.7 \\ PT & 23.9 k & 542.4 k & 22.6 & 1.9 k & 43.6 k & 22.1 & 6.2 k & 142.3 k & 22.8 \\ RO & 10.9 k & 195.7 k & 17.9 & 7.7 k & 134.3 k & 17.4 & 38.0 k & 685.5 k & 18.0 \\ SV & 1.6 k & 24.9 k & 15.5 & 596 & 8.8 k & 14.9 & 2.1 k & 31.6 k & 15.0 \\ TR & 17.9 k & 267.5 k & 14.9 & 1.0 k & 15.9 k & 15.0 & 3.3 k & 48.7 k & 14.7 \\ ZH & 35.3 k & 575.5 k & 16.2 & 1.1 k & 18.2 k & 16.0 & 3.4 k & 55.7 k & 16.0 \\ \hline Total & 175.7 k & 3512.7 k & 20.1 & 29.3 k & 522.2 k & 19.8 & 81.9 k & 1581.6 k & 20.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The statistics of PARSEME 1.2: number of sentences (#Sent.), of tokens (#Tok.), and the sentence average length (Len.) on each of the three splits: training, validation, and test.
methodology ([https://www.davidsbatista.net/blog/2018/05/09/Named_Entity_Evaluation/](https://www.davidsbatista.net/blog/2018/05/09/Named_Entity_Evaluation/) last accessed on 21 April 2023), we compute the strict variant of the F1-score. Thus, we consider the predicted label of a group of tokens as true positive only if it perfectly matches the ground truth [58].
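For clarity, this strict, span-level scoring can be sketched as below, representing every annotated MWE as the set of token positions it covers together with its category. This is only an illustration of the matching criterion and not the official PARSEME evaluation script.

```python
def strict_f1(gold, pred):
    """gold, pred: lists of per-sentence annotation sets; each annotation is
    (frozenset_of_token_indices, category). A prediction counts as a true
    positive only if it matches a gold annotation exactly."""
    tp = fp = fn = 0
    for g_sent, p_sent in zip(gold, pred):
        g, p = set(g_sent), set(p_sent)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```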
## 5 Results
The results of our evaluation for both monolingual and multilingual training, with and without lateral inhibition and adversarial training, for all the 14 languages, are displayed in Table 2. We improved the performance of MTLB-STRUCT, the best overall system according to the competition benchmark ([https://multiword.sourceforge.net/PHITE.php?sitesig=CO](https://multiword.sourceforge.net/PHITE.php?sitesig=CO) NF&page=CONF_02_MWE-LEX_2020_lb_COLING_rb_&subpage=CONF_40_Shared_Task last accessed on 21 April 2023), on 11 out of 14 languages for global MWE prediction (the three remaining languages are German, Italian, and Romanian) and on 12 out of 14 languages for unseen MWE prediction (the two remaining languages are German and Greek). Out of all the cases where our methods underperformed, the only high difference was obtained in the German language, our best system being behind the MTLB-STRUCT system by approximately 3.43% on global MWE prediction and approximately 6.57% on unseen MWE prediction. We believe that this is due to the employment of the German BERT ([https://huggingface.co/bert-base-german-cased](https://huggingface.co/bert-base-german-cased) last accessed on 21 April 2023) by the MTLB-STRUCT team, while we still used the mBERT model for this language.
For the global MWE prediction, we managed to improve the performance in 11 languages, the highest F1-score was obtained by the monolingual training once (i.e., Chinese), by the simple multilingual training three times (i.e., Greek, Irish, and Turkish), by the multilingual training with lateral inhibition three times (i.e., French, Hebrew, and Polish), by the multilingual adversarial training once (i.e., Basque), and by the multilingual adversarial training with the lateral inhibition three times (i.e., Hindi, Portuguese, and Swedish). On the other hand, for the unseen MWE prediction, we managed to achieve better results in 12 languages. The simple multilingual training obtained the highest F1-score only once (i.e., Swedish), the multilingual training with the lateral inhibition three times (i.e., French, Turkish, and Chinese), the multilingual adversarial training five times (i.e., Irish, Hebrew, Hindi, Polish, and Romanian), and the multilingual adversarial training with lateral inhibition three times (i.e., Basque, Italian, and Portuguese).
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Language**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Global MWE-Based**} & \multicolumn{3}{c}{**Unseen MWE-Based**} \\ & & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline \multirow{6}{*}{DE} & MTLB-STRUCT [25] & 77.11 & **75.24** & **76.17** & **49.17** & **49.50** & **49.34** \\ & Monolingual & 74.26 & 72.82 & 73.53 & 40.35 & 41.79 & 41.06 \\ & Multilingual & **77.26** & 68.47 & 72.60 & 37.85 & 43.22 & 40.35 \\ & Multilingual + LI & 69.07 & 66.38 & 67.70 & 39.15 & 43.85 & 41.37 \\ & Multilingual + Adv & 69.00 & 68.33 & 68.66 & 39.18 & 45.11 & 41.94 \\ & Multilingual + LI + Adv & 71.37 & 68.08 & 69.69 & 41.47 & 43.85 & 42.77 \\ \hline \multirow{6}{*}{EL} & MTLB-STRUCT [25] & 72.54 & 72.69 & 72.62 & 38.74 & **47.00** & **42.47** \\ & Monolingual & 72.33 & **73.00** & 72.66 & 38.30 & 46.75 & 42.11 \\ \cline{1-1} & Multilingual & **74.60** & 72.38 & **73.48** & **38.92** & 42.21 & 40.50 \\ \cline{1-1} & Multilingual + LI & 72.52 & 72.90 & 72.71 & 37.90 & 45.78 & 41.47 \\ \cline{1-1} & Multilingual + Adv & 73.23 & 72.18 & 72.70 & 38.81 & 44.48 & 41.45 \\ \cline{1-1} & Multilingual + LI + Adv & 73.42 & 72.59 & 73.00 & 38.64 & 44.16 & 41.21 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The results obtained by the monolingual and multilingual training, together with the results obtained by the best system of the PARSEME 1.2 competition, MTLB-STRUCT. LI is the lateral inhibition component, while Adv is the domain adaptation technique for cross-lingual MWE identification. We measure the precision (P), recall (R), and F1-score (F1) for each global and unseen MWE identification experiment. The best results in each language are highlighted in bold.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Language**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Global MWE-Based**} & \multicolumn{3}{c}{**Unseen MWE-Based**} \\ & & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline \multirow{10}{*}{EU} & MTLB-STRUCT [25] & 80.72 & 79.36 & 80.03 & 28.12 & 44.33 & 34.41 \\ & Monolingual & 81.61 & **80.40** & 81.00 & 34.94 & **49.29** & 40.89 \\ & Multilingual & **86.49** & 77.03 & **81.49** & 33.32 & 45.04 & 39.17 \\ & Multilingual + LI & 84.07 & 78.66 & 81.28 & 37.38 & 44.48 & 40.62 \\ & Multilingual + Adv & 82.77 & 78.71 & 80.69 & 36.46 & 48.44 & 41.61 \\ & Multilingual + LI + Adv & 84.80 & 78.42 & 81.48 & **39.71** & 46.46 & **42.82** \\ \hline \multirow{10}{*}{FR} & MTLB-STRUCT [25] & 80.04 & 78.81 & 79.42 & 39.20 & 46.00 & 42.33 \\ & Monolingual & 79.84 & **79.54** & 79.69 & 38.89 & 44.87 & 41.67 \\ & Multilingual & 81.80 & 77.04 & 79.35 & 43.17 & 44.55 & 43.85 \\ & Multilingual + LI & **81.85** & 78.96 & **80.37** & **45.48** & **48.40** & **46.89** \\ & Multilingual + Adv & 80.12 & 78.59 & 79.35 & 41.60 & **48.40** & 44.74 \\ & Multilingual + LI + Adv & 80.47 & 78.22 & 79.33 & 40.87 & 45.19 & 42.92 \\ \hline \multirow{10}{*}{GA} & MTLB-STRUCT [25] & 37.72 & 25.00 & 30.07 & 23.08 & 16.94 & 19.54 \\ & Monolingual & 33.67 & 23.17 & 27.45 & 24.02 & 17.28 & 20.10 \\ & Multilingual & 54.91 & 34.63 & 42.48 & 45.91 & 28.61 & 35.25 \\ & Multilingual + LI & 55.31 & 34.63 & 42.60 & 45.79 & 27.76 & 34.57 \\ & Multilingual + Adv & **56.12** & **35.78** & **43.70** & **48.42** & **30.31** & **37.28** \\ & Multilingual + LI + Adv & 55.72 & 34.63 & 42.72 & 45.79 & 27.76 & 34.57 \\ \hline \multirow{10}{*}{HE} & MTLB-STRUCT [25] & 56.20 & 42.35 & 48.30 & 25.53 & 15.89 & 19.59 \\ & Monolingual & 54.09 & 40.76 & 46.49 & 26.02 & 15.94 & 19.77 \\ & Multilingual & 61.38 & 40.76 & 48.98 & 34.76 & 17.81 & 23.55 \\ & Multilingual + LI & **61.63** & 42.54 & **50.23** & 34.46 & 19.06 & 24.55 \\ & Multilingual + Adv & 58.40 & 42.15 & 48.96 & **35.35** & **21.88** & **27.03** \\ & Multilingual + LI + Adv & 59.89 & **42.74** & 49.88 & 34.92 & 20.62 & 25.93 \\ \hline \multirow{10}{*}{HI} & MTLB-STRUCT [25] & 72.25 & **75.04** & 73.62 & 48.75 & 58.33 & 53.11 \\ & Monolingual & 66.53 & 70.28 & 68.35 & 49.35 & 61.35 & 54.70 \\ & Multilingual & **77.78** & 71.77 & 74.65 & **62.72** & 58.65 & 60.61 \\ & Multilingual + LI & 77.08 & 68.95 & 72.78 & 61.83 & 56.49 & 59.04 \\ & Multilingual + Adv & 75.46 & 73.11 & 74.26 & 60.95 & **62.43** & **61.68** \\ & Multilingual + LI + Adv & 75.53 & 73.85 & **74.68** & 60.31 & **62.43** & 61.35 \\ \hline \multirow{10}{*}{IT} & MTLB-STRUCT [25] & 67.68 & **60.27** & **63.76** & 20.23 & 21.33 & 20.81 \\ & Monolingual & 64.53 & 59.59 & 61.96 & 20.81 & **24.06** & 22.32 \\ & Multilingual & 69.37 & 56.40 & 62.21 & 22.22 & 19.38 & 20.70 \\ & Multilingual + LI & **71.27** & 56.01 & 62.72 & 23.02 & 20.12 & 21.28 \\ & Multilingual + Adv & 65.65 & 58.33 & 61.78 & 20.83 & 21.88 & 21.43 \\ & Multilingual + LI + Adv & 69.18 & 57.85 & 63.01 & **25.51** & 23.44 & **24.43** \\ \hline \multirow{10}{*}{PL} & MTLB-STRUCT [25] & 82.94 & 79.18 & 81.02 & 38.46 & 41.53 & 39.94 \\ & Monolingual & 81.89 & 79.33 & 80.85 & 38.30 & 41.99 & 40.06 \\ & Multilingual & 84.02 & 77.03 & 80.37 & 40.34 & 37.50 & 38.87 \\ & Multilingual + LI & **85.14** & 79.26 & **82.09** & **44.48** & 41.33 & 42.84 \\ & Multilingual + Adv & 82.55 & **79.85** & 81.18 & 40.75 & **45.19** & **42.86** \\ & Multilingual + LI + Adv & 83.19 & 78.74 & 80.90 & 41.01 & 
41.67 & 41.34 \\ \hline \multirow{10}{*}{PT} & MTLB-STRUCT [25] & 73.93 & 72.76 & 73.34 & 30.54 & 41.33 & 35.13 \\ & Monolingual & 74.81 & 70.94 & 73.01 & 33.81 & 39.05 & 35.98 \\ \cline{1-1} & Multilingual & 75.93 & 70.94 & 73.35 & 34.06 & 39.18 & 36.44 \\ \cline{1-1} & Multilingual + LI & **77.15** & 71.89 & 74.43 & **35.61** & 39.18 & 37.31 \\ \cline{1-1} & Multilingual + Adv & 73.36 & 73.48 & 73.42 & 30.33 & 40.13 & 34.55 \\ \cline{1-1} & Multilingual + LI + Adv & 75.51 & **73.53** & **74.49** & 33.76 & **41.78** & **37.36** \\ \hline \hline \end{tabular}
\end{table}
Table 2: _Cont._
Also, the monolingual training has not achieved the highest F1-score for unseen MWE prediction for any language. These findings are summarized in Table 3.
We further compared the average scores across all languages obtained by our systems. In Table 4, we compared our results with the ones obtained by each system at the latest edition of the PARSEME competition ([https://multiword.sourceforge.net/PHITE.php?sitesig=CONF&page=CONF_02_MWE-LEX_2020_lb_COLING_rb_&subpage=CONF_50_Shared_task_results](https://multiword.sourceforge.net/PHITE.php?sitesig=CONF&page=CONF_02_MWE-LEX_2020_lb_COLING_rb_&subpage=CONF_50_Shared_task_results) last accessed on 21 April 2023): MTLB-STRUCT [25], Travis-multi/mono [33], Seen2Unseen [34], FipsCo [10], HMSid [35], and MultiVitamin [32]. For the global MWE identification, we outperformed the MTLB-STRUCT results with all the multilingual training experiments, the highest average F1-score being obtained by the simple multilingual training without lateral inhibition or adversarial training. It achieved an average F1-score of 71.37%, an improvement of 1.23% compared to the MTLB-STRUCT F1-score (i.e., 70.14%).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Language**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Global MWE-Based**} & \multicolumn{3}{c}{**Unseen MWE-Based**} \\ & & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline \multirow{8}{*}{RO} & MTLB-STRUCT [25] & 89.88 & **91.05** & **90.46** & 28.84 & 41.47 & 34.02 \\ & Monolingual & 90.39 & 90.11 & 90.25 & 46.82 & 51.09 & 48.86 \\ & Multilingual & 91.34 & 88.46 & 89.88 & **49.90** & 48.12 & 48.99 \\ & Multilingual + LI & 90.78 & 88.85 & 89.81 & 45.06 & 45.15 & 45.10 \\ & Multilingual + Adv & 89.14 & 90.13 & 89.63 & 46.27 & **56.44** & **50.85** \\ & Multilingual + LI + Adv & 89.95 & 88.78 & 89.36 & 45.44 & 50.30 & 47.74 \\ \hline \multirow{8}{*}{SV} & MTLB-STRUCT [25] & 69.59 & 73.68 & 71.58 & 35.57 & 53.00 & 42.57 \\ & Monolingual & 73.01 & 73.68 & 73.34 & 44.32 & **54.62** & 48.93 \\ & Multilingual & **78.92** & 70.79 & 74.63 & **50.78** & **54.62** & **52.63** \\ & Multilingual + LI & 75.48 & 73.68 & 74.57 & 46.77 & 52.66 & 49.54 \\ & Multilingual + Adv & 75.42 & **74.41** & 74.91 & 46.70 & 53.50 & 49.87 \\ & Multilingual + LI + Adv & 77.62 & 74.10 & **75.82** & 49.47 & 51.82 & 50.62 \\ \hline \multirow{8}{*}{TR} & MTLB-STRUCT [25] & 68.41 & 70.55 & 69.46 & 42.11 & 45.33 & 43.66 \\ & Monolingual & 69.11 & 72.89 & 70.95 & 43.75 & 47.88 & 45.72 \\ & Multilingual & 67.52 & **73.27** & **71.18** & 41.83 & 47.56 & 44.51 \\ & Multilingual + LI & **69.92** & 72.28 & 71.08 & **47.94** & **49.19** & **48.55** \\ & Multilingual + Adv & 68.41 & 70.37 & 69.38 & 43.54 & 47.23 & 45.31 \\ & Multilingual + LI + Adv & 68.22 & 69.77 & 68.99 & 43.04 & 44.30 & 43.66 \\ \hline \multirow{8}{*}{ZH} & MTLB-STRUCT [25] & 68.56 & 70.74 & 69.63 & 58.97 & 53.67 & 56.20 \\ & Monolingual & **72.33** & **72.88** & **72.60** & 59.74 & **58.03** & 58.87 \\ \cline{1-1} & Multilingual & 72.03 & 71.32 & 71.67 & 62.30 & 55.87 & 58.91 \\ \cline{1-1} & Multilingual + LI & 69.82 & 70.36 & 70.09 & 62.50 & 57.31 & **59.79** \\ \cline{1-1} & Multilingual + Adv & 69.29 & 69.47 & 69.38 & 62.42 & 54.73 & 58.32 \\ \cline{1-1} & Multilingual + LI + Adv & 70.64 & 68.58 & 69.59 & **65.41** & 54.73 & 59.59 \\ \hline \hline \end{tabular}
\end{table}
Table 2: _Cont._
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Method**} & **\#Highest** & **\#Highest** \\ & **Global MWE** & **Unseen MWE** \\ \hline MTLB-STRUCT [25] & 3 & 2 \\ \hline Monolingual & 1 & 0 \\ Multilingual & 3 & 1 \\ Multilingual + LI & 3 & 3 \\ Multilingual + ADV & 1 & 5 \\ Multilingual + LI + ADV & 3 & 3 \\ \hline Total (ours) & 11 & 12 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The number of times we managed to obtain the highest F1-score with each system developed in this work for both global MWE (#Highest Global MWE) and unseen MWE (#Highest Unseen MWE) predictions.
For unseen MWE identification, we improved the average results obtained by MTLB-STRUCT using all the methodologies employed in this work. The highest average F1-score was obtained by the multilingual adversarial training with 43.26%, outperforming the MTLB-STRUCT system by 4.73%.
## 6 Discussion
According to our experiments, the average MWE identification performance can be improved by approaching this problem using a multilingual NLP system, as described in this work. An interesting perspective of our results on this task is how much improvement we brought compared to the PARSEME 1.2 competition's best system. These results are shown at the top of Figure 2 for global MWE prediction and at its bottom for unseen MWE prediction. In general, the most significant relative improvements were achieved in the Irish language by employing multilingual training that, combined with adversarial training, boosted the performance by 45.32% for the global MWE prediction and by 90.78% for the unseen MWE prediction. On the other hand, for the same language, by using the monolingual training, we decrease the system's performance on global MWE prediction by 8.71% and slightly increase it by 2.86% on unseen MWE prediction. We believe that these improvements in Irish were due to the benefits brought by the multilingual training since this language contained the least amount of training sentences (i.e., 257 sentences), and it has been shown in previous research that superior results are obtained when such fine-tuning mechanisms are employed [59]. However, the Hindi language also contains a small number of training samples (i.e., 282 sentences), but our multilingual training results are worse when compared to Irish. We assume that this is the outcome of the language inequalities that appeared in the mBERT pre-training data [60] and the linguistic isolation of Hindi since there are no other related languages in the fine-tuning data [61].
The second highest improvements for global MWE prediction were achieved in the Swedish language with 2.45% for the monolingual training, 4.26% for the multilingual training, 4.17% for the multilingual training with the lateral inhibition, 4.65% for the multilingual adversarial training, and 5.92% for the multilingual adversarial training with lateral inhibition. We observe a relatively high difference between the first and the second place, but we believe again that this is due to the small number of sentences for Irish compared to Swedish. On the other hand, the results for unseen MWE prediction outline that the second highest improvements were attained in Romanian with 43.62% for the monolingual training, 44.00% for the multilingual training, 32.56% for the multilingual training with lateral inhibition, 49.47% for the multilingual adversarial training, and 40.32% for the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**\#Lang.**} & \multicolumn{3}{c}{**Global MWE-Based**} & \multicolumn{3}{c}{**Unseen MWE-Based**} \\ & & **AP** & **AR** & **AF1** & **AP** & **AR** & **AF1** \\ \hline MTLB-STRUCT [25] & 14/14 & 71.26 & **69.05** & 70.14 & 36.24 & 41.12 & 38.53 \\ TRAVIS-multi [33] & 13/14 & 60.65 & 57.62 & 59.10 & 28.11 & 33.29 & 30.48 \\ TRAVIS-mono [33] & 10/14 & 49.50 & 43.48 & 46.34 & 24.33 & 28.01 & 26.04 \\ Seen2Unseen [34] & 14/14 & 63.36 & 62.69 & 63.02 & 16.14 & 11.95 & 13.73 \\ FipsCo [10] & 3/14 & 11.69 & 8.75 & 10.01 & 4.31 & 5.21 & 4.72 \\ HMSid [35] & 1/14 & 4.56 & 4.85 & 4.70 & 1.98 & 3.81 & 2.61 \\ MultiVitaminBootser [32] & 7/14 & 0.19 & 0.09 & 0.12 & 0.05 & 0.07 & 0.06 \\ \hline Monolingual & 14/14 & 70.60 & 68.52 & 69.54 & 38.52 & 42.42 & 40.38 \\ Multilingual & 14/14 & 75.23 & 67.88 & **71.37** & 42.72 & 41.60 & 42.15 \\ Multilingual + LI & 14/14 & 74.36 & 68.24 & 71.17 & **43.48** & 42.20 & 42.78 \\ Multilingual + Adv & 14/14 & 72.78 & 68.92 & 70.80 & 42.26 & **44.30** & **43.26** \\ Multilingual + LI + Adv & 14/14 & 73.96 & 68.56 & 71.16 & 43.24 & 42.75 & 43.00 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The average precision (AP), recall (AR), and F1-scores (AF1) over all languages obtained by our systems are compared with the results obtained by each system at the PARSEME 1.2 competition on global and unseen MWE identification. We also depict the number of languages used to train each system (#Lang). The best results are highlighted in bold.
multilingual adversarial training with lateral inhibition. In addition, the improvements are more uniform on the unseen MWE prediction than the global one.
## 7 Conclusions and Future Work
Failure to identify MWEs can lead to misinterpretation of text and errors in NLP tasks, making this an important area of research. In this paper, we analyzed the performance of MWE identification in a multilingual setting, training the mBERT model on the combined PARSEME 1.2 corpus using all the 14 languages found in its composition. In addition, to boost the performance of our system, we employed lateral inhibition and language adversarial training in our methodology, intending to create embeddings that are as language-independent as possible. Our evaluation results highlighted that through this approach, we managed to improve the results obtained by MTLB-STRUCT, the best system of the PARSEME 1.2 competition, on 11 out of 14 languages for global MWE identification and 12 out of 14 for unseen MWE identification. Thus, with the highest average F1-scores of 71.37% for global MWE identification and 43.26% for unseen MWE identification, our systems surpass MTLB-STRUCT by 1.23% on the former task and by 4.73% on the latter.
Possible future work directions involve analyzing how the language-independent features produced by mBERT are when lateral inhibition and adversarial training are involved, together with an analysis of more models that produce multilingual embeddings,
Figure 2: Improvements brought by our methodologies (i.e., Monolingual, Multilingual, Multilingual+LI, Multilingual+Adv, and Multilingual+LI+Adv) on global (**top**) and unseen (**bottom**) MWE prediction compared to the results of MTLB-STRUCT, the best system in the PARSEME shared task edition 1.2.
such as XLM or XLM-R. In addition, we intend to analyze these two methodologies, with possible extensions, for multilingual training beyond MWE identification, targeting tasks, such as language generation or named entity recognition. Finally, since the languages in the PARSEME 1.2 dataset may share similar linguistic properties, we would like to explore how language groups improve each other's performance in the multilingual scenario.
**Author Contributions:** Conceptualization, A.-M.A., V.B.M., V.P. and D.-C.C.; methodology, A.-M.A. and V.P.; software, A.-M.A.; validation, A.-M.A., V.B.M., D.-C.C. and S.T.-M.; formal analysis, A.-M.A.; investigation, A.-M.A., V.B.M. and D.-C.C.; resources, A.-M.A. and V.B.M.; data curation, A.-M.A.; writing--original draft preparation, A.-M.A., V.B.M. and V.P.; writing--review and editing, A.-M.A., V.B.M., D.-C.C. and S.T.-M.; visualization, A.-M.A.; supervision, D.-C.C. and S.T.-M.; project administration, D.-C.C.; funding acquisition, D.-C.C. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research has been funded by the University Politehnica of Bucharest through the PubArt program.
**Data Availability Statement:** The PARSEME 1.2 dataset used in this work has been open-sourced by the competition organizers and is available for public usage at [https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3367](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3367) (last accessed on 21 April 2023).
**Conflicts of Interest:** The authors declare no conflict of interest.
|
2305.09418 | Leaf Only SAM: A Segment Anything Pipeline for Zero-Shot Automated Leaf
Segmentation | Segment Anything Model (SAM) is a new foundation model that can be used as a
zero-shot object segmentation method with the use of either guide prompts such
as bounding boxes, polygons, or points. Alternatively, additional post
processing steps can be used to identify objects of interest after segmenting
everything in an image. Here we present a method using segment anything
together with a series of post processing steps to segment potato leaves,
called Leaf Only SAM. The advantage of this proposed method is that it does not
require any training data to produce its results so has many applications
across the field of plant phenotyping where there is limited high quality
annotated data available. We compare the performance of Leaf Only SAM to a Mask
R-CNN model which has been fine-tuned on our small novel potato leaf dataset.
On the evaluation dataset, Leaf Only SAM finds an average recall of 63.2 and an
average precision of 60.3, compared to recall of 78.7 and precision of 74.7 for
Mask R-CNN. Leaf Only SAM does not perform better than the fine-tuned Mask
R-CNN model on our data, but the SAM based model does not require any extra
training or annotation of our new dataset. This shows there is potential to use
SAM as a zero-shot classifier with the addition of post processing steps. | Dominic Williams, Fraser Macfarlane, Avril Britten | 2023-05-16T13:16:33Z | http://arxiv.org/abs/2305.09418v2 | ### Leaf Only SAM: A Segment Anything Pipeline for Zero-Shot Automated Leaf Segmentation
### Abstract
Segment Anything Model (SAM) is a new "foundation model" that can be used as a zero-shot object segmentation method with the use of either guide prompts such as bounding boxes, polygons, or points. Alternatively, additional post processing steps can be used to identify objects of interest after segmenting everything in an image. Here we present a method using segment anything together with a series of post processing steps to segment potato leaves, called Leaf Only SAM. The advantage of this proposed method is that it does not require any training data to produce its results so has many applications across the field of plant phenotyping where there is limited high quality annotated data available. We compare the performance of Leaf Only SAM to a Mask R-CNN model which has been fine-tuned on our small novel potato leaf dataset. On the evaluation dataset, Leaf Only SAM finds an average recall of 63.2 and an average precision of 60.3, compared to recall of 78.7 and precision of 74.7 for Mask R-CNN. Leaf Only SAM does not perform better than the fine-tuned Mask R-CNN model on our data, but the SAM based model does not require any extra training or annotation of our new dataset. This shows there is potential to use SAM as a zero-shot classifier with the addition of post processing steps.
### Introduction
One of the main challenges facing plant breeding is that of plant phenotyping [1, 2]. That is the determination of plant performance and characteristics whilst plants are growing. Continued advances in genetic technologies have reduced genotyping costs for plant scientists and breeders, enabling increasingly large datasets to be generated [3]. It is important for advances in plant phenotyping techniques to be made at a similar rate to enable an understanding of plant behaviour and provide data to help understand the physiological impact of genetics. Plant imaging is one of the techniques that can be used to do this and, combined with advances in computer vision techniques, can provide data on plant performance that can show how different genotypes response to stress conditions [4, 5, 6]. This paper investigates the problem of measuring potato plants and relating imaging data to leaf area and leaf mass measurements.
There have been ongoing leaf segmentation (LSC) and counting (LCC) challenges over the past several years [7]. Various instance segmentation models have been applied to these images and Mask R-CNN [8] has been shown to perform well in such tasks [9]. Detectron2 [10] offers a framework for applying Mask R-CNN using various backbone region proposal networks and is used in this paper to compare the results of Leaf Only SAM in leaf segmentation tasks to a trained instance segmentation model. There have also been a number of studies trying to expand the generalisability of models produced for the LCC and LSC to other plant crops by using image generation methods [11] or using domain adaptation methods [12]. We have investigated whether Segment Anything can be used to produce a segmentation method in a new crop without the need for training and fine-tuning as an alternative solution to the generalisation problem.
The recently released Segment Anything Model (SAM) presents a "foundation model" that can be used to carry out annotation-free segmentation [13] and it has performed well in a variety of applications. There are several ways it can be used: to generate impressive segmentation results with
limited user prompts; to generate highly accurate object masks either from a series of positive or negative points; or to go from bounding boxes to object segmentation [14]. It can however be used as an automatic segmentation technique on its own without any additional user input. A number of studies have been published which utilise SAM for various medical image analysis problems [15-17]. One weakness of many of these methods is that SAM cannot determine the class of the objects being segmented and will segment everything in an image detecting background objects as well as objects of interest. Some early studies have ignored this problem and have instead compared performance of the model by comparing the closest detected object to each ground truth object. This is not possible in real world settings. This limitation can be overcome by applying post processing techniques to the output to evaluate the objects detected and only keep the ones that are of interest. For example one study has used SAM to detect craters on Mars by applying a Hough transform to the detected objects to only keep circular objects [18].
Segment anything used data from many sources online including the leaf counting and segmentation challenge dataset which was used to evaluate the performance of the model [13]. So, this is not an unseen problem for the segment anything model. The fact we have used a novel dataset not previously publicly available for this work ensures that the segment anything model has not had previous sight of the images we have used and highlights the adaptability and generalisation of the proposed approach.
In this paper we present new data with manual annotations on images of potato leaves. This presents a similar challenge to the leaf segmentation challenge, but we have included additional data on leaf area, leaf fresh mass, and leaf dry mass that provides an additional focus to the dataset and ensures that we are evaluating performance that closely links to relevant problem to be solved. We also encounter the limitations of image data collection which itself is not a perfect measure of the biologically relevant traits of leaf area and mass. We present a pipeline that uses Segment Anything with the addition of novel post processing steps as a zero-shot segmentation method. We compare performance of this approach with a Mask R-CNN trained on a subset of our data to see how our method compares against fine-tuned state of the art image segmentation models.
## Methods
### Plants and Imaging
A total of 106 potato plants were grown in two batches. The plants were propagated in tissue culture and then planted into 10x10 cm pots and grown in a glasshouse. The first set of plants were 57 potato plants of variety Desiree. These plants were grown in 4 trays of 15 plants with the last tray missing 3 plants which did not survive transplanting into soil. Once a week, each plant was photographed with a DSLR with a top-down shot taken roughly 80 cm above the plants, which were placed on a common paper background. Each week 12 plants were harvested with three plants being taken from each tray. The harvested plants had their total leaf area measured in cm\({}^{2}\), the number of leaves was counted, and the fresh mass of the leaves was weighed. The leaves were then placed in an oven at 50 °C for 7 days and then the dry mass was weighed thereafter. Fresh mass can be highly variable with the time since last watering occurred so dry mass is generally favoured as a measure for plant growth. After 5 weeks all of the plants were harvested, and the experiment was complete. A second set of plants consisting of 49 potato plants of variety Voyager was planted three weeks later than Desiree and the same process was applied but with 10 plants being harvested each week instead of 12.
A total of 318 images of potato plants of a varying age between 1 week and 6 weeks growth were gathered. 128 images were manually annotated using the labelme software [19], with a series of points being marked on the leaf boundary to segment each leaf into individual polygons. The annotated images were from the second and third week of imaging and consisted of 45 images of one week old Voyager plants, 34 images of three week old Desiree and 49 images of two week old Desiree. For 32 of these images the plants were then harvested, meaning corresponding ground truth leaf number, leaf area and leaf mass data for these plants is available. To create our segmentation model this dataset was split into random, training (80/128), validation (32/128), and test (16/128) data sets. This resulted
in 990 labelled instances of leaves in the training set and 282 and 199 in the validation and test sets respectively. Since no training was carried out for the Segment Anything Model, both the training and validation data sets were used in model development but only the test set is used for evaluation so a comparison can be made with the Mask R-CNN model. Figure 1a shows an example image of the canopy of a potato plant and Figure 1b shows the labelled leaf polygons.
### Leaf Only SAM
We first prompted segment anything with a 32x32 grid of points evenly spaced on the image to generate fully automatic segmentation masks. Segment anything has the option to perform multi-level segmentation where the model is also applied to smaller crops of the image to improve performance in detection of smaller objects. We utilised this to also segment at an additional crop layer. This gives an output of a series of segmentation masks for the images. This includes masks corresponding to objects of interest (the plant leaves) but also many other background objects. We refer to this step as Base SAM when carrying out comparisons. An additional four post processing steps were added to the output to identify only leaf objects.
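For reference, this fully automatic step can be written with the official `segment-anything` package roughly as below; the checkpoint file name and model type are placeholders for whichever SAM weights are downloaded, while the grid size and extra crop level match the settings just described.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # assumed checkpoint path
mask_generator = SamAutomaticMaskGenerator(
    model=sam,
    points_per_side=32,   # 32x32 grid of point prompts
    crop_n_layers=1,      # one additional crop level for smaller objects
)
image = cv2.cvtColor(cv2.imread("plant.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts, each with a boolean "segmentation" array
```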
The first post process step was a colour checking step. This utilises the fact we know we are looking for green plant material so finds green masks in the image. The original RGB images were converted to the HSV colour space. The mean hue and saturation were then used to determine if the objects found were leaves or not by applying thresholds to these values. A mean hue of between 35 and 75 and saturation over 35 were used. We refer to this step as Green pixels when carrying out comparisons.
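A sketch of this colour check is given below, continuing from the snippet above. The hue and saturation thresholds are the values quoted here, but the colour scale they are applied on (assumed to be OpenCV's 0-179 hue and 0-255 saturation ranges) is an assumption.

```python
import cv2
import numpy as np

def is_green(image_rgb, mask, hue_range=(35, 75), min_saturation=35):
    """Return True if the mean hue/saturation of the masked pixels look like leaf material."""
    hsv = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2HSV)
    mean_hue = hsv[..., 0][mask].mean()
    mean_sat = hsv[..., 1][mask].mean()
    return hue_range[0] <= mean_hue <= hue_range[1] and mean_sat > min_saturation

# keep only masks from the generator above that pass the colour check
leaf_masks = [m["segmentation"] for m in masks if is_green(image, m["segmentation"])]
```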
One of the problems SAM suffers from is ambiguity about the level of object wanted. In our case a segmentation of the entire plant is also a valid object and was often found. A check step was then introduced to remove this if present. If more than two masks were found for an image after the colour check was applied, then a total mask of all pixels was generated. If any objects had an Intersection over Union (IoU) of more than 90% with this mask, then they were assumed to contain the entire plant and so were removed from our list of leaf objects. We refer to this step as Not all when carrying out comparisons.
A small number of other miscellaneous objects were still detected at this point. These were clearly not leaves due to their shape and as a result a shape filter was used to remove these objects. For every mask, the area of the mask was compared to the area of the minimum enclosing circle. If the ratio of mask area was less than 0.1 of the area of minimum enclosing circle the object was clearly not leaf shaped and so removed from our list of leaf objects. Since we wish to detect partially occluded leaves and there is reasonable diversity in leaf shape this step could not be too prescriptive on shape wanted. We refer to this step as Correct shape when carrying out comparisons.
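The shape check can be sketched as follows, comparing each mask's area with that of its minimum enclosing circle computed with OpenCV; the 0.1 ratio is the threshold stated above.

```python
import cv2
import numpy as np

def is_leaf_shaped(mask, min_ratio=0.1):
    """Reject masks whose area is tiny relative to their minimum enclosing circle."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    _, radius = cv2.minEnclosingCircle(pts)
    circle_area = max(np.pi * radius ** 2, 1e-6)
    return mask.sum() / circle_area >= min_ratio
```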
Figure 1: Example a) Image and b) Ground Truth Label pair.
There were still some objects that were present that were a collection of multiple different leaves. We often detected both individual leaves and a mask containing several leaves covering the same area. To remove multi leaf masks we detected multi leaf objects by a simple sum of all the mask objects in the image - labelling each pixel by how many masks it was part of. Any mask with a mean score of over 1.5 was assumed to be a duplicate object. These duplicate masks were then checked to see if they were 90% contained in other masks indicating they were in fact a leaf. Masks that were not contained in other masks were then checked to see if at least 90% of their area was contained in other masks and removed if this was the case.
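A simplified rendering of one reading of this duplicate-removal step is sketched below; it is illustrative only and not our exact code.

```python
import numpy as np

def remove_multi_leaf(mask_arrays):
    """mask_arrays: list of boolean HxW arrays. Drop masks that mostly cover several other leaves."""
    coverage = np.sum(mask_arrays, axis=0)            # per-pixel count of overlapping masks
    kept = []
    for i, m in enumerate(mask_arrays):
        if coverage[m].mean() <= 1.5:                 # not flagged as a duplicate
            kept.append(m)
            continue
        others = [o for j, o in enumerate(mask_arrays) if j != i]
        inside_single = any((m & o).sum() >= 0.9 * m.sum() for o in others)
        covered_by_union = (m & np.any(others, axis=0)).sum() >= 0.9 * m.sum()
        if inside_single or not covered_by_union:
            kept.append(m)                            # an individual leaf nested in a larger mask
    return kept
```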
### Mask R-CNN
Mask R-CNN [8] remains a popular architecture for performing instance segmentation. A Mask R-CNN approach was developed using the Detectron2 framework to compare with the segmentation provided by the proposed Leaf Only SAM technique. Both 50 and 101 layer ResNet feature proposal networks (ResNet-50-FPN and ResNet-101-FPN) were employed as backbones to investigate the effect of CNN depth on the segmentation of our novel potato dataset and trained over 2000 iterations. Training and inference was performed using a single NVIDIA Quadro RTX 8000 with a batch size of 16 images and where images were resampled to have a maximum side length of 1333 pixels. Additional image augmentation techniques, such as rotation, mirroring, and other transformations, which improve training dataset variability were not employed in this work.
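As a rough illustration, this setup can be expressed in Detectron2 along the following lines; the registered dataset names are hypothetical placeholders, the backbone shown is the 50-layer variant, and all remaining options are left at their defaults, as in the experiments.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))   # or the R_101 variant
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("potato_leaf_train",)    # hypothetical registered dataset names
cfg.DATASETS.TEST = ("potato_leaf_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1            # single "leaf" class
cfg.SOLVER.IMS_PER_BATCH = 16                  # batch size of 16 images
cfg.SOLVER.MAX_ITER = 2000                     # 2000 training iterations
cfg.INPUT.MAX_SIZE_TRAIN = 1333                # maximum side length of 1333 pixels

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```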
### Evaluation Metrics
In order to evaluate the performance of the two methods applied to our leaf segmentation dataset, a number of key metrics were identified. Average Precision (AP) and Average Recall (AR) are used in assessing models applied to the Common Objects in Context (COCO) dataset and are used here. Specifically, two definitions of each are used: the first where Precision and Recall are averaged over a number of IoU thresholds T \(\in[0.5:0.05:0.95]\), denoted as AP and AR, as well as that where T = 0.75, denoted as AP\({}_{75}\) and AR\({}_{75}\).
In addition to Precision and Recall, the Dice Similarity Coefficient (DSC) was used. As this poses a binary classification problem of leaf vs. non-leaf, the DSC is equivalent to the F1 score and is calculated for each proposed instance as
\[\mathrm{DSC}=(2\ *\mathrm{TP})/(2\ *\mathrm{TP}+\mathrm{FP}+\mathrm{FN}), \tag{1}\]
where TP, FP, and FN are the true positive, false positive, and false negative detections respectively.
For the calculation of DSC for the SAM based methods each ground truth mask was compared to the closest detected object since no classification labels are produced. It therefore measures only the accuracy of the segmentation masks themselves not the ability to determine if they are leaves or not. The performance is evaluated after each of the described steps in turn so the effect of each of these can be seen.
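Because the SAM-based methods produce no class labels, this best-match scoring can be sketched as in the short example below (illustrative only):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

def best_match_dsc(gt_masks, pred_masks):
    """Mean Dice of each ground-truth mask against its closest (best-matching) predicted mask."""
    if not pred_masks:
        return 0.0
    return float(np.mean([max(dice(g, p) for p in pred_masks) for g in gt_masks]))
```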
### Results and Discussion
\begin{table}
\begin{tabular}{l l l l l l l l} \cline{2-9} & Model & Backbone & AR\({}_{75}\) & AP\({}_{75}\) & AR & AP & DSC \\ \hline \multirow{4}{*}{Validation} & Mask R-CNN & ResNet-50-FPN & 83.3 & 81.3 & 74.6 & 72.1 & 0.849 \\ & Mask R-CNN & ResNet-101-FPN & 81.9 & 79.3 & 73.7 & 70.4 & 0.84 \\ & Base SAM & - & 78.9 & 10.4 & 70.5 & 9.5 & 0.792 \\ & Leaf Only SAM & - & 78.8 & 71.0 & 70.4 & 63.4 & 0.786 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of segmentation performance of Segment Anything with Mask R-CNN models trained on the leaf potato dataset
Looking at Table 1 we can see that our Leaf Only SAM model is not able to perform as well as a fine-tuned Mask R-CNN model. We achieved an AP\({}_{75}\) of 60.3 and AR\({}_{75}\) of 63.2 which, while not poor scores, are less than the AP\({}_{75}\) of 74.7 and AR\({}_{75}\) of 78.7 achieved by Mask R-CNN. This is not surprising since our model had not been trained on similar potato images like the Mask R-CNN model. We can also see that the post processing steps introduced in Leaf Only SAM are important in improving the precision of the model. Base SAM achieves an AP\({}_{75}\) of only 12.6; the recall of Base SAM is slightly higher than the recall of our model, but a 1.5 reduction in recall is a good trade off for a 47.7 increase in precision. The DSC of SAM and our Leaf Only SAM, which measures the accuracy of the best fit mask to each ground truth object, shows a worse performance compared to Mask R-CNN, indicating that fine-tuned models can still outperform general foundation models like SAM. It may be possible to improve results of SAM by fine tuning the model more heavily on leaf data.
The results for the different steps in our Segment Anything model, as displayed in Table 2, show the importance of adding additional post processing steps. Each line refers to an additional step being added as described in the methods section. Segment anything alone achieves an average precision of only 12.6. Each of the post processing steps we have developed increases precision. The first step, removing objects of the wrong colour, has the biggest effect, but further increases are seen with each step. There is a slight reduction in the recall of the method with the additional steps. This is a result of them removing found objects. The first step, removing pixels of the wrong colour, has the biggest reduction; most of the other steps had no reduction in recall until the final step, which slightly lowers recall, indicating instances where it is removing correctly segmented leaves.
In order to determine what is causing the remaining low precision, an analysis was carried out of the false positive masks generated by our Leaf Only SAM model. This was done by manually looking at the outlines of each of the false positive masks plotted on the original image. These were then put into one of 5 categories: leaf (i.e., judged to be a true positive), multiple leaves together, only partial segmentation of a leaf, a part of the plant that is not a leaf, and finally objects which are not part of the plant. Any leaves that were occluded, and so are only partially visible, were classed as a leaf if all of the visible leaf was segmented. Figure 2 shows examples of the segmentations obtained using a) Base SAM, b) Leaf Only SAM, and c) Mask R-CNN. The yellow outlines in Figure 2b indicate a false positive detection. We can see that these are a combination of multiple leaf objects, objects missed from manual segmentation and some small objects which are inaccurately segmented.
\begin{table}
\begin{tabular}{l c c c c c} \cline{2-6} \multicolumn{1}{c}{} & AR\({}_{75}\) & AP\({}_{75}\) & AR & AP & DSC \\ \hline Base SAM & 64.7 & 12.6 & 59.6 & 11.7 & 0.729 \\ Green Pixels & 63.7 & 54.8 & 58.8 & 51.7 & 0.718 \\ Not all & 63.7 & 59.7 & 58.8 & 56.0 & 0.700 \\ Correct Shape & 63.7 & 59.9 & 58.8 & 56.2 & 0.700 \\ Remove multi-leaf Objects & 63.2 & 60.3 & 58.4 & 56.4 & 0.700 \\ \hline \end{tabular}
\end{table}
Table 2: Results of ablation study showing the relative performance of our different post processing steps
Figure 3 shows the results of this false positive analysis, we can see 37% of the false positives were judged to be actually leaves. The evaluation was not done side by side with manual annotations so some of these objects will have failed the false positive step due to not reaching the 75% IoU threshold and can therefore be thought of as correct but poor segmentations. Other false positive leaves will be those leaves which are very small or heavily occluded so were missed by manual annotation, the mean size of masks in this category was 18,500 pixels compared to 34,900 for all found masks. This shows the value of using SAM to improve manual data labelling. 23% were of things not part of the plant. The remaining 40% were of plant material but not leaves. There were more masks containing multiple leaves than mask containing only parts of leaves, but both categories were found. A model that was fine tuned on potato plants may be more able to judge where the boundaries of a full leaf are so avoiding these problems.
In order to help understand how our segmentation technique can be related back to real world plant measurements, a comparison looking at both leaf area and leaf dry mass was made. The correlation was calculated between pixel counts on both the manually annotated images and automatically segmented images with the leaf area and leaf dry mass measures. These results, which can be seen in Table 3, show that there is good agreement between manually annotated pixel counts and both leaf
Figure 3: Results of manual classification of false positives from manual classification data.
Figure 2: Leaf segmentation on the image from Figure 1 using a) Base SAM. b) Leaf Only SAM. c) Mask R-CNN
area and dry mass (\(r>0.9\)). The relationship with our automatic segmentation method was weaker (\(r=0.740\) and \(r=0.625\), respectively). The relationship of physically measured leaf area to the image derived methods is shown in Figure 4. The stronger relationship between the manually segmented data and leaf area compared to the automatically derived segmentation can be seen. This shows that improving the accuracy of the segmentation method could improve the relationship to manual measures.
## Conclusions
Our pipeline builds upon segment anything and achieves reasonable accuracy on the leaf segmentation task in potato without any fine tuning or training on potato images. This shows that segment anything is a powerful tool that has potential to be used in the field of plant phenotyping. The removal of the need to have access to annotated data for model training would speed up adoption in more minor crops or growing settings.
There was a reduction of just over 10% for both precision and recall when compared to a fine-tuned model with a slightly larger reduction in dice score. Comparison with leaf area and leaf mass shows
Figure 4: Plot showing the relationship between leaf area physically measured and that from images. For both manual image annotation and automatic derived data.
\begin{table}
\begin{tabular}{l c c c c} \hline & & No. pixels & No. pixels Leaf \\ & Leaf area & Leaf dry mass & manual & Only SAM \\ \hline Leaf area & 1 & & & \\ Leaf dry mass & 0.891 & 1 & & \\ No. pixels manual & 0.930 & 0.951 & 1 & \\ No. pixels Leaf Only SAM & 0.740 & 0.625 & 0.760 & 1 \\ \hline \end{tabular}
\end{table}
Table 3: Correlation between physical measures of leaf area and dry mass with image derived measurements.
that improvements in leaf segmentation techniques could lead to improved relationship with manual data.
Further work could be done in improving the post processing steps. The inclusion of a small CNN based classifier for the masks generated by SAM, similar to the classification branch of Mask R-CNN, could also be another way to improve performance.
## Acknowledgements
This work was supported by strategic research programme (2022-2027) funding from the Rural and Environmental Science and Analytical Services Division of the Scottish Government.
## Data and Code Availability
The data associated with this paper, which consists of images, image annotations and manual measurements, can be found on Zenodo here[20].
The code for Leaf Only SAM can be seen on Github here.
|
2305.11257 | Heterogeneous contributions can jeopardize cooperation in the Public
Goods Game | When studying social dilemma games, a crucial question arises regarding the
impact of general heterogeneity on cooperation, which has been shown to have
positive effects in numerous studies. Here, we demonstrate that heterogeneity
in the contribution value for the focal Public Goods Game can jeopardize
cooperation. We show that there is an optimal contribution value in the
homogeneous case that most benefits cooperation depending on the lattice. In a
heterogeneous scenario, where strategy and contribution coevolve, cooperators
making contributions higher than the optimal value end up harming those who
contribute lower. This effect is notably detrimental to cooperation in the
square lattice with von Neumann neighborhood, while it can have no impact in
others lattices. Furthermore, in parameter regions where a higher-contributing
cooperator cannot normally survive alone, the exploitation of lower value
contribution cooperators allows their survival, resembling a parasitic
behavior. To obtain these results, we employed various distributions for the
contribution values in the initial condition and conducted Monte Carlo
simulations. | Lucas S. Flores, Mendeli H. Vainstein, Heitor C. M. Fernandes, Marco A. Amaral | 2023-05-18T18:47:43Z | http://arxiv.org/abs/2305.11257v2 | # Heterogeneous contributions can jeopardize cooperation in the Public Goods Game
###### Abstract
When studying social dilemma games, a crucial question arises regarding the impact of general heterogeneity on cooperation, which has been shown to have positive effects in numerous studies. Here, we demonstrate that heterogeneity in the contribution value for the focal Public Goods Game can jeopardize cooperation. We show that there is an optimal contribution value in the homogeneous case that most benefits cooperation depending on the lattice. In a heterogeneous scenario, where strategy and contribution coevolve, cooperators making contributions higher than the optimal value end up harming those who contribute lower. This effect is notably detrimental to cooperation in the square lattice with von Neumann neighborhood, while it can have no impact in others lattices. Furthermore, in parameter regions where a higher-contributing cooperator cannot normally survive alone, the exploitation of lower value contribution cooperators allows their survival, resembling a parasitic behavior. To obtain these results, we employed various distributions for the contribution values in the initial condition and conducted Monte Carlo simulations.
## I Introduction
Evolutionary Game Theory provides a mathematical and theoretical framework to understand the dynamics of cooperative behavior in a competitive environment. In this framework, individuals are seen as rational players who usually interact with each other aiming to maximize their own gains. One of the central questions in this field is how cooperation can emerge and persist in such a context, where individuals are motivated by self-interest and are exposed to the pressures of natural selection [1]. Despite the numerous contributions made to this field, the underlying mechanisms which allow cooperation continue to be the subject of intensive research.
Public Goods Games (PGGs) are a classic model used in game theory to study the evolution of cooperation. In a PGG, participants must decide how much to contribute to a common pool of resources. The total contribution is multiplied by a factor greater than one, reflecting the positive feedback of cooperation, and then divided equally among all participants, regardless of their individual contribution. This creates a dilemma, as participants have an incentive to free-ride and not contribute, while the overall outcome is improved by cooperation. An example of a real-world interaction that is often described by the dynamics of PGGs is the provision of public goods and services, such as education, health care, and environmental protection [2]. In these contexts, individuals must decide whether to contribute to the provision of these goods and services, and their decision is influenced by the level of contributions made by others.
Many mechanisms have been proposed to sustain cooperation, such as memory [3], spatial reciprocity [4; 5; 6; 7; 8; 9], punishment [10; 11; 12], reward [13], aspiration [14; 15], commitment to contribute [16], voluntary interactions [17], and heterogeneity [18; 19; 20; 21; 22; 23; 24; 25].
Typically, studies of the PGG assume that all players contribute equally. However, this assumption does not reflect real-world situations where the distribution of wealth in society is heterogeneous, with some individuals possessing significantly more resources than others. To address this issue, researchers have explored the impact of heterogeneity in the distribution of contribution values on cooperation levels in the PGG. For instance, some studies have examined situations where the contribution depends on the number of cooperators (\(C\)s) in the group [26; 27], and have observed an increase in cooperation levels. In addition, the effect of using a uniform distribution for the contributions of cooperators was investigated in [28], where it was demonstrated that increasing the range of the distribution can lead to a greater benefit for cooperation. Heterogeneity in players' contributions, depending on the players' age, was explored in [29], yielding an enhancement in cooperation in some situations.
At a first glance, a topic that may appear unrelated to heterogeneity is the effect of noise in regular lattices [30; 31; 32]. However, as we will demonstrate in this paper, different contribution values in the PGG can be interpreted as distinct noise scenarios when the update rule follows the Fermi function. In this work, we investigate the PGG in the classical homogeneous scenario where all cooperators contribute a fixed value of \(c\) and explore how it affects the amount of cooperation. When dealing with heterogeneous scenarios, we assume that initially each cooperator contributes according to a given distribution, while defectors consistently contribute nothing. A crucial aspect of our model is that the players always copy the contribution value when switching strategies. Through Monte Carlo simulations, we reveal that contribution heterogeneity can actually hinder cooperation in some parameter regions. However, our study also uncovers interesting phase transitions and second-order free-riding effects. These findings shed new light on the complex relationships between wealth disparity, cooperation, and the provision of public goods and services.
## II Model
Here, we study the heterogeneous Focal Public Goods Game (FPGG, also called pairwise interactions [33]), in which cooperators (\(C\)) may contribute distinct values, whereas defectors (\(D\)) contribute nothing. For the FPGG in a regular lattice, a player's payoff, \(\Pi_{X}\), is calculated by summing the contributions from all their first neighbors, including themselves. Next, all contributions are multiplied by a factor \(r\) and the resulting value is equally distributed among all \(G\) members of the group. Finally, each cooperator pays a cost, thus resulting in
\[\Pi_{C_{c_{i}}} =\frac{r}{G}\sum_{k=1}^{G}c_{k}-c_{i} \tag{1}\] \[\Pi_{D} =\frac{r}{G}\sum_{k=1}^{G}c_{k}, \tag{2}\]
where \(c_{k}\) is player \(k\)'s contribution (\(c_{k}=0\) if it is a defector) and we denote the central player and their contribution by \(C_{c_{i}}\) and \(c_{i}\), respectively. We initialize the simulation with an equal fraction of randomly distributed cooperators and defectors. Uniform, Gaussian, and Bernoulli discrete distributions of the contributions are used among the cooperating half of the initial population when dealing with the heterogeneous scenario. When referring to a homogeneous case, we use a fixed \(c_{i}\equiv c\) for all cooperators in the population.
The system evolves as follows: first, a player \(X\) and one of their neighbors \(Y\) are randomly chosen, and their payoffs are calculated from the equations above. Player \(X\) adopts \(Y\)'s strategy and contribution value (\(c_{Y}\)) with probability
\[W_{X\to Y}=\frac{1}{1+e^{-(\Pi_{Y}-\Pi_{X})/K}}\, \tag{3}\]
where \(K\) is the noise associated with irrationality. The same process is repeated \(N\) times characterizing one Monte Carlo Step (MCS), where \(N\) is the total population size. We used \(t_{max}=10^{5}\) MCS and \(K=0.1\). Averages were performed over 100 random initial conditions where appropriate. The simulations were performed on the square lattice with von Neumann neighborhood (\(G=5\)) and on the triangular lattice (\(G=7\)), both with \(N=100^{2}\) and periodic boundary conditions.
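To make the update scheme concrete, the following is a minimal NumPy sketch of the homogeneous simulation described above (random sequential updates, Fermi rule, copying of both strategy and contribution value). The lattice size, number of MCS, and helper names are illustrative choices, not the code used for the results in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, G = 50, 5                         # lattice side, von Neumann group size (smaller than in the paper)
r, K, c = 4.5, 0.1, 1.0              # synergy factor, noise, homogeneous contribution
mcs = 1000                           # Monte Carlo steps (the paper uses up to 1e5)

# contribution of each site: 0 for defectors, c for cooperators (50/50 random start);
# for the heterogeneous case, draw c_i from the chosen distribution instead.
contrib = np.where(rng.random((L, L)) < 0.5, c, 0.0)

def payoff(x, y):
    """Focal PGG payoff of the player at (x, y): group share minus own contribution."""
    group = [(x, y), ((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]
    pot = sum(contrib[i, j] for i, j in group)
    return r * pot / G - contrib[x, y]

for _ in range(mcs * L * L):         # one MCS = N elementary updates
    x, y = rng.integers(L, size=2)
    dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    xn, yn = (x + dx) % L, (y + dy) % L
    dP = payoff(xn, yn) - payoff(x, y)
    if rng.random() < 1.0 / (1.0 + np.exp(-dP / K)):   # Fermi rule, Eq. (3)
        contrib[x, y] = contrib[xn, yn]                # copy strategy AND contribution value

print("cooperator density:", np.mean(contrib > 0))
```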
## III Results
### Homogeneous case
First, we address the effect of a homogeneous contribution value for all cooperators in the population. In this case, Eqs. (1) and (2) become
\[\Pi_{C} =\frac{rc}{G}\,N_{C}^{C}-c \tag{4}\] \[\Pi_{D} =\frac{rc}{G}\,N_{C}^{D}, \tag{5}\]
where \(N_{C}^{i}\) is the number of cooperators in player \(i\)'s group, each contributing the same value \(c\). If we substitute Eqs. (4) and (5) into Eq. (3), \(c\) factors out, enabling us to define an effective noise \(K^{\prime}=K/c\). Therefore, a simulation with fixed \(K=0.1\) but with a varying contribution value \(c\) can be interpreted as viewing the system under different noise scenarios. Fig. 1 shows the phase diagram (\(r\times c\)) for the steady-state densities of cooperation on the square (left panel) and triangular (right panel) lattices. For each \(c\), there is a corresponding critical value \(r_{c}\) below which cooperation is unviable. For the square lattice, there is an optimal intermediate contribution value (\(c_{o}\approx 0.4\), i.e., \(K^{\prime}\approx 0.25\)) that sustains cooperation with \(r_{c_{o}}\approx 4.3\). In contrast, the triangular lattice exhibits a distinct behavior, where \(r_{c}\) decreases monotonically as the contribution value \(c\) increases. Both observations are consistent with well-known results: previous studies have established that regular lattices with zero clustering coefficient, such as the square lattice, exhibit an optimal noise value that can sustain cooperation [31; 32; 33]. Conversely, in lattices with non-null clustering coefficients, such as the triangular lattice, a deterministic scenario is the best for cooperation to persist, in the sense that the critical \(r\) tends to the lowest values in this case.
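To see explicitly why \(c\) only rescales the noise, take \(X\) a defector and \(Y\) a cooperator and substitute Eqs. (4) and (5) into Eq. (3). With \(\Delta N_{C}\) the difference in the number of cooperators between the two groups (cf. Eq. (11) in the Appendix),

\[\Pi_{C}-\Pi_{D}=c\left(\frac{r}{G}\,\Delta N_{C}-1\right)\quad\Longrightarrow\quad W_{D\to C}=\frac{1}{1+e^{-\left(\frac{r}{G}\Delta N_{C}-1\right)/(K/c)}}\,\]

so the dynamics depends on \(K\) and \(c\) only through the effective noise \(K^{\prime}=K/c\).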
A qualitative understanding of the above results can be achieved by recalling that the square lattice has only one important deterministic transition, occurring at \(r=G\), whereas the triangular lattice has two such transitions at \(r=G/2\) and \(G\), as illustrated in the appendix of [9]. In the regime where \(r>G\), cooperation dominates in the low-noise case (\(c\gg 1\)) for both lattices. Thus, the introduction of noise can either harm cooperation or maintain the domination unchanged. On the other hand, for low values of \(r\) such that \(r<G\) for the square lattice and \(r<G/2\) for the triangular lattice, cooperation becomes extinct, and therefore the introduction of noise can only benefit cooperators or maintain their extinction.
For the triangular lattice, between these limits (\(G/2<r<G\)), cooperators coexist with defectors in the low noise case. Here, the introduction of noise can have different effects, depending on the parameter values. This is, in fact, the case since cooperation is favored slightly below \(r=G\) and compromised above the \(r=G/2\) transition. Were it not for this second transition at \(r=G/2\), the results for the triangular lattice would be similar to those of the square lattice, since it also has a locally optimal intermediate contribution value if we restrict ourselves to the range \(r\gtrapprox 5.5\); the phase diagrams are similar above this value, presenting a maximum in cooperation density in both cases if \(r<G\).
### Heterogeneous case
Consider a population consisting solely of cooperators with varying contribution values. In this scenario, when
a player adopts a neighbor's strategy, they also adopt their contribution value. As a result, cooperators with similar \(c_{i}\) values tend to cluster together. We can use the phase diagrams presented in Fig. 1 to gain insight into the behavior of this heterogeneous population.
Fig. 2 illustrates a specially prepared initial condition of cooperators, \(C_{c_{i}}\), with different contribution values (\(c_{i}\)): 0.5 (light red), 1 (light blue), and 2 (dark blue), in the absence of defectors. In general, we observe that lower contribution cooperators behave similarly to defectors. Consequently, for sufficiently small \(r\), the \(C_{0.5}\) cooperators dominate the population (top row). When \(r\) is sufficiently large, we observe coexistence between \(C_{0.5}\) and \(C_{2}\) (middle row). In this case, \(C_{2}\) forms clusters within a sea of \(C_{0.5}\), creating a situation that resembles the classic \(C\)_vs_\(D\) game. For smaller \(r\) values,
Figure 1: The phase diagrams of the equilibrium cooperation density for the square lattice with von Neumann neighborhood (left panel) and for the triangular lattice (right panel) show the combined effects of the multiplicative factor, \(r\), and the contribution value, \(c\), on cooperation for homogeneous populations. These diagrams illustrate that there exists an optimal contribution value (\(c_{o}\approx 0.4\)) that minimizes the corresponding critical \(r\) value above which cooperation can occur for the square lattice case (\(r_{c_{o}}\approx 4.3\)). On the other hand, for the triangular lattice, the critical \(r\) value decreases monotonically as the contribution increases.
Figure 2: Temporal evolution of cooperators in clusters with different \(c\) values on the square lattice: \(c=0.5\) (light red), \(c=1.0\) (light blue), and \(c=2\) (dark blue). The top row shows the dominance of the lower contribution cooperators for \(r=3.7\). The middle row illustrates the coexistence between the lowest and the highest contribution cooperators for \(r=4.78\). Finally, the bottom row illustrates the regime where higher contributing cooperators begin to dominate (\(r=5.1\)). The number of MCS taken during each time evolution varies, highlighting the unique characteristics of each situation. The last column corresponds to 300 MCS for the first row and 1300 MCS for the other two.
where \(C_{2}\) cannot survive, we find that \(C_{0.5}\) and \(C_{1}\) can also coexist. Moreover, the second row in Fig. 2 demonstrates that \(C_{1}\) can initially coexist with both \(C_{0.5}\) and \(C_{2}\) individually, but its density eventually declines until it becomes extinct when all three contribution values are present. This implies that \(C_{0.5}\) and \(C_{2}\) exhibit some synergistic effect that jeopardizes the survival of \(C_{1}\) cooperators. This observation is crucial and will be further explored in the subsequent sections of this paper. Finally, for larger \(r\) values, we see that the \(C_{2}\) starts to predominate in the population (bottom row). An important observation is that the higher contribution cooperators always cluster within the sea of lower contribution cooperators, suggesting that the notion of cooperators and defectors depends on whom the game is played against. In Appendix A, we provide a quantitative analysis of the payoffs that shows that in the absence of defectors, the lowest contribution cooperators behave as defectors, while a higher contribution cooperator behaves like a cooperator with a different contribution value.
#### Coexistence and parasitism
Next, we will study a mix of different contribution cooperators in the presence of defectors. In principle, different cooperators could coexist if \(r\) is higher than both their homogeneous \(r_{c}\) values. However, due to the behavior discussed previously, different \(C\)s can jeopardize one another. The equilibrium density of cooperators as a function of \(r\) is presented in Fig. 3 for both homogeneous cases (with \(c=0.5\), \(c=1.0\) and \(c=2\), dashed curves) and the heterogeneous case (solid curve), where cooperators contribute the aforementioned values with equal probability in the initial configuration. First, when \(r\) is only slightly above the critical value (\(r_{c_{o}}\approx 4.3\)) of the lowest contributing cooperator, \(C_{0.5}\), all other contribution \(C\)s become extinct. As \(r\) approaches, but still remains below, the critical value (\(r_{c}\approx 4.58\)) of \(C_{1}\), a coexistence phase emerges for \(C_{0.5}\) and \(C_{1}\), despite the fact that the latter cannot survive alone. This is due to a parasitic behavior between them, which occurs in the first highlighted light grey region. As we further increase \(r\), at some point it will become higher than both homogeneous critical values of \(C_{0.5}\) and \(C_{1}\), characterizing the beginning of the coexistence phase between them. At some point for a high enough value of \(r\), \(C_{1}\) cooperators will be favored and coexist with defectors alone. For \(r\) close to, but still below the critical value (\(r_{c}\approx 4.8\)) of \(C_{2}\) cooperators, a similar parasitism mechanism occurs and now \(C_{2}\) cooperators take advantage of \(C_{0.5}\) or \(C_{1}\) cooperators (in different regions). Above all of the homogeneous \(r_{c}\) values, we observe another coexistence region between \(C_{2}\) cooperators and \(C_{0.5}\) or \(C_{1}\) (right dark grey rectangle).
From the inset of Fig. 3, it is clear that the higher contribution \(C\)s benefit from the lower contribution ones. Besides being able to survive in regions where they would be extinct when only in the presence of defectors, we observe that their presence decreases the density of \(C_{0.5}\) when compared to their homogeneous equilibrium density. Overall, this lowers the total density of cooperators in this region. In addition, Fig. 3 indicates that the critical \(r\) value required to maintain cooperation in a heterogeneous scenario is determined by the lowest homogeneous \(r_{c_{i}}\) value among all cooperators, since when \(r\) is low all cooperators with a homogeneous \(r_{c}>r\) cannot survive. This finding is consistent with a similar model that used uniformly distributed contributions taken from \(U(1-\sigma,1+\sigma)\)[28], where \(U(a,b)\) represents a continuous uniform distribution in the interval \([a,b]\). In that study, increasing \(\sigma\), which eventually makes \(U\) encompass our optimal contribution value (\(c_{0}\approx 0.4\)), reduced the critical \(r\) required to sustain cooperation until it eventually plateaued for a sufficiently high \(\sigma\) value.
Fig. 4 shows snapshots of the time evolution for the parasitism and coexistence between \(C_{2}\) and \(C_{0.5}\) cooperators. In the first row (\(r=4.77\)), we show the parasitic behavior where \(C_{2}\) cooperators close to the lower contribution \(C_{0.5}\) survive while an isolated cluster of \(C_{2}\) is driven to extinction. The second row shows the region where \(r\) is greater than all homogeneous critical \(r_{c}\) values (\(r\gtrsim 4.8\)). While \(C_{2}\) cooperators survive on their own because there are no higher contribution \(C\)s to hinder their growth, they will harm all other lower contribution \(C\)s. To see which other cooperators
Figure 3: Equilibrium density of cooperation as a function of \(r\). The solid line corresponds to the heterogeneous case, where cooperators initially contribute with values of 0.5, 1, and 2, uniformly distributed. The dashed lines represent the homogeneous cases for each individual contribution value. We observe that there are dominance regions for all possible \(C\)s. In the light-grey rectangles, we identify the parasitism regions, where higher contribution cooperators survive at the expense of lower contribution ones, where they cannot survive on their own (see inset for the case between \(C_{1}\) and \(C_{0.5}\)). Specifically, the higher contribution \(C\)s can exploit any lower contribution cooperators, but for different values of \(r\). However, in the dark-grey rectangles, coexistence regions appear where \(C_{1}\) coexist with \(C_{0.5}\), and \(C_{2}\) can coexist with both.
survive with them, we can refer to the case with fewer strategies for the same \(r\) region. With only \(D\), \(C_{1}\) and \(C_{2}\) in the population, the \(r\) values in question are high enough to favor \(C_{2}\), with the exception of \(r=4.8\) (data not shown). For this value, \(C_{1}\) cooperators can coexist with \(C_{2}\). However, in the case of \(D\), \(C_{0.5}\) and \(C_{2}\) (data not shown), the \(r\) values are still in the coexistence region between \(C_{0.5}\) and \(C_{2}\). Therefore, \(C_{2}\) can coexist with both strategies for \(r=4.8\) but only with \(C_{0.5}\) for higher values of \(r\).
### Homogeneous vs heterogeneous cases
By comparing the homogeneous cases with the heterogeneous one, represented by the dashed and solid lines in Fig. 3, respectively, we observe that heterogeneity can favor cooperation, leading to a higher equilibrium density for \(r<5\), when compared to the homogeneous case with \(c=2\). This is due to the high density of the lower contribution cooperators, which compensates for the low density that the high contribution cooperators have when alone. On the other hand, when compared with the homogeneous case with \(c=0.5\), the presence of high contribution cooperators prevents the expansion of lower contribution ones, resulting in a worse scenario for cooperation.
In summary, determining whether heterogeneity is beneficial for cooperation depends on the reference point. To obtain a more definitive answer, it is useful to compare the heterogeneous case to the homogeneous case at the optimal value. Fig. 5 illustrates this scenario for various initial distributions. Near the critical point (\(r_{c}\approx 4.3\)), high-contribution cooperators do not hinder the optimal \(C\)s since they are unable to survive at such low \(r\) values. However, for distributions that do not encompass optimal cooperators, the critical \(r\) will be higher, as previously discussed. As \(r\) increases, higher contribution \(C\)s begin to survive and can jeopardize cooperation, as shown in the figure.
As demonstrated in Fig. 3, various coexistence, parasitism, and dominance regions exist for different \(C\)s. As a result, for continuous distributions of the initial contributions, we expect a superposition of all possible scenarios, making predictions for each sample challenging, as indicated by the large standard deviations of the cooperator's equilibrium density in Fig. 5. In the inset of Fig. 5, we present the mean equilibrium contribution among samples and its standard deviation for the uniform \(U(c_{\circ},5)\) case. Both of these quantities increase as \(r\) increases. For low \(r\) values, only one type of \(C\) with a contribution close to the optimum survives. However, as \(r\) increases, higher contribution \(C\)s become viable, leading to a decrease in the density of cooperation compared to the homogeneous optimal case. For intermediate \(r\) values, a range of contributions can survive, which also increases with \(r\). Now, contributions closer to the optimal value survive for some samples, explaining the increase in the average density of cooperation. For high enough values of \(r\), only the highest contribution is favored and therefore survives alone, matching the homogeneous \(c=5\) curve. In Fig. 5 (b), we also display the contributions that survive for each sample. Interestingly, we observe four distinct regions that describe the individual contributions: for \(r<4.7\), only one contribution survives with defectors; for \(4.7<r<4.8\), two contributions can coexist; for \(4.8<r<4.9\), three contributions can coexist; for \(r>4.9\), either two contributions coexist or one survives alone. Despite this, all
Figure 4: Time evolution of cooperator clusters with \(c=0.5\) (light red), \(c=1.0\) (light blue) and \(c=2\) (dark blue) in the sea of defectors (red) on the square lattice for simulation times \(t=1,200,500,1000\) and \(10000\) (MCS). The top row illustrates the parasitism region (\(r=4.77\)), where high contribution cooperators (\(C_{2}\)) can only survive if they are in close proximity to low contribution \(C_{0.5}\). The bottom row shows the coexistence region (\(r=4.81\)), where the highest contribution cooperators, \(C_{2}\), survive both in contact with the \(C_{0.5}\) and on their own.
possible scenarios are worse than the homogeneous one at the optimal contribution value for cooperation below \(r=5\). This behavior is consistent for other distributions, including Gaussian and the uniform \(U(0,5)\) (data not shown). Therefore, in terms of equilibrium density and critical \(r\) value, heterogeneity can only harm cooperation compared to the optimal homogeneous case.
It is important to note that the results presented above are specific to the square lattice, but they can be expected for all lattices with a null clustering coefficient [33], where optimal noise values exist. For lattices with a non-null clustering coefficient, the deterministic scenario is known to be the best for cooperation in terms of the critical \(r\) value. Therefore, the best scenario for cooperation is expected to be achieved with a sufficiently high contribution, which corresponds to low noise and, consequently, to the smallest possible \(r_{c}\) value. Above this value, the dynamics will not change as the deterministic scenario has already been established. As discussed previously, Fig. 1 (b) confirms this behavior for the triangular lattice. For the heterogeneous case, high contribution cooperators will generally be preferable for cooperation since the deterministic case is the best. It is important to remember that higher contribution \(C\)s have an advantage over lower contribution \(C\)s. Now, the higher contribution \(C\)s that survive the initial steps and manage to cluster will always survive alone. Therefore, compared to a homogeneous case, heterogeneity will benefit cooperation (by reducing \(r_{c}\)) if it allows for higher contributions. However, compared to the optimal homogeneous case, heterogeneity will not matter since all the lower contributions will become extinct, and the higher contributions will also be in the optimal scenario.
## IV Conclusion
The concept of heterogeneity has become a crucial area of study in evolutionary game theory. Researchers have delved into the various benefits of heterogeneity in diverse situations, including those explored in the paper. While our work aimed to investigate the positive effects of heterogeneity, we also uncovered a potential downside. Unlike other papers, our study revealed how heterogeneity can impede cooperation. By examining both the positive and negative aspects of heterogeneity, our work contributes to a deeper understanding of the complexities of evolutionary game theory. The findings of our study can provide valuable insights into the optimization of cooperative behavior in various scenarios, helping to enhance the overall efficiency and effectiveness of social systems.
In summary, changing the contribution value of cooperators using the imitation update rule is essentially equivalent to scaling the noise. As there is an optimal noise value for sustaining cooperation, there exists an optimal homogeneous contribution for cooperators that minimizes the critical \(r\) value. We found that for regular lattices with a null clustering coefficient, heterogeneity can be detrimental to cooperators in the optimal scenario. However, for regular lattices with a non-null clustering coefficient, heterogeneity will not harm optimal cooperators. Therefore, we conclude that the homogeneous optimal scenario is the best for cooperation compared to the heterogeneous case studied in this work. This is due to the fact that higher contribution cooperators jeopardize smaller ones, and can even parasitize them.
Figure 5: Equilibrium density of cooperators as a function of \(r\) for different initial contribution distributions. We observe that for contribution ranges that encompass the optimal homogeneous case \(c_{0}\) and lower values (triangles), the equilibrium density is exactly the same as that of the homogeneous optimal case (solid black line). However, for distributions with contributions higher than the optimal (red circles), cooperation is inhibited for intermediate \(r\) values (\(4.4\lessapprox r<5\)), when compared to the optimal. Moreover, in this case, there is great variability among samples, as indicated by the large standard deviations. Here, \(U(a,b)\) represents a continuous uniform distribution in the range \([a,b]\). The inset shows the mean contribution value and its standard deviation across samples at equilibrium for the \(U(c_{0},5)\) case. We also show in (b) all contributions that survived for each sample for the same distribution. While for low \(r\) values only one contribution survives, for high enough values two or even three contributions start to survive together.
It is well established that the deterministic case is the best scenario for the classic PGG with group interactions for all types of regular lattices [33]. Therefore, we expect that all regular lattices will exhibit a behavior similar to that of the triangular lattice in our model for the PGG. Moreover, it has been shown that the Focal PGG can be mapped to the Prisoner's Dilemma game for certain parametrizations [9; 17]. However, in these approaches, different contributions are related to different group sizes, which must be considered when studying heterogeneity in the PD game.
## Credit authorship contribution statement
L. S. Flores contributed with Software and all authors contributed equally in Conceptualization, Formal analysis and Writing.
###### Acknowledgements.
L.S.F. thanks the Brazilian funding agency CAPES (Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior) for the Ph.D. scholarship. We used ChatGPT (chat.openai.com) and Grammarly (www.grammarly.com) to improve the quality of the written text in English. The simulations were performed on the IF-UFRGS computing cluster infrastructure.
## Appendix A
Here, we show that cooperators with the lowest contributions behave as defectors in their absence. We start from Eqs. (1) and (2) for the classical FPGG, where all cooperators equally contribute \(c\) and write the payoff difference for a given configuration
\[\Pi_{C}-\Pi_{D} =\frac{rc}{G}\left(N_{C}^{C}-N_{C}^{D}\right)-c \tag{10}\] \[=c\left(\frac{r}{G}\Delta N_{C}-1\right), \tag{11}\]
where \(\Delta N_{C}\) is the difference in the number of cooperators between the two groups.
Now, suppose that instead of \(C\) and \(D\), we have only two types of interacting cooperators, \(C_{c_{i}}\) and \(C_{c_{j}}\), where the subscripts denote their contribution value and \(\Delta c=c_{i}-c_{j}>0\). In this case, the payoffs are
\[\Pi_{C_{c_{k}}}=\frac{r}{G}\left(N_{c_{i}}^{k}c_{i}+N_{c_{j}}^{k}c_{j}\right)- c_{k}, \tag{12}\]
where \(N_{c_{m}}^{k}\) is the number of neighbors contributing \(c_{m}\) in the group where \(C_{c_{k}}\) is at the central site, for \(k\in\{i,j\}\). The payoff difference for this situation is
\[\Pi_{C_{c_{i}}}-\Pi_{C_{c_{j}}}=(c_{i}\Delta N_{c_{i}}+c_{j}\Delta N_{c_{j}}) \frac{r}{G}-\Delta c, \tag{13}\]
which generalizes Eq. 11 for two types of contributions.
Considering the same spatial configuration as in the \(C\)_vs_\(D\) case, but with defectors replaced by \(C_{c_{j}}\), and cooperators by \(C_{c_{i}}\) we have \(\Delta N_{c_{i}}=\Delta N_{C}\) by construction. To obtain \(\Delta N_{c_{j}}\), we recall that the total number of players in a group is \(G\) (including the focal player) and, therefore,
\[N_{c_{i}}^{k}+N_{c_{j}}^{k}=G. \tag{14}\]
Then, by subtracting one of these equations from the other (\(k\in\{i,j\}\)), we obtain
\[\Delta N_{c_{i}}=-\Delta N_{c_{j}}. \tag{15}\]
Therefore, \(\Delta N_{C}=\Delta N_{c_{i}}=-\Delta N_{c_{j}}\) always holds. Finally, with this result, we can rewrite Eq. 13 as
\[\Pi_{C_{c_{i}}}-\Pi_{C_{c_{j}}}=\Delta c\left(\frac{r}{G}\Delta N_{C}-1 \right). \tag{16}\]
By comparing Eqs. 11 and 16, we see that the lower contribution \(C_{c_{j}}\) behaves as a defector and the higher contribution \(C_{c_{i}}\) behaves as a cooperator whose contribution is \(\Delta c\). We illustrate this behavior in Fig. 6 for the case with only cooperators that contribute \(c=0.5\), \(c=1.0\) and \(c=2.0\) in the population (in the absence of defectors). We see that when \(C_{1}\) cooperators coexist with \(C_{0.5}\) we have \(\Delta c=0.5\) and therefore the density curve matches the homogeneous \(C_{0.5}\) with defectors case. When \(C_{2}\) cooperators coexist with \(C_{0.5}\), \(\Delta c=1.5\) and therefore the density curve matches the homogeneous \(C_{1.5}\) case.
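As a quick numerical illustration, the identity in Eq. 16 can be checked for an arbitrary group composition by comparing the payoff difference of Eq. 12 with the right-hand side of Eq. 16; the short NumPy sketch below is only a sanity check and is not part of the simulation code.

```python
import numpy as np

rng = np.random.default_rng(1)
r, G = 4.7, 5
c_i, c_j = 2.0, 0.5                        # two cooperator types, Delta c = 1.5

# random group compositions around the two focal players;
# N counts group members (focal player included) contributing c_i or c_j
N_ci_i = rng.integers(1, G + 1)            # group of the C_{c_i} focal player (at least itself)
N_cj_i = G - N_ci_i
N_ci_j = rng.integers(0, G)                # group of the C_{c_j} focal player
N_cj_j = G - N_ci_j

pay_i = r / G * (N_ci_i * c_i + N_cj_i * c_j) - c_i    # Eq. (12), k = i
pay_j = r / G * (N_ci_j * c_i + N_cj_j * c_j) - c_j    # Eq. (12), k = j

delta_Nc = N_ci_i - N_ci_j                 # Delta N_{c_i} = Delta N_C
lhs = pay_i - pay_j
rhs = (c_i - c_j) * (r / G * delta_Nc - 1)             # Eq. (16)
print(np.isclose(lhs, rhs))                # True for any composition
```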
Figure 6: Equilibrium density of \(C_{1}+C_{2}\) as a function of \(r\) for only cooperators in the population with contributions \(0.5\), \(1\) and \(2\). We observe in more detail the situation explored in Fig. 2, showing all transitions between cooperators, where the smaller contribution behaves as a defector and the other contributions as cooperators with different contributions than the original ones. |
2306.07165 | Explainable AI and Machine Learning Towards Human Gait Deterioration
Analysis | Gait analysis, an expanding research area, employs non-invasive sensors and
machine learning techniques for a range of applications. In this study, we
concentrate on gait analysis for detecting cognitive decline in Parkinson's
disease (PD) and under dual task conditions. Using convolutional neural
networks (CNNs) and explainable machine learning, we objectively analyze gait
data and associate findings with clinically relevant biomarkers. This is
accomplished by connecting machine learning outputs to decisions based on human
visual observations or derived quantitative gait parameters, which are tested
and routinely implemented in current healthcare practice. Our analysis of gait
deterioration due to cognitive decline in PD enables robust results using the
proposed methods for assessing PD severity from ground reaction force (GRF)
data. We achieved classification accuracies of 98% F1 scores for each
PhysioNet.org dataset and 95.5% F1 scores for the combined PhysioNet dataset.
By linking clinically observable features to the model outputs, we demonstrate
the impact of PD severity on gait. Furthermore, we explore the significance of
cognitive load in healthy gait analysis, resulting in robust classification
accuracies of 100% F1 scores for subject identity verification. We also
identify weaker features crucial for model predictions using Layer-Wise
Relevance Propagation. A notable finding of this study reveals that cognitive
deterioration's effect on gait influences body balance and foot landing/lifting
dynamics in both classification cases: cognitive load in healthy gait and
cognitive decline in PD gait. | Abdullah Alharthi | 2023-06-12T14:53:00Z | http://arxiv.org/abs/2306.07165v1 | # Explainable AI and Machine Learning Towards Human Gait Deterioration Analysis
###### Abstract
Gait analysis, an expanding research area, employs non-invasive sensors and machine learning techniques for a range of applications. In this study, we concentrate on gait analysis for detecting cognitive decline in Parkinson's disease (PD) and under dual-task conditions. Using convolutional neural networks (CNNs) and explainable machine learning, we objectively analyze gait data and associate findings with clinically relevant biomarkers. This is accomplished by connecting machine learning outputs to decisions based on human visual observations or derived quantitative gait parameters, which are tested and routinely implemented in current healthcare practice. Our analysis of gait deterioration due to cognitive decline in PD enables robust results using the proposed methods for assessing PD severity from ground reaction force (GRF) data. We achieved classification accuracies of 98% F1 scores for each PhysioNet.org dataset and 95.5% F1 scores for the combined PhysioNet dataset. By linking clinically observable features to the model outputs, we demonstrate the impact of PD severity on gait. Furthermore, we explore the significance of cognitive load in healthy gait analysis, resulting in robust classification accuracies of 100% F1 scores for subject identity verification. We also identify weaker features crucial for model predictions using Layer-Wise Relevance Propagation. A notable finding of this study reveals that cognitive deterioration's effect on gait influences body balance and foot landing/lifting dynamics in both classification cases: cognitive load in healthy gait and cognitive decline in PD gait.
**Keywords:** Deep convolutional neural networks (DCNN), deep learning, ground reaction forces (GRF), gait, interpretable neural networks, Parkinson's disease, perturbation.
## 1 Introduction
Human gait is the unique manner in which each individual walks. Gait involves a cyclical sequence of movements of both lower limbs that can be described as a series of transitions between states [1]. There is significant interest in sensing and recognizing human gait for various applications, including healthcare [2], [3], sports [4], [5], biometrics [6],[7],[8], and human-robot interaction [9], [10]. Gait provides
important information about a person's physical and physiological characteristics, such as weight, gender, health, and age.
The first focus of this work was on gait in Parkinson's disease (PD) patients. PD is a common neurodegenerative disorder caused by loss of neurons in the midbrain [11]. While a definitive PD diagnosis requires pathological examination, limited reliable neuropathological criteria prevent conclusive diagnosis during life. Historically, only 80-90% of clinical PD diagnoses have been confirmed at autopsy [12]. PD is associated with reduced life expectancy, partly due to the 25-40% of patients who develop dementia [11], [12]. Currently, PD diagnosis and severity rating rely primarily on clinical evaluation and subjective surveys using scales like the Unified Parkinson's Disease Rating Scale, Hoehn and Yahr staging, and Schwab and England activities of daily living [13]. Gait deviation is a hallmark of PD, and disease progression increases fall risk [11]. However, in early PD subtle gait changes may lead to inconclusive visual evaluation, partly because slow walking and short strides can also indicate age, mood, or other conditions [12]. Consistent with this, PD manifests as tremor, rigidity, and bradykinesia. Gait analysis aids PD diagnosis and tracking, though current methods are semi-subjective.
The second focus was on gait under cognitive load ("dual tasks"). Gait patterns vary within and between individuals due to factors like dual-tasking, environment, energy optimization, and emotional state. Dual tasking alters gait in all individuals, indicating higher-level cognitive input is required for gait [15], [16]. Humans continuously adjust their gait to minimize energy cost, even for small savings [17], [18],[19]. Emotional states like happiness or fear also impact gait [20]. However, gait inconsistency could enable biometric verification if signatures under cognitive load prove individually distinct. Gait analysis may detect and characterize age-related cognitive decline using inexpensive sensors, aiding diagnosis of mobility impairment, increased fall risk [21], [22], and disorders like Alzheimer's disease or vascular dementia [23].
The motivation here is to categorize cognitive load decline in PD and identify subjects using gait under cognitive load. However, gait is a sequence of periodic events that naturally vary slightly in all individuals due to temporary psychological or lasting physiological conditions. Therefore, visual observation, harmonic analysis, Fourier decomposition, or their combination may not adequately represent gait cycle nonlinearity and nonstationarity. Recent advances in deep learning offer an alternative for processing ground reaction force (GRF) signals to achieve reliable gait classification. Deep artificial neural networks (ANNs) using raw sensor data implement automatic feature extraction, avoiding subjective feature engineering [24]. This enables analyzing gait deterioration, though a known ANN limitation is opacity, hindering understanding [25] of predictions in terms of domain knowledge [26]. This limits feedback to improve sensor design and data processing. To address this, layer-wise relevance propagation (LRP) [26] relates ANN predictions to input data. By conservation of relevance [27], [26], it produces relevance maps attributing portions of predictions to raw input, identifying important areas. LRP shows success in image classification [28], [29] and gait-based subject identification [30].
This work presents a deep convolutional neural network (DCNN) to analyze GRF data and categorize PD and dual-task gait deterioration. We validate classification using LRP relevance scores to add noise. Sensor fusion uses CNNs to classify gait, and explainable CNNs [26] relate results to observable gait events, identifying the most relevant for each class. This uses the defined cyclic patterns in healthy gait to query what parts are essential for recognition and which act as background (irrelevant) in CNN processing. LRP interprets CNN predictions and identifies highest-weighted gait events for recognition.
## 2 Background
To enhance the understanding of our work, we provide an overview of the key concepts involved in analyzing gait. This includes a concise summary of the theoretical foundations of popular machine learning frameworks and the essential procedures for training, validating, and testing convolutional neural networks. Additionally, we introduce the Layer-Wise Relevance Propagation (LRP) approach, which improves the interpretability and explainability of artificial neural networks. Moreover, we provide a brief overview of the latest research on gait parameters and an update on the literature since our last review paper on this topic [24].
### Deep Convolutional Neural Networks (CNNs)
CNNs are state-of-the-art machine learning models that have proven effective in a variety of classification tasks, providing valuable insights into complex data. These networks can learn high levels of abstraction and features from large datasets by applying convolutional operations to the input data. CNNs are composed of convolution layers, perceptron layers, pooling layers, and normalization layers. A set of filters and weights are shared among these layers. The convolutional layers output a feature map that is automatically extracted from the raw input data, followed by a perceptron layer based on neurons that map the features to an output. Each convolutional layer is then followed by pooling layers that reduce computational cost by decreasing the size of the representation and making the convolution layer output more robust. A typical convolutional neural network is shown in Figure 1.
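For illustration, a minimal sketch of such a network in PyTorch is shown below, with two 2-D convolution blocks, pooling (subsampling), and a final perceptron layer as in Figure 1. The framework choice, filter counts, kernel sizes, and the assumed 100-frame by 19-channel input are illustrative assumptions, not the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    """Sketch of Figure 1: two 2-D convolution blocks with pooling, then a perceptron layer."""
    def __init__(self, n_classes: int, in_frames: int = 100, in_sensors: int = 19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            # flattened size valid for the default 100 x 19 input shape
            nn.Linear(32 * (in_frames // 4) * (in_sensors // 4), 64), nn.ReLU(),
            nn.Linear(64, n_classes),   # Softmax is applied in the loss and removed for LRP
        )

    def forward(self, x):               # x: (batch, 1, frames, sensors)
        return self.classifier(self.features(x))

model = GaitCNN(n_classes=4)            # e.g. healthy plus three PD severity levels
out = model(torch.randn(8, 1, 100, 19))
print(out.shape)                         # torch.Size([8, 4])
```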
### Convolutional Layer
The convolution operation is performed on the input data and a filter, or kernel, to produce a feature map, as shown in figure 1. In this process, the filter slides over the input data and performs convolution. Learning occurs at the kernel, whose values are updated during training. The output of a convolution layer therefore consists of several feature maps, one per kernel. An activation function is applied to produce nonlinear feature maps that can be optimized during training and pass valuable neuron values to the next layer. A mathematical representation of a convolution operation in one dimension, with an input vector \(x\), a kernel \(\omega\), iterators \(i\) and \(d\), and (\(\circ\)) denoting element-wise multiplication, can be expressed as \(C(i)\), where \(i\) is the index of an element in the new feature map (ch. 9 [31]):
\[C(i)=(\omega\circ x)[i]=\sum_{d}x(i-d)\,\omega(d) \tag{1}\]
Gait is captured as a two-dimensional signal, spatial and temporal, so the convolution operation in Eq. 1 can be extended to two dimensions. In this case, the spatiotemporal input is a large set of data points, and the kernel is a set of data smaller
Figure 1: Illustration of a convolutional neural network with two-dimensional convolutional blocks and subsampling as pooling operation followed by Perceptron layer.
in size than the input. The convolution operation then slides the kernel over the input, computes element-wise multiplications, and adds the values into a smaller feature map. With a 2-D input \(x\) and a 2-D kernel \(\omega\), where (_i,j_) and (_d,k_) are iterators, the mathematical representation of a convolution in two dimensions can be expressed as \(C(i,j)\), where (_i,j_) is the index of an element in the new feature map [31]:
\[C(i,j)=(\omega\circ x)[i,j]=\sum_{d}\sum_{k}x(i-d,j-k)\omega(d,k) \tag{2}\]
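For clarity, Eq. 2 can be written directly in a few lines of NumPy; the sketch below computes a "valid" convolution by flipping the kernel and sliding it over the input, and checks the result against SciPy. It is a didactic illustration, not the optimized convolution used inside deep learning frameworks.

```python
import numpy as np
from scipy.signal import convolve2d

def conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Eq. (2) as a 'valid' 2-D convolution: flip the kernel, then slide and sum."""
    wf = w[::-1, ::-1]                         # kernel flip turns correlation into convolution
    D, K = w.shape
    H, W = x.shape
    out = np.zeros((H - D + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + D, j:j + K] * wf)
    return out

x = np.random.randn(6, 5)                      # toy spatiotemporal patch
w = np.random.randn(3, 3)                      # toy kernel
assert np.allclose(conv2d(x, w), convolve2d(x, w, mode="valid"))
```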
### Backpropagation
Backpropagation, short for "backward propagation of errors", is an algorithm based on gradient descent. The method moves in reverse order from the output layer to the input layer while calculating the gradient of the error function with respect to the network weights; the aim is to minimize \(J(\theta)\) by finding an optimal set of parameters \(\theta\). It relies on computing the partial derivatives of the cost function, expressed as \(\frac{\partial}{\partial\theta_{i,j}^{(l)}}J(\theta)\). The error at the output layer \(L\) is computed as \(\delta^{(L)}=\alpha^{(L)}-y\), where the error of node \(j\) in layer \(l\) is denoted \(\delta_{j}^{(l)}\), the activation of node \(j\) of layer \(l\) is denoted \(\alpha_{j}^{(l)}\), and \(y\) is the target output. The backpropagated error for a hidden layer \(l\) can then be expressed as:
\[\delta^{(l)}=((\theta^{(l)})^{T}\delta^{(l+1)})\circ\alpha^{(l)}\circ(1-\alpha^{(l)}) \tag{3}\]
Here the \(\delta\) values of layer \(l\) are calculated by multiplying the \(\delta\) values of the next layer (moving in the reverse direction) with the \(\theta\) matrix of layer \(l\), where \(T\) denotes the matrix transpose. We then perform an element-wise multiplication (\(\circ\)) with \(g^{\prime}\), the derivative of the activation function, evaluated at the layer input \(z^{(l)}\), where \(g^{\prime}\left(z^{(l)}\right)=\alpha^{(l)}\circ(1-\alpha^{(l)})\).
The partial derivatives needed for backpropagation are obtained by multiplying the activation values and the error values for each training example \(t\), where \(m\) is the number of training examples:
\[\frac{\partial}{\partial\theta_{i,j}^{(l)}}J(\theta)=\frac{1}{m}\sum_{t=1}^{m}\alpha_{j}^{(t)(l)}\,\delta_{i}^{(t)(l+1)} \tag{4}\]
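A minimal NumPy sketch of one backpropagation step for a single training example (\(m=1\)) and one hidden layer is given below; the layer sizes are illustrative and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy network: 4 inputs -> 3 hidden -> 2 outputs, sigmoid activations throughout
rng = np.random.default_rng(0)
theta1 = rng.normal(size=(3, 4))     # weights of layer 1
theta2 = rng.normal(size=(2, 3))     # weights of layer 2

x = rng.normal(size=(4,))            # one training example
y = np.array([1.0, 0.0])             # one-hot target

# forward pass
a1 = sigmoid(theta1 @ x)             # hidden activations a^(l)
a2 = sigmoid(theta2 @ a1)            # output activations a^(L)

# backward pass (Eq. 3): output error, then propagate to the hidden layer
delta2 = a2 - y                                  # delta^(L)
delta1 = (theta2.T @ delta2) * a1 * (1.0 - a1)   # delta^(l)

# gradients (Eq. 4 with m = 1) and a gradient-descent step
grad2 = np.outer(delta2, a1)
grad1 = np.outer(delta1, x)
lr = 0.1
theta2 -= lr * grad2
theta1 -= lr * grad1
```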
### Evaluation Measure
A widely used evaluation tool for gait analysis is the confusion matrix [32]. It is a table that visualizes the number of predictions classified correctly and wrongly for each class. The table consists of true positive, true negative, false positive, and false negative classification occurrences. One advantage of the confusion matrix display is that it makes the decision confusions straightforward to identify, which can also hint at the quality of the data involved. It shows each class prediction as follows:
**True positive,** TP: It is the number of positive classes correctly predicted as positive.
**True negative,** TN: It is the number of negative classes correctly predicted as negative.

**False positive,** FP: It is the number of negative classes wrongly predicted as positive.

**False negative,** FN: It is the number of positive classes wrongly predicted as negative.
From this confusion matrix table, the number of predictions classified correctly and wrongly are used to calculate different rates of measure to evaluate the performance of a machine learning model. Performance measures, such as accuracy,
recall, precision and F1 values of a model can be defined using the following equations.
**Accuracy**: indicator of the ratio between the correctly predicted data to total number of samples in the dataset, defined as: \(\frac{\boldsymbol{TP+TN}}{\boldsymbol{TP+TN+FP+FN}}\)
**Recall**: the proportion of positive classes identified correctly, defined as: \(\frac{\boldsymbol{TP}}{\boldsymbol{TP+FN}}\)
**Precision**: the fraction of positive cases correctly identified over all the positive cases predicted, defined as: \(\frac{\boldsymbol{TP}}{\boldsymbol{TP+FP}}\)
**F1 Score**: the harmonic mean of Precision and Recall, defined as: \(\frac{\boldsymbol{2\cdot Precision\cdot Recall}}{\boldsymbol{Precision+Recall}}\)
There are popular evaluation measures used for classification problems such as Area Under the Curve (AUC) and Receiver Operating Characteristic (ROC). In this thesis we use the confusion matrix over the Area Under the Curve (AUC) because the number of TP, TN, FP and FN samples are values of interest to understand the confusion in gait classes for further analysis using LRP.
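For reference, the confusion-matrix counts and the measures defined above can be computed directly from the predicted and true labels; the short binary-class NumPy sketch below is illustrative only (it does not guard against empty classes).

```python
import numpy as np

def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """Confusion-matrix counts and the derived measures used in this work."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    recall    = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1        = 2 * precision * recall / (precision + recall)
    return dict(TP=tp, TN=tn, FP=fp, FN=fn,
                accuracy=accuracy, recall=recall, precision=precision, f1=f1)

print(binary_metrics(np.array([1, 1, 0, 0, 1]), np.array([1, 0, 0, 1, 1])))
```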
### Layer-Wise Relevance Propagation (LRP)
LRP [25], [26][27] is a backward propagation method which identifies which parts of the ANN input vector carry most weight in the model prediction. In this thesis we quantify the contribution of a single component of an input \(x_{t}\) (in our case, a sensor signal at a specific time frame) to the prediction of \(f_{c}(x)\) (\(c\) denote a class of gait) made by the DCNN classifier \(f\). The outputted gait class prediction is redistributed to each intermediate node via backpropagation until the input layer. The LRP outputs a "heat map" over the original signal to highlight the signal sections with the highest contributions to the model prediction, e.g., the data sections with the maximum variability given the classes. We first note that a neural network consists of multiple layers of neurons (feature maps in the case of a convolution layer), where neurons are activated as follows [26]:
\[a_{k}=\sigma(\sum_{j}a_{j}\omega_{jk}\,+\,b_{k}) \tag{5}\]
Here, \(a_{k}\) is the neuron activation and \(a_{j}\) is the activation of a neuron in the previous layer in the forward direction; \(\omega_{jk}\) denotes the weight received in the forward direction by neuron \(k\) from neuron \(j\) in the previous layer, and \(b_{k}\) is the bias. The sum is computed over all neurons \(j\) that are connected to neuron \(k\). \(\sigma\) is a nonlinear, monotonically increasing activation function. These activations, weights, and biases are learned by the DCNN during supervised training. During training, the output \(f_{c}(x)\) is evaluated in a forward pass and the parameters (\(\omega_{jk}\), \(b_{k}\)) are updated by back-propagating the model error. For the latter, we base our computations on categorical cross entropy [33].
The LRP approach decomposes the DCNN output for a given prediction function of gait class \(c\) as \(f_{c}\) for input \(x_{t}\) and generates a "relevance score" \(R\) for the \(i^{\text{th}}\) neuron received from \(R_{j}\) for the \(j^{\text{th}}\) neuron in the previous layer which is received from \(R_{k}\), for the \(k^{\text{th}}\) neuron in the lower layer (see figure 2), where the relevance conservation principle is satisfied as:
\[\sum_{i}R_{i\gets j}\,=\,\sum_{j}R_{j\gets k}\,=\,\sum_{k}R_{k}\,=\,f _{c}(x) \tag{6}\]
The LRP starts at the DCNN output layer after removing the _Softmax_ layer. In this process, a gait class \(c\) is selected as an input to LRP, and the other classes are eliminated. For pooling layers, the backward (unpooling) step is computed by redirecting the signal to the neuron for which the activation was computed in the forward pass. As a generalization, consider a single output neuron \(i\) in one of the model layers, which receives a relevance score \(R_{j}\) from a lower-layer neuron \(j\), or from the output of the model (class _c_). The scores are redistributed between the connected neurons throughout the network layers, based on the contribution of the input signals \(x_{i}\), using the activation function (computed in the forward pass and updated by back-propagation during training) of neuron \(j\), as shown in figure 2. The latter holds a certain relevance score based on its activation and passes its value to consecutive neurons in the reverse direction. Finally, the method outputs relevance scores for each sensor signal at a specific time frame. These scores represent a heat map, where high relevance scores at specific time frames highlight the areas that contributed the most to the model classifications. There are other propagation rules, such as the \(\alpha\beta\)-rule [26].
\[R_{j}=\sum_{k}(\alpha\ \ \frac{a_{j}\omega_{jk}{}^{+}}{\sum_{j}a_{j}\omega_{jk}{}^{ +}}-\beta\ \frac{a_{j}\omega_{jk}{}^{-}}{\sum_{j}a_{j}\omega_{jk}{}^{-}})R_{k} \tag{7}\]
where each sum corresponds to a relevance message \(R_{j\gets k}\), and \(a_{j}\omega_{jk}{}^{+}\) and \(a_{j}\omega_{jk}{}^{-}\) denote the positive and negative parts of \(a_{j}\,\omega_{jk}\), respectively. The parameters \(\alpha\) and \(\beta\) are chosen so that \(\alpha-\beta=1\) and \(\beta\geq 0\). A propagation rule can be obtained by selecting \(\beta=0\), which results in the following rule:
\[R_{j}=\sum_{k}\frac{a_{j}\omega_{jk}{}^{+}}{\sum_{j}a_{j}\omega_{jk}{}^{+}}R_{k} \tag{8}\]
There are other stabilizing terms that can be used to avoid divisions by zero, as explained in [26],[27]. For the LRP-\(\gamma\) rule, let the neuron interconnection be as follows [30]:
\[a_{k}=\max\left(0,\sum_{0,j}a_{j}\,\omega_{jk}\right) \tag{9}\]
Figure 2: DNN and LRP signal processing flow. Red arrows indicate the relevance propagation flow [26].
Here, \(a_{j}\) denotes an input activation and \(\omega_{jk}\) denotes the weight connecting neuron \(j\) to neuron \(k\) in the layer above. The sum is computed over all neurons \(j\) in the lower layer plus a bias term \(\omega_{0k}\) with \(a_{0}=1\). The LRP-\(\gamma\) rule, as shown in figure 3, is given by:
\[R_{j}=\sum_{k}\frac{a_{j}(\omega_{jk}+\gamma\omega_{jk}^{+})}{\sum_{0,j}a_{j}( \omega_{jk}+\gamma\omega_{jk}^{+})}\ R_{k} \tag{10}\]
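As an illustration, the LRP-\(\gamma\) redistribution of Eq. 10 for a single dense layer with ReLU activations can be written as a short NumPy function; the small stabilizer added to the denominator and the function names are assumptions of this sketch, which is not the implementation used for the CNN in this work.

```python
import numpy as np

def lrp_gamma(a_j: np.ndarray, w_jk: np.ndarray, b_k: np.ndarray,
              R_k: np.ndarray, gamma: float = 0.25, eps: float = 1e-9):
    """LRP-gamma rule (Eq. 10) for one dense layer with ReLU activations.
    a_j : activations entering the layer, shape (J,)
    w_jk: weights, shape (J, K); b_k: biases, shape (K,)
    R_k : relevance arriving from the layer above, shape (K,)
    Returns R_j, the relevance redistributed to the layer's inputs."""
    w_mod = w_jk + gamma * np.maximum(w_jk, 0.0)      # w + gamma * w^+
    b_mod = b_k + gamma * np.maximum(b_k, 0.0)        # bias treated as the 0-th neuron (a_0 = 1)
    z_k = a_j @ w_mod + b_mod + eps                   # denominator of Eq. 10, lightly stabilized
    s_k = R_k / z_k
    return a_j * (w_mod @ s_k)                        # sum over k of a_j (w + gamma w^+) s_k

# relevance conservation check for a random layer (bias set to zero)
rng = np.random.default_rng(0)
a = np.maximum(rng.normal(size=5), 0)
W = rng.normal(size=(5, 3))
R_upper = np.abs(rng.normal(size=3))
R_lower = lrp_gamma(a, W, np.zeros(3), R_upper)
print(R_lower.sum(), R_upper.sum())    # approximately equal, as required by Eq. 6
```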
### Gait Parameters
Gait can be perceived as a transformation of brain activity into muscle contraction patterns resulting in a walking sequence. It is a chain of commands generated in the brain and transmitted through the spinal cord to activate the lower neural centers, which consequently results in muscle contraction patterns assisted by sensory feedback from joints, muscles and other receptors to control the movements. This results in the feet recurrently contacting the ground surface to move the trunk and lower limbs in a coordinated way, delivering a change in the body center-of-mass position.
Gait is a sequence of periodic events characterized as repetitive cycles for each foot [27]. Each cycle is divided into two phases (see figure 4):
Figure 3: Illustration of the LRP propagation procedure applied to a neural network. The prediction at the output is propagated backward in the network, using various propagation rules, until the input features are reached. The propagation flow is shown in red [30].
a) **Stance Phase** (approximately 60% of the gait cycle, with the foot in contact with the ground). This phase is subdivided into four intervals (**A, B, C, D**).
b) **Swing Phase** (approximately 40% of the gait cycle with the foot swinging and not in contact with the ground). This phase is subdivided into three intervals (**E, F, G**).
**Stance Phase:**
**A.**: Heel strike or Initial contact: It starts the moment the foot touches the ground, and it is the initial double-limb support interval. In the case of the right foot leading, the double support starts with left foot being on the ground when the right foot heel makes initial contact and finishes when the left foot leaves the ground with the left toe-off prepared to swing. At the end of this interval, the body weight is completely shifted onto the stance (leading) limb. This term is adopted in clinical psychology to denote the contact of the heel of the extended limb with the walking surface.
**B.**: Loading response or Foot flat: This is a single support interval following the initial double support interval. The bodyweight is transferred on to the supporting limb. The trunk is at its lowest position, the knee is flexed, and a plantarflexion occurs at the ankle.
**C.**: Mid-stance: This is a single support interval between opposite toe-off and heel-off. It starts from elevation of opposite limb until both ankles are aligned in coronal plane. The trunk is in its highest point and slowing its forward speed. The body center-of-mass is aligned with the forefoot (ball of the foot).
Figure 4: Important gait events and intervals in a normal gait cycle.
Recent work on wearable and floor sensors has addressed medical applications such as the impact of muscle fatigue on gait characteristics [38], health monitoring [39] and age-related differences [40]. In recent work on wearable sensors by Turner et al. [41], an LSTM network is proposed to process and classify pressure sensor signals. The sensors were placed inside the shoes and participants were asked to walk eight walking trails. The aim of this study was to analyze artificially induced gait alterations. The results are promising for potential use in the diagnosis of gait abnormalities or other neuromuscular movement disorders in patients. In other recent work on wearable sensors, Tran et al. [42] proposed multi-model LSTM and CNN networks to classify IMU spatiotemporal signals. Using a hybrid network, the proposed models outperformed previous results on the whuGAIT [43] and OU-ISIR [44] datasets.
## 3 Material and Methodology
### Parkinson's disease data
To assess GRF data from PD patients we used the open access benchmark from PhysioNet.org [48]. It consists of data from 93 PD patients (mean age: 66.3 years; 63% men), as detailed in table 1, with different levels of PD progression, as detailed in table 2. Data from 73 healthy controls (mean age: 66.3 years; 55% men) are also present. The dataset consists of GRF measurements collected as participants walked for approximately two minutes. Each subject had eight sensors placed underneath each foot to measure force [N] as a function of time. The output of the 16 sensors was recorded at 100 frames per second. Also, the sum of the eight sensors of each foot is added to each subject sample, together with the timestamp, yielding 19 columns in total. The dataset was collected by three research groups, namely the Ga group [45], Ju group [46] and Si group [47], with the sub-parts of the dataset named after these groups. The Ju and Si groups recorded usual walking at a self-selected speed. The Ga group repeated this and included additional samples for each subject, where they performed a dual task while walking [45].
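A minimal sketch of how one such record can be read and segmented into fixed-length windows for a CNN is shown below; the file name, the window length, and the assumed column order (time stamp first, followed by the 16 individual sensors and the two per-foot totals) are illustrative.

```python
import numpy as np

def load_grf(path: str, fs: int = 100, window_s: float = 5.0) -> np.ndarray:
    """Read one whitespace-separated GRF record (19 columns) and cut it into windows."""
    data = np.loadtxt(path)                 # shape: (n_frames, 19)
    signals = data[:, 1:]                   # drop the time-stamp column -> 18 channels
    win = int(fs * window_s)
    n = signals.shape[0] // win
    # split the ~2 min walk into fixed-length windows for the CNN
    return signals[: n * win].reshape(n, win, signals.shape[1])

# windows = load_grf("GaPt03_01.txt")       # illustrative file name; e.g. (24, 500, 18) for 2 min
```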
### Cognitive Load Data
The iMAGiMAT footstep imaging system is an original Photonic Guided-Path Tomography floor sensor head [49],[50],[51],[52]). It can record unobtrusively temporal samples from a number of strategically placed distributed POF sensors on
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Subjects & Number & Male & Female & Group \\ \hline PD patients & 29 & 20 & 9 & Ga [45] \\ Healthy Subjects & 18 & 10 & 8 & Ga [45] \\ PD patients & 29 & 16 & 13 & Ju [46] \\ Healthy Subjects & 26 & 12 & 14 & Ju [46] \\ PD patients & 35 & 22 & 13 & Si [47] \\ Healthy Subjects & 29 & 18 & 18 & Si [47] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset subjects' description.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Severity (0) (Healthy) & Severity (2) & Severity (2.5) & Severity (3) & Group \\ \hline
18 & 15 & 8 & 6 & Ga[45] \\
26 & 12 & 13 & 4 & Ju [46] \\
29 & 29 & 6 & 0 & Si [47] \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of subjects with the severity rating.
top of a deformable underlay of a commercial retail floor carpet. Each sensor comprises a low-cost POF (step-index PMMA core with fluorinated polymer cladding and polyethylene jacket, total diameter 1mm, NA=0.46) terminated with a LED (Multicomp OVL-3328 625nm) at one end and a photodiode (Vishay TEFD4300) at the other. The sensors constitute a carefully designed set to allow collaborative sensor fusion and deliver spatiotemporal sampling adequate for discerning gait events. The 1m x 2m area system comprises 116 POF sensors, arranged in three parallel plies, sandwiched between the carpet top pile and the carpet underlay: a lengthwise ply with 22 POF sensors at 0\({}^{\circ}\) angle to the walking direction and two independent plies, each consisting of 47 POF sensors, arranged diagonally at 60\({}^{\circ}\) and -60\({}^{\circ}\) respectively (see [49], figure 6 therein). The electronics is contained in a closed hard-shell periphery at carpet surface level and is organised in 8-channel modules: LED driver boards as well as input transimpedance amplifier boards receive the data and send it to a CPLD (complex programmable logic device), which reformats the data for processing by a Raspberry Pi single-board computer for export via Ethernet/WiFi. The operational principle of the system is based on recording the deformation caused by the GRF variations, as the light intensity transmitted by the POF sensors is affected by surface bending. This captures the specifics of foot contact and generates robust data without constraints of speed or positioning anywhere on the active surface.
Twenty-one physically active subjects aged 20 to 40 years, 17 males and 4 females, without gait pathology or cognitive impairment, participated in this experiment. The study was carried out under the University of Manchester Research Ethics Committee (MUREC), ethical approval number 2018-4881-6782. All participants were informed about the data recording protocol in accordance with the ethics board general guidelines, and each subject's written consent was obtained prior to the experiments. Each participant was asked to walk normally, or while performing cognitively demanding tasks, along the 2 m length direction of the iMAGiMAT sensor head. The captured gait data is unaffected by starting and stopping, as the walk is padded on both ends with several unrecorded gait cycles before the first footfall on the sensor. With a capture rate of 20 timeframes/s (each timeframe comprising the readings of all 116 sensors), experiments yielded 5 s long adjacent time sequences, each containing 100 frames. The recorded spatiotemporal gait signals captured around 4 to 5 uninterrupted footsteps at each pass.
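For illustration, the sketch below arranges samples of this form (100 timeframes by 116 sensors per 5 s trial) into a CNN input tensor with per-subject labels for identity verification; the random array merely stands in for the actual recordings.

```python
import numpy as np

# One iMAGiMAT gait sample: 5 s at 20 timeframes/s = 100 frames, each frame holding
# the 116 POF sensor readings, i.e. a 100 x 116 spatiotemporal matrix per trial.
n_subjects, n_manners, n_trials = 21, 5, 10
frames, sensors = 100, 116

# illustrative placeholder for the recorded dataset (the real recordings replace this)
recordings = np.random.rand(n_subjects, n_manners, n_trials, frames, sensors)

# flatten to (samples, 1, frames, sensors) for a 2-D CNN and build per-subject labels
X = recordings.reshape(-1, 1, frames, sensors)
y = np.repeat(np.arange(n_subjects), n_manners * n_trials)
print(X.shape, y.shape)    # (1050, 1, 100, 116) (1050,)
```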
Five manners of walking were defined as normal gait plus four different dual tasks, and experiments were recorded for each subject, with 10 gait trials for each manner of walking in a single assessment session; thus the total number of samples is 10 \(\times\) 5 = 50 per-subject. The five manners of walking are defined as follows:
* Normal gait (M1): the subject walks normally, without any additional task.
* Dual-task gait (M2β€“M5): four cognitively demanding tasks performed while walking; as discussed in section 5, these include walking while texting on a smartphone and walking while talking.
### Perturbation
Human gait is inconsistent between individuals, and even for a single individual, requiring models to be reliable and robust to variance in the input data. Applied to the LRP analysis, the interpretation of important input data points needs to be robust to noise and variance in the input data stream. Considering this, a random perturbation noise analysis on the LRP relevance score can aid in the choice of LRP method, as well as in designing a DCNN model resilient to noise due to the inconsistency of gait. This perturbation analysis is achieved as described in the following sections.
The "greedy" iterative procedure proposed in [53] allows choosing the appropriate LRP method and evaluating the quality of the gait classification relevance
Figure 5: Proposed DCNN architectures; a) Single DCNN, b) Parallel DCNN, c) Quadruplet DCNN. The color-coding of boxes: convolution layers and fully connected layers (blue); pooling layers (green); concatenation layers and flattening layers (brown); dropout layers (navy); input at the top and softmax output layers (gray). The diagrams are generated using Netron (GitHub repository) based on the models' weights and biases.
scores. This is achieved by step-wise removal of information from the spatiotemporal input signal. At each step, regions with the highest relevance scores are replaced by Gaussian noise following a "most relevant first" (MoRF) approach [53]. The change in the model performance is then monitored at each step by running the model to re-predict the test data with the perturbation accumulated up to that step. The most desirable LRP method is the one with the strongest drop in accuracy in the first few steps [187], where the most relevant information is removed by the perturbations, and a slower decline thereafter as less important regions are removed. The accuracy drop is quantified using the area over the most relevant perturbation curve (AOPC) [53].
In terms of assessing the significance of the DCNN model architecture, this task is achieved by progressively removing the regions with the highest relevance scores yielded by the best LRP method selected as described above, and testing the model performance by re-predicting the test data for each model. Models which substantially drop in performance after only a few perturbation steps are considered the most amenable to exploiting LRP. This is because the decline in performance allows us to assert that those few removed regions are critical for accurate classification, and is therefore indicative of meaningful relationships between input patterns and learnt classes. In contrast, if removing a region has little impact on the classification performance, the implication is that the region is of lesser interest in terms of seeking such relationships.
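The MoRF protocol described above can be summarized by the following minimal Python sketch. It is a simplified illustration, not the exact implementation used in this work: the `analyzer` object stands for any LRP backend (such as the iNNvestigate analyzers introduced in section 4), and the region size, number of steps and noise parameters are placeholders.

```python
import numpy as np

def most_relevant_region(rel_map, region=(7, 7)):
    """Return the top-left corner of the window with the largest summed relevance."""
    h, w = rel_map.shape[:2]
    best, best_rc = -np.inf, (0, 0)
    for r in range(h - region[0] + 1):
        for c in range(w - region[1] + 1):
            s = rel_map[r:r + region[0], c:c + region[1]].sum()
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc

def morf_perturbation_curve(model, analyzer, x_test, y_test,
                            region=(7, 7), n_steps=30, sigma=1.0):
    """Most-Relevant-First perturbation: at each step replace the currently most
    relevant region of every sample with Gaussian noise, re-predict, and record
    the accuracy; the area over the curve (AOPC) quantifies the accuracy drop."""
    x_pert = x_test.copy()
    relevance = analyzer.analyze(x_test)               # same shape as x_test
    acc0 = (model.predict(x_test).argmax(-1) == y_test.argmax(-1)).mean()
    accuracies = []
    for _ in range(n_steps):
        for i in range(len(x_pert)):
            r, c = most_relevant_region(relevance[i], region)
            sl = np.s_[r:r + region[0], c:c + region[1]]
            x_pert[i][sl] = np.random.normal(0.0, sigma, x_pert[i][sl].shape)
            relevance[i][sl] = -np.inf                 # never pick the same region again
        acc = (model.predict(x_pert).argmax(-1) == y_test.argmax(-1)).mean()
        accuracies.append(acc)
    aopc = float(np.mean(acc0 - np.array(accuracies)))
    return np.array(accuracies), aopc
```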
### Proposed DCNN Architectures
The classification of gait ground reaction force (GRF) signals is a challenging task that requires the use of advanced machine learning techniques. In previous work, the authors of this study experimented with several deep convolutional neural network (DCNN) models to process and classify spatiotemporal 3D matrices of raw sensor signals. The researchers' extensive experimentation led them to identify three different network architectures that showed promising results. The architectures are shown in Figure 5 and are detailed in the following sections.
#### 3.4.1 Dcnn
A 2D-DCNN model (figure 5(a)) built for PD severity classification consists of four convolutional layers, each followed by an average-pooling layer, and two fully connected layers, yielding a total of 10 stacked layers. The four convolutional layers have \(n\) channels each, assigning one frame of the input to a single convolutional layer channel. The convolutional layers use a stride of 1 and same-padding, and the average-pooling layers use a 2 x 2 filter.
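As a concrete illustration, a minimal Keras sketch of this single-stream architecture is given below. The number of filters per layer, the 3x3 kernel size and the size of the dense layer are not specified above and are therefore assumptions; only the overall layout (four Conv + AvgPool stages, two fully connected layers, dropout, softmax over the four severity classes) follows the description.

```python
from keras.models import Sequential
from keras.layers import Conv2D, AveragePooling2D, Flatten, Dense, Dropout

def build_single_dcnn(input_shape=(50, 15, 12), n_classes=4,
                      filters=(16, 32, 64, 128)):
    """Single-stream 2D-DCNN: 4 x (Conv -> AvgPool) + 2 fully connected layers."""
    model = Sequential()
    for i, f in enumerate(filters):
        kwargs = {"input_shape": input_shape} if i == 0 else {}
        model.add(Conv2D(f, (3, 3), strides=1, padding="same",
                         activation="relu", **kwargs))
        model.add(AveragePooling2D(pool_size=(2, 2), padding="same"))
    model.add(Flatten())
    model.add(Dense(128, activation="relu"))   # first fully connected layer
    model.add(Dropout(0.5))                    # regularization, cf. figure 5
    model.add(Dense(n_classes, activation="softmax"))
    return model
```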
#### 3.4.2 Parallel DCNN
This network architecture (figure 5(b)) has been proposed specifically in our previous work [150] to process GRF signals. It is inspired by inception neural network architectures [154]; the aim is to have filters with learnable parameters operate at the same level to recognize the salient parts of the sample. The network consists of two stages of parallel streams fused with concatenation layers, where each stream has its weights and biases initialized uniformly and updated during training via backpropagation. The network is topped with fully connected layers and a softmax layer, giving a total of 18 stacked layers.
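A possible Keras sketch of such an inception-style parallel design is shown below. The exact number of streams, filter counts and kernel sizes are not given in the text and are assumptions; the sketch only illustrates the idea of parallel streams with independent weights fused by concatenation.

```python
from keras.models import Model
from keras.layers import (Input, Conv2D, AveragePooling2D, Concatenate,
                          Flatten, Dense, Dropout)

def parallel_stage(x, filters):
    # parallel streams with independent weights, fused by concatenation
    s1 = Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    s2 = Conv2D(filters, (5, 5), padding="same", activation="relu")(x)
    s3 = AveragePooling2D((2, 2), strides=1, padding="same")(x)
    return Concatenate()([s1, s2, s3])

def build_parallel_dcnn(input_shape=(50, 15, 12), n_classes=4):
    """Two parallel stages fused by concatenation, topped with FC + softmax."""
    inp = Input(shape=input_shape)
    x = parallel_stage(inp, 16)
    x = AveragePooling2D((2, 2))(x)
    x = parallel_stage(x, 32)
    x = AveragePooling2D((2, 2))(x)
    x = Flatten()(x)
    x = Dense(128, activation="relu")(x)
    x = Dropout(0.5)(x)
    out = Dense(n_classes, activation="softmax")(x)
    return Model(inp, out)
```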
#### 3.4.3 Quadruplet DCNN
The quadruplet network shown in figure 5 (c) is an original model whose architecture is implemented with multiple parallel streams by generalizing Siamese [54] and triplet networks [55]. This network consists of convolutional, max-pooling and average-pooling layers, with each stream having its activations, weights, and biases initialized uniformly and updated separately via backpropagation. The goal of this network is to learn the spatial and temporal sensor signals separately and simultaneously (similar to the inception neural network [56]), using the two types of pooling layers. This allows the gait pattern to be captured and helps the network generalize to unseen data.
## 4 DCNN Implementation and Results
All algorithms for LRP computation are implemented in Python 3.7.3 using Keras 2.2.4, TensorFlow 1.14.0 and the iNNvestigate library [57]. All code is run on a desktop with an Intel Core i7-6700 CPU @ 3.4 GHz. After data standardization, the deep CNN models are applied to the datasets in order to test the validity of the algorithms for identifying gait signatures. We compare the CNN predictions to manually labelled ground truth in several experiments: PD severity staging, individual identity, and the changes to normal gait incurred by cognitive load. The models' classification performance is evaluated using confusion matrices, and the performance of the LRP methods is examined in detail in the discussion section.
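For reference, the sketch below shows how the LRP analyzers used in this work can be instantiated with iNNvestigate 1.x. The `model` and `x_test` variables stand for a trained Keras classifier (e.g. one of the architecture sketches in section 3.4) and standardized test samples; the bounds passed to the bounded Deep Taylor analyzer are assumed values.

```python
import innvestigate
import innvestigate.utils as iutils

# LRP is applied to the pre-softmax outputs of the trained classifier
model_wo_sm = iutils.model_wo_softmax(model)

analyzer_names = [
    "deep_taylor",
    "deep_taylor.bounded",              # requires input bounds
    "deconvnet",
    "guided_backprop",
    "lrp.sequential_preset_a_flat",     # LRP-SPF, the method finally selected
]

analyzers = {
    name: innvestigate.create_analyzer(
        name, model_wo_sm,
        **({"low": -3.0, "high": 3.0} if name == "deep_taylor.bounded" else {}))
    for name in analyzer_names
}

# Relevance maps have the same shape as the input samples
relevance = analyzers["lrp.sequential_preset_a_flat"].analyze(x_test)
```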
### Experiment (1) on PD Gait Data
#### 4.1.1 Data Pre-Processing
Each recording in the dataset contains 19 columns of data of varying length, as some subjects' gait was recorded for a longer time (12119 frames) than others (fewer than 1000 frames). In order to make the input data length consistent, the recordings were split into equal-sized parts of 500 frames, such that a single long recording is divided into several 500-frame chunks. The timestamp column was deleted as it carries no information about gait. The final sample size is 18 columns and 500 rows or frames, as shown in figure 6. This choice is justified since the gait cycle is approximately one second and each sample captures heel strike and toe off for both feet over five gait cycles. The input dataset is a tensor with dimensions \(m\times\)500\(\times\)18, where m = 2698 for the Ga group, 2198 for Ju and 1509 for the Si group (see example sequences in figure 6). The input is reshaped for the 2D-DCNN as K\(\times\)50\(\times\)15\(\times\)12, building upon our previous work with different inputs and algorithms [58].
Data standardization is performed as a pre-processing step to reduce the redundancy and dependency among the data, such that the estimated activations, weights, and biases will update similarly rather than at different rates during the training process. The standardization involves rescaling the distribution of values with mean at zero and rescaling the standard deviation to unity.
\[\widehat{x_{n,s}}=\frac{x_{n,s}-\mu(x_{n,s})}{\vartheta(x_{n,s})} \tag{11}\]
Here \(\widehat{x_{n,s}}\) is the rescaled PD data, \(\mu\) is the mean and \(\vartheta\) is the standard deviation. Next, the dataset is randomly split into 60% training, 20% hold-out validation and 20% testing, using a _random state_ parameter with different seeds.
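The chunking, standardization and splitting steps can be sketched as follows. Whether the standardization statistics are computed per sample or over the whole dataset is not stated above, so the per-sample variant used here is an assumption, as is the position of the timestamp column.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def make_samples(recording, chunk_len=500):
    """Split one long 19-column recording into 500-frame samples,
    dropping the timestamp column so that 18 signal columns remain."""
    signals = recording[:, 1:]                       # assume timestamp is column 0
    n_chunks = signals.shape[0] // chunk_len
    return signals[:n_chunks * chunk_len].reshape(n_chunks, chunk_len, 18)

def standardize(x):
    """Zero-mean, unit-variance rescaling (equation 11), here per sample."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sd = x.std(axis=(1, 2), keepdims=True)
    return (x - mu) / sd

# x: (m, 500, 18) tensor of all chunks, y: one-hot severity labels.
# Reshape for the 2D-DCNN input and split 60/20/20 with a fixed random state.
x2d = standardize(x).reshape(-1, 50, 15, 12)
x_train, x_tmp, y_train, y_tmp = train_test_split(x2d, y, test_size=0.4, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(x_tmp, y_tmp, test_size=0.5, random_state=42)
```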
#### 4.1.2 Feature Learning and Classifications
The three models are trained, validated, and tested separately, using a batch size of 200 samples per iteration; 200 epochs are found optimal to train the models, determined by trial and error (on the three datasets combined) using the categorical cross-entropy loss. The Adam method for stochastic optimization [59] is used to train the proposed models. The optimizer parameters are set as follows: \(\alpha\) = 0.002, \(\beta\)1 = 0.9, \(\beta\)2 = 0.999, \(\varepsilon\) = 1e-08, where \(\alpha\) is the learning rate, or the proportion by which weights are updated; \(\beta\)1 is the exponential decay rate for the first-moment estimates; \(\beta\)2 is the exponential decay rate for the second-moment estimates; and \(\varepsilon\) is a small number that avoids division by zero in the implementation.
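In Keras 2.2.4, this training configuration corresponds roughly to the following call; the `model` and data splits refer to the sketches above, while the hyper-parameters mirror the stated values.

```python
from keras.optimizers import Adam

model.compile(optimizer=Adam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(x_train, y_train,
                    batch_size=200, epochs=200,
                    validation_data=(x_val, y_val))
```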
The loss computed by the categorical cross-entropy in every iteration is used to validate the models and update the weights and biases. To improve the models' performance, regularization is utilized together with dropout, as shown in figure 5. The models are trained, validated and tested three times separately (once for each dataset) and four times (with the datasets combined) using different random states, to test the models' ability to classify the three datasets individually and combined. Accuracy is reported as confusion matrices in figure 7, together with precision, recall and F1-score. The mean performance and standard errors over the different random-state runs are reported in table 3.
\begin{table}
\begin{tabular}{c c c c|c c c c|c c} \hline \hline CNN Model & Ga & Ju & Si & A (Seed 42) & A (Seed 100) & A (Seed 200) & A (Seed 2020) & \(m\) & \(ST\) \\ \hline Single & 98\% & 98\% & 98\% & 95\% & 93\% & 96\% & 96\% & 95\% & 0.70\% \\ Parallel & 96\% & 97\% & 96\% & 96\% & 95\% & 95\% & 96\% & 95\% & 0.28\% \\ Quadruplet & 97\% & 97\% & 98\% & 95\% & 94\% & 94\% & 95\% & 94\% & 0.28\% \\ \hline \hline \end{tabular}
* A: Ga U Ju U Si; \(ST\): Standard error; \(m\): Mean performance
\end{table}
Table 3: Model F1-scores for each dataset, and F1-scores, mean and standard error with the datasets combined.
Figure 6: Example GRF data recorded at 100 frames per second for a healthy subject and subjects with PD severity ratings 2, 2.5, and 3, with a sample length of 500 timeframes, \(\hat{x}_{n,s}=[\text{x}_{1}\quad...\quad\text{x}_{18}]\in\mathbb{R}^{500\times 18}\). The signals with lower amplitude (\(\text{x}_{1}\,...\,\text{x}_{16}\), below 500 N) represent the pressure sensor signals under each foot (different colors for each of the 8 sensors). In each sample, the calculated sum of the 8 sensor outputs for each foot is also shown (\(\text{x}_{17}\) and \(\text{x}_{18}\), above 500 N; different colors for left and right foot).
#### 4.1.3 LRP and Model Selection
A number of LRP methods were tested with the three DCNN models. All the models returned the same results; therefore, we report the results from the single DCNN to identify the best performing backpropagation method implemented in the iNNvestigate GitHub repository: Deep Taylor [27], Deep Taylor bounded [60], deconvnet (deconvolution) [61], guided backprop (guided backpropagation) [62], and LRP sequential preset a flat (LRP-SPF) [60]. The DCNN classification accuracy is evaluated for each of the above LRP methods separately, by performing a sequence of perturbation steps as described in section 3.3 (progressively replacing MoRF regions of size 7x7, representing 0.544% (0.00544 = 7x7 / 50x15x12) of the input stream, with Gaussian noise), and observing for each LRP method the cumulative change in the model performance. The baseline for comparison is established by replacing regions of the input data with random Gaussian noise regions instead of replacing the regions selected by the LRP methods. Next, we subtract the LRP-guided accuracy from the accuracy obtained with the randomly replaced regions, to show only the LRP-driven accuracy change. As shown in figure 8, in our case the LRP curves recover after around the 15th perturbation step, because the remaining spatiotemporal regions are less and less relevant, and the baseline accuracy is reached around the 25th perturbation step, as all remaining regions are then unimportant for the classification. As expected, the exhibited rate of change is proportional to the importance of the information perturbed at each step.
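Building on the MoRF sketch from section 3.3 and the analyzers instantiated above, the LRP-versus-random comparison in figure 8 can be reproduced roughly as follows. The `RandomRelevance` helper is a hypothetical stand-in that makes the same routine replace randomly chosen regions for the baseline.

```python
import numpy as np

class RandomRelevance:
    """Hypothetical analyzer returning random relevance maps, so that
    morf_perturbation_curve replaces randomly chosen regions (baseline)."""
    def analyze(self, x):
        return np.random.rand(*x.shape)

baseline_acc, _ = morf_perturbation_curve(model, RandomRelevance(), x_test, y_test,
                                          region=(7, 7), n_steps=30)

# LRP-guided curves for every candidate method, reported relative to the baseline
relative_change = {}
for name, analyzer in analyzers.items():
    acc, aopc = morf_perturbation_curve(model, analyzer, x_test, y_test,
                                        region=(7, 7), n_steps=30)
    relative_change[name] = np.asarray(acc) - np.asarray(baseline_acc)
```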
Figure 8: LRP method selection by perturbation steps progressively removing information with the highest relevance scores. Steeper initial decrease indicates better identification of gait events with most weight in the classifications.
Figure 7: Modelsβ predictions on 1281 sample are shown as confusion matrices: a) Single DCNN, b) Parallel DCNN, c) Quadruplet DCNN.
The choice of the DCNN model most suitable for LRP is justified by utilizing the same MoRF protocol [53], whereby each of the three DCNN models is perturbed step-wise with Gaussian noise by progressively replacing MoRF regions of size 7\(\times\)7 and re-predicting the gait class for 100 steps. In contrast with the LRP method selection illustrated in figure 8, instead of comparing with a baseline, here the rate of decline in accuracy (with the mean removed to show only the rate of change) over subsequent perturbation steps is used to identify the model with the steepest drop, based on the 100 classification accuracies returned by each model, as manifested in figure 9. This prediction drop takes place at a faster rate in models where the classification uses data patterns in more compact regions within the gait cycle sequence. This allows more straightforward identification of the gait cycle events corresponding to such regions, with reference to the standard cycle presented in figure 4. Among the three proposed models, the Parallel DCNN (see figure 7.b) experiences the steepest
Figure 10: Gait events processed SA (see equation 13) signal top. The highlighted gray area in (a) is explained in (b) based on gait events for one foot from figure 4 as: A- Heel strike, B- Loading response or flat foot, C- Mid-stance or single support, D-Terminal stance or heel rising, E- Pre-swing or double-limb support, F- Initial swing and Mid-swing or toe-off, G-Terminal swing.
Figure 9: Semi-log plot of the perturbation effect on the proposed DCNNs architectures. The decline in accuracy results from progressively removing information from the input data based on LRP-SPF and re-predict, at each step, 100 steps total.
decrease in accuracy with perturbation, as figure 9 shows, and is therefore the preferred candidate for attempting the identification of gait events most vulnerable to gait deterioration due to PD. Table 3 summarizes the performance of the three DCNN models.
#### 4.1.4 Gait Event Assignment Using LRP
Gait GRF data take the form of periodic sequences which are characterized as repetitive cycles for each foot. We note that the normal gait cycle is initiated by the heel strike of one foot, followed by the other gait events described in figure 4, in strict order. Therefore, the LRP-generated heat map of the temporal variations in the GRF signal can reveal which events in the gait cycle are most relevant for the classifications. Consequently, gait event assignment is best performed on the data sequences in figure 6 after spatial averaging and standardization. A representative spatially averaged sensor signal sequence is shown in figure 10 (a) for a healthy subject. The highlighted gray area corresponds to one gait cycle, while the plotted signal is given by the Spatial Average (SA) metric, computed as:
\[SA[n]=\frac{1}{18}\sum_{i=1}^{18}x_{i}[n] \tag{12}\]
Figure 11: Perturbation with Gaussian noise based on LRP relevance scores, used for LRP and DCNN selection. Top plot: healthy gait processed GRF signals; Middle plot: GRF with perturbation noise; Bottom plot: LRP relevance scores for the selected LRP Sequential Preset a Flat (LRP-SPF) and Parallel DCNN. \(x_{i}=[x_{1}\quad...\quad x_{18}]\) represent the 18 signals after data standardization (see figure 6). The LRP plot is dominated by \(x_{18}\) because all the signals are plotted on top of each other in the temporal domain. LRP relevance scores are highly dependent on the temporal changes, whereas spatial variation does not affect the model prediction.
Here \(x_{i}\) are the readings from individual sensors and \(n\) enumerates the frames in each sample. Recall that each foot has 8 sensors attached (16 in total), and the two sums, one over the 8 sensors of each foot, are also available, giving 18 signals in total. Figure 10 (b) shows the expanded gait cycle from figure 10 (a) with the gait events color-coded and labelled as per figure 4.
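Equation (12) amounts to a per-frame mean over the 18 standardized signals; a minimal NumPy sketch is given below for completeness.

```python
import numpy as np

def spatial_average(sample):
    """Equation (12): mean over the 18 standardized signals at every frame.
    sample has shape (500, 18); the result is a 1-D sequence of length 500."""
    return sample.mean(axis=-1)
```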
The random data sample in figure 11 is used to illustrate the choice of the LRP method by the perturbation approach, as well as the LRP relevance scores obtained for the Parallel DCNN classifications. It shows the processed original GRF signals in the top plot; the middle plot shows the regions replaced with Gaussian noise, in view of the relevance scores shown in the bottom plot. While temporal data patterns yield classifications, the temporal maps of LRP scores highlight data intervals most significant for a given class. The plot of LRP scores consists of sharp peaks, well defined in the temporal domain, thus attributable to time-stamped gait events. Figure 12 displays the spatially averaged data signals for the four classes with their respective LRP score maps. The most prominent peaks are attributed to observable gait events, labelled in consistence with the gait cycle in figure 4. These are further discussed in section 5.
Figure 12: LRP method applied on randomly selected samples for the four PD severity ratings. SA of gait spatiotemporal signals: green; SA of LRP relevance scores over the same temporal period: blue. Vertical red bars with number labels display consistency with the gait events listed below, with capital letters as per figures 4 and 10: 1, 3 and 6 - Heel strike and foot flattening (A); 2 - Mid-stance and single support (C); 4 - Loading response after the double support interval (B); 5 and 8 - Terminal swing and ready for the heel strike (G); 7 - Initial swing and mid-swing or toe-off (F).
### Experiment (2) on Cognitive Load Gait Data
#### 4.2.1 Data Pre-Processing and Feature Learning and Classifications
In this experiment gait analysis is handled as a supervised learning process. Here, we propose a CNN model, based on the above extensive experimentation, as automatic feature extractor and classifier. The model shown in figure 5.a maps the gait spatiotemporal signal \(\widehat{x_{n,s}}\) to an output label \(y\) by learning an approximation function \(y=f\big{(}\widehat{x_{n,s}}\big{)}\).
Similar to experiment (1), the Adam optimizer is utilized to train and validate the model (for several experiments) using a batch size of 100 samples per iteration; 200 epochs are found optimal to train the model. The training and validation splits are set to 70% and 10% respectively, with 20% reserved for testing the model accuracy. The model is trained, validated and tested over several runs, with the data split using random state parameters with different seeds. The mean performance and standard error are used to report the accuracy as follows:
\[SE=\frac{\sqrt{\frac{\sum_{i=1}^{q}(P_{i}-\mu)^{2}}{q}}}{\sqrt{q}} \tag{13}\]

Here \(P_{i}\) is the performance of run \(i\), \(\mu\) is the mean performance over the runs, and \(q\) is the number of runs.
A set of measured data \(x_{n,s}=[x_{n,1}\quad...\quad x_{n,116}]\in\mathbb{R}^{n\times 116}\) is harvested from the iMAGiMAT system, where \(n\) is the number of frames in the data block (100 frames) and \(s\) enumerates the POF sensors. A total of 1050 samples are recorded for 21 subjects and placed in a 3D matrix of dimensions \(1050\times 100\times 116\). The recorded amplitude of the data varies due to the weight of each subject; therefore, data standardization is implemented as a pre-processing step to ensure that the data is internally consistent, such that the estimated activations, weights, and biases update similarly, rather than at different rates, during the training process and testing stage. The standardization involves rescaling the distribution of values to zero mean and unit standard deviation, as in equation (11).
The proposed model is trained, validated, and tested on m\(\times\)n\(\times\)s (m = number of samples, n = number of frames, s = number of POF sensors) spatiotemporal samples as K\(\times\)100\(\times\)116 for several runs using different _random state_ parameters. m is chosen on the basis of the experimental protocols, and the mean performance and standard error are used to calculate the accuracy. Experiments are conducted to investigate the ability of the deep CNN to identify gait signature patterns by fusing the 116 POF sensors
Figure 13: Gait signature classification confusion matrix for 21 subjects. The diagonal squares are the true positives (in this case 100% of classifications); the off-diagonal entries are the false positives (0%).
in the model's deep layers, to extract gait patterns automatically in the following experiments.
#### 4.2.1.1 (21) Subject gait signature verification
To demonstrate the model's ability to verify the identity of a subject based on their gait signature, we assigned each subject's data a label numbered from 0 to 20, each containing 50 samples of normal and cognitive-load gait as explained in section 3.3.1. The model is trained, validated and tested on m=1050 samples with different _random state_ parameters, and the mean performance and standard error are used to calculate the accuracy. The median classification confusion matrix is shown in figure 13, where the model achieved an F1-score of 100% and a mean performance and standard error of 99.5\(\pm\)0.28%. Figure 14 demonstrates the learning curve of the CNN over the iterations during training. We generate the training loss for each of the training sets and the validation loss for each of the validation sets over the epochs. Figure 14 (a) shows the average training and validation losses.
The training loss starts from 3 and gradually reduces to 0. The validation loss generally follows the training loss, with a few spikes, and stabilizes after 150 epochs. As expected, accuracy increases with decreasing loss, as demonstrated in figure 14 (b). The average training and validation accuracy stabilizes after 150 epochs, after a few spikes.
For additional testing of the model performance in real-life scenarios, we evaluate the model on imposter and client classification. The clients' data are used for the
\begin{table}
\begin{tabular}{c c c c|c c} \hline \hline Subject number & F1-score & Subject number & F1-score & Subject number & F1-score \\ \hline
0 & 95\% & 7 & 87\% & 14 & 100\% \\
1 & 65\% & 8 & 90\% & 15 & 75\% \\
2 & 93\% & 9 & 90\% & 16 & 80\% \\
3 & 90\% & 10 & 77\% & 17 & 100\% \\
4 & 87\% & 11 & 91\% & 18 & 100\% \\
5 & 91\% & 12 & 90\% & 19 & 80\% \\
6 & 73\% & 13 & 100\% & 20 & 69\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Models Classification Accuracy for Each Subject.
Figure 14: Model training and validation loss in (a) and model accuracy in (b) for the 21-subject gait signature verification task (21 classes).
model training and validation, and only 20% of that data is used for testing, while the imposters' data are only used at the testing stage. At the testing stage the model predicts the clients' gait sample identities with an F1-score of 100% and is unable to predict the imposters, which return a 0% F1-score. This is achieved by taking 17 subjects as clients (m=850 samples) and 4 subjects as imposters (m=200 samples). The clients' data were split 70%-10%-20% for training, validation and testing, respectively. In the testing stage, the model was able to correctly distinguish imposters and clients in 100% of cases.
#### 4.2.1.2 Gender Classification
To demonstrate the model's ability to recognize gait signatures, we perform a two-class classification based on the gender of the subject using the normal gait and cognitive-load samples. The model is trained and validated with 6 subjects (m=300 samples), including 3 males and 3 females. Model testing is performed by predicting the gait class of two new subjects (m=100 samples, never seen by the model); the male and female test subjects are selected randomly. In this experiment the deep CNN achieved an F1-score of 95%, with 96% true positive prediction for the male samples and 94% true positive prediction for the female samples.
#### 4.2.1.3 (21) Subjects Cognitive Load Classification
The aim of this experiment is to show that in healthy subjects the influence of cognitive load on gait varies from subject to subject and the normal gait can be predicted with higher true positive rates than predictions under cognitive load. Five types of gait signatures, normal and four cognitively demanding task patterns, are learned for 21 subjects. The performance observed for the 5 classes is shown in figure 15, as the median confusion matrix based on several runs with F1-score of 50% and mean performance and standard error of \(48.25\pm 1.03\%\).
The results show that normal gait is predicted with a true positive incidence of 92%, while there is notable confusion between the dual tasks performed by the 21 subjects. The different _random state_ parameters return the same result, where the normal gait true positive prediction is higher than 90% and there is substantial confusion between the dual-task cases. Figure 16 shows the CNN learning curves over the training iterations, where the training loss declines from 1.8 to 0 while the validation loss rises from 1.8 to around 4 over 200 epochs, resulting in the low validation accuracy seen in figure 16 (b) and evidencing severe overfitting.
\begin{table}
\begin{tabular}{l l l} \hline Classifier & Experiment 1 & Experiment 3 \\ \hline SGD & 77\% & 42\%, N=47\% \\ KNN & 87\% & 51\%, N=81\% \\ GPC & 5\% & 22\%, N=0\% \\ CNN & 100\% & 50\%, N=92\% \\ \hline \hline \end{tabular}
* N: true positive rate for the normal gait class.
\end{table}
Table 6: F1-Score Predictions for Comparison of CNN with Classical Classifiers.
#### 4.2.1.4 Single Subjects Cognitive Load Classification
In this experiment gait patterns are investigated within each subject, to show that each subject's gait under cognitive load can be learned and predicted. This is achieved by training, validating, and testing the CNN to classify each subject's gait pattern using the normal gait and cognitive-load samples. Each subject's data is split using _random state_ to cover all 5 classes for testing, with m=50 samples. The model evaluation using the F1-score is detailed for each subject in table 4. Gait data is predicted with an F1-score of 80% or higher for 16 subjects, and for the remaining 5 subjects F1-scores are between 65% and 77%.
#### 4.2.1.5 Binary Classification Under Cognitive Load
To study patterns for each of the 4 dual tasks (M2-M5) representing variants of cognitive load, we organize the data into four groups so that binary classification performance to distinguish between gait under normal (class 0) and cognitive load (one of classes 1, 2, 3 or 4, depending on the particular data group) conditions can be studied separately for each dual task. The CNN is trained 16 times, implementing 4 runs with each of the 4 data groups. The F1-scores for each run are shown in table 5. The first run in each data group is based on training and validating the CNN on 20 subjects and testing the model on 1 subject, to see if the gait of one person can be predicted from the other 20. In the second run, the numbers are 19 and 2, respectively; in the third, 17 and 4, respectively. The last run is based on splitting the data into 70% for training, 10% for validation and 20% for testing, using m=420 samples with a _random state_ of 200 seed parameters (since the accuracy doesn't change with the _random
Figure 16: Model loss (a) and training and validation accuracy (b) under cognitive load, 21 subjects and 5 classes.
Figure 15: Confusion matrix for classification under cognitive load: 21 subjects, 5 classes.
state seed_). As shown in table 5, the highest classification performance is achieved in the first runs (except for the group containing class 3). This result is used in the implementation of LRP to analyse the gait classes for the subject in the first run, as reported further after the comparison with statistical classifiers.
#### 4.2.2 Comparison With Statistical Classifiers
Gait signature recognition achieved high accuracy compared to cognitive load classification; therefore, the validity of the achieved classifications is verified with statistical classifiers. Here we compare the classification results achieved by the CNN in experiments 1 and 3 with statistical classifier algorithms such as Stochastic Gradient Descent (SGD) [198], K-Nearest Neighbors (KNN) [199], and Gaussian Process Classifier (GPC) [200]. To match the input format of the statistical classifiers, the data are flattened to length 11600 = 100\(\times\)116, with m = 1050 samples for experiments 1 and 3. The classification F1-scores for experiments 1 and 3 are shown in table 6. GPC fails in the true positive prediction of normal gait, while KNN achieves the best classification results among the statistical classifiers. However, the CNN outperforms the statistical classifiers for both gait signature recognition and normal gait prediction.
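The statistical baselines can be reproduced with scikit-learn roughly as below. Hyper-parameters are not reported in the text, so library defaults are assumed; the weighted F1 averaging is also an assumption.

```python
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import f1_score

# Flatten each 100 x 116 spatiotemporal sample to an 11600-dimensional vector
Xf_train = x_train.reshape(len(x_train), -1)
Xf_test = x_test.reshape(len(x_test), -1)

classifiers = {
    "SGD": SGDClassifier(),
    "KNN": KNeighborsClassifier(),
    "GPC": GaussianProcessClassifier(),
}

for name, clf in classifiers.items():
    clf.fit(Xf_train, y_train.argmax(axis=-1))
    pred = clf.predict(Xf_test)
    print(name, f1_score(y_test.argmax(axis=-1), pred, average="weighted"))
```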
#### 4.2.3 LRP Analysis of Gait Spatiotemporal Classifications
The focus of this section is to identify the features picked up by the model to classify gait under cognitive load. To obtain accurate LRP relevance scores \(R_{i}\), the model's true positive prediction should be high. Therefore, the gait classes with a high true positive rate are considered for LRP analysis. The learned CNN model parameters in experiments 1, 3 and 5 were frozen for LRP analysis. Experiment 4 checks whether there is variation in gait within a subject; therefore, it is not considered for LRP analysis. LRP Sequential Preset a Flat (LRP-SPF) was utilized for this work, as it showed the best sensitivity to gait inconsistency in the perturbation analysis of the PD case.
The iMAGiMAT system captures a sequence of periodic events as distinct but similar cycles for each foot. This spatiotemporal sequence is generated by the change of light transmission intensity in the POF sensors: \(x_{i}=[x_{1}\quad...\quad x_{116}]\in\mathbb{R}^{n\times 116}\). However, a typical interpretation of the gait cycle, based on visual observation, is
Figure 17: Representative gait cycle spatial average of a spatiotemporal signals (see equation 13). Gait events recorded by the sensors in a typical full gait cycle of two steps (figure 4: A,B,C,D,E,F,G): 1- Heel strike, 2- Foot-flattening, 3- Single support, 4- opposite Heel strike, 5- opposite Foot-flattening, 6- Double support, 7-Toe-off, 8- Foot swing, 9- Heel strike, 10- Double support, 11- Toe-off, 12- Foot swing, 13- opposite Heel strike, 14- Single support, 15-Toe-off.
derived much less from the spatial component than the temporal one.
Thus, to progress towards interpreting the CNN classifications in terms of observable gait events, we average over the spatial domain according to:
\[SA[n]=\frac{1}{116}\sum_{s=1}^{116}x_{n,s} \tag{14}\]
Here \(x_{n,s}\) are the readings from individual sensors \(s\) at a specific frame \(n\) within each sample and \(SA\) is the frame \(n\) spatial average calculated as the arithmetic mean over all sensors. Figure 17 displays a typical \(SA\) of the spatiotemporal gait signal, labelling the main gait events over a two-step gait cycle.
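The overlays shown in figures 18-21 can be produced by spatially averaging both the input sample and its LRP relevance map; a minimal matplotlib sketch is given below (the 20 frames/s rate follows the data collection description earlier, everything else is a plotting choice).

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_sa_with_relevance(sample, relevance, frame_rate=20):
    """Overlay the spatial average (equation 14) of a 100 x 116 gait sample
    with the spatial average of its LRP relevance map (cf. figures 18-21)."""
    sa_signal = sample.mean(axis=-1)        # (100, 116) -> (100,)
    sa_relevance = relevance.mean(axis=-1)
    t = np.arange(len(sa_signal)) / frame_rate
    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
    ax1.plot(t, sa_signal, color="black")
    ax1.set_ylabel("POF LI (SA)")
    ax2.plot(t, sa_relevance, color="blue")
    ax2.set_ylabel("LRP relevance (SA)")
    ax2.set_xlabel("time [s]")
    return fig
```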
Figures 18 and 19 display randomly chosen samples of single subjects, returning 100% true positive predictions for gait signature verification in experiment 1; figure 20 displays randomly selected samples of normal gait classified with 100% true positives in experiment 3; figure 21 shows predicted gait samples in experiment 5 for a subject never seen by the model when the training set is 20 subjects.
The top panels in figures 18, 19, 20 and 21 display calculated \(SA\) aligned against the relevance "heat map", generated from the calculated LRP scores and displayed in
Figure 18: LRP methods applied on a single subject from experiment 1 testing data (each column is one pair), to identify gait events relevant for the CNN prediction to classify the cognitive load impact on gait. SA of gait spatiotemporal signals: black; SA for LRP relevance signals over gait temporal period: blue; POF LI (Plastic Optical Fiber Light Intensity). Vertical red bars with numbers display correspondence to gait events as per figure 17: 1,5- Loading response or Foot flat and Double support, 2,3,4 - Loading response or Foot flat and Single support.
the bottom panels (to be discussed further in section 5). The SA temporal sequences have different values on the \(y\) axis due to the nature of the captured gait signal, which is influenced by the individual anthropometry of subjects.
\begin{table}
\begin{tabular}{c c c} \hline Reference & Methods & Accuracy [\%] \\ \hline E. Abdulhay et al. [63] & SVM & 92.7 \\ Y. N. Jane [64] & Q-BTDNN & 91.5 \\ ErtuΔŸrul et al. [65] & 1D-LBP+MLP & 88.89 \\ Medeiros et al. [66] & PCA & 81.00 \\ Wu et al. [67] & SVM & 84.48 \\ This work & Parallel 2D-DCNN & **95.5\(\pm\)0.28** \\ \hline \end{tabular}
* MLP: Multi-Layer Perceptron; PCA: Principal Component Analysis; Q-BTDNN: Q-back propagated time delay ANN.
\end{table}
Table 7: PD classification results on the three PhysioNet datasets.
Figure 19: Consistent with identifying gait events relevant for the CNN prediction, for random subjects from experiment 1 the gait events are: 1,4 - Loading response or foot flat and single support, 2,3 - Foot swing and opposite heel strike, 5 - Loading response or foot flat.
## 5 Discussion
The study presented delves into the promising realm of explainable artificial intelligence (AI) and deep learning methods for predicting gait deterioration. The focus is on identifying the impact of cognitive load and Parkinson's disease (PD) on gait patterns, and this is achieved by analyzing spatiotemporal data obtained from sensors placed under the feet. To carry out this investigation, Convolutional Neural Networks (CNNs) were utilized. These powerful neural networks can effectively learn from complex spatiotemporal data and produce highly accurate predictions. In addition, the CNNs were perturbed to provide insights into the features within the spatiotemporal gait ground reaction force (GRF) signals that are most relevant to the models' predictions. The results of this study are presented in detail in the following sections, with each data classification and perturbation analyzed and discussed in depth.
### PD Data
The spatiotemporal signals in figure 6 imply that the gait contains the normal gait events. Abnormal gait, otherwise difficult to detect visually, can be detected by machine learning, in alignment with the knowledge of the ground truth labels. However, the magnitude of the GRF in newtons shows a decrease attributable to the severity of PD. The main objective of this work is to find the best deep learning model for PD severity rating and to relate the model predictions to the gait cycle events shown in figure 4 and figure 10.
Research towards machine learning classification of PD data, specifically the PhysioNet data, has been based on the use of manual feature extraction methods with classical machine learning methods, as shown in table 7. The best classification results from manual extraction are reported in [63] using an SVM classifier (92.7%). Our previous work on PD severity classification [58] reported that the 2D-CNN outperformed the 1D-CNN, SVM, decision tree, logistic regression, multi-layer perceptron and LSTM algorithms. In this article, we explore three architectures for automatic extraction and LRP analysis. The proposed DCNNs identified PD, as well as rated the severity of the deviation from healthy gait, achieving better classification performance with an F1-score of 98% for
Figure 20: LRP methods applied on normal gait samples (from different subjects) from experiment 3 testing data, to identify gait events relevant for the CNN prediction to classify the cognitive load impact on gait. Gait events are: 1,2,3-loading response or Foot flat and Double support.
datasets combined with different random state (see table 3). The best classification accuracy is achieved with the parallel 2D-DCNNs, with mean performance and standard errors of 95.5% and 0.28%, respectively. Additionally, the parallel 2D-DCNNs exhibit robustness at perturbation with Gaussian noise as shown in figure 9. This suggests that the model is adequate for detecting gait deterioration from the spatiotemporal GRF signal. As an additional substantial enhancement, our LRP approach allows classification results to be related to visual observations similar to those established in medical practice to diagnose PD.
The DCNN classifies the raw spatiotemporal signals as healthy or within three severity ratings as shown in the confusion matrix (figure 7). The best LRP method is selected by applying a perturbation technique, which detects the highest sensitivity to removal of information from the input data sequence (figure 8). The selected LRP -SPF was found to be superior to well-known methods such as deconvolution and guided backpropagation.
Figure 21: LRP methods applied on a single subject from experiment 5 testing data (each column is one pair), to identify gait events relevant for the CNN prediction to classify the cognitive load impact on gait. Gait events are: 1- Heel strike, 2- Toe-off, 3- between Foot swing and opposite Heel strike, 4- between Double support and Toe-off.
Among the DCNN architectures (figure 5), the Parallel DCNN model shows the steepest decrease in the perturbation procedure. Therefore, that model is trained and used to generate the heat maps of relevance for randomly selected samples (figure 12), without distinction between the left and right foot signals. The gait cycle events identified as key at each level of PD severity are listed below:
**1)**: PD Severity Level 0 (Healthy Gait): 1- Heel strike and foot flattening (A), 2- Midstance and single support (C). This indicates that the healthy person's ability to maintain balance is stronger than the PD patients', with strong balance suggesting that the forces are applied rhythmically to achieve the lower limbs' synchronized movement with stable posture.
**2)**: PD Severity Level 2: 3- Heel strike (A), 4- Loading response after the double support interval (B). The heatmap shows that the subjects affected with PD level 2 have a weaker heel strike followed by a weaker balance in double support, where this feature is marked by the model with a 96% F1-score.
**3)**: PD Severity Level 2.5: 5- Terminal swing (G), 6- Heel strike (A). This shows that the subject has weaker foot landing or flat foot landing after the balance is compromised by the single support.
**4)**: PD Severity Level 3: 7- Initial swing and mid-swing or toe-off (F), 8- Terminal swing and ready for the heel strike (G).
Here the balance is compromised by a weak GRF resulting from unstable body posture, implying a high risk of falling. This conclusion is based on linking the stages of PD in [68] (a description of how the stage of PD affects the body posture during gait, based on visual observation) to the events that are highlighted by the model for a certain PD severity.
The above markers for classification align with the observations in the literature that PD-induced gait GRF deterioration affects the body balance and posture. The latter have the closest relevance to the gait events identified by the heat maps in figure 12 as having the highest LRP scores, while the other gait events are less significant to the classifications. It is worth mentioning that these markers are identical in 95.5% of the 1281 samples, such that the removal of these regions in 95.5% of the samples resulted
Figure 22: A deconvolution decomposition method applied to explain the parallel DCNN model prediction for PD severity ratings. SA of gait spatiotemporal signals is in black plot and model decomposition using deconvolution for each class in red plot. The deconvolution plot spikes every 10 frames.
in strong decay in the model prediction. The interpretation given above is in very good agreement with the description of the Hoehn and Yahr Scale staging criteria, as follows: "Stage 0 - No signs of disease, Stage 2 - Symptoms on both sides but no impairment of balance, Stage 2.5 - Mild symptoms on both sides, with recovery when the 'pull' test is given (the doctor stands behind the person and asks them to maintain their balance when physically pulled backwards), Stage 3 - Balance impairment, mild to moderate disease, physically independent" [68]. However, the staging criteria do not refer to the gait events adversely influencing the body's postural balance, due to the advancement of disease.
The analysis of LRP score in figure 11 and 12 reveals a consistent spike in the LRP plot for every 10 frames, thus further investigation has been carried out by plotting the model decomposition using deconvolution, Deep Taylor and LRP-SPF for a single stream DCNN model. As shown in figures 22 and 23, the spikes are also consistent in all the plots. Further, the sensors used to record PD and healthy gait in [45],[46],[47] are pressure-sensitive sensors to measure the forces underneath the foot as a function of time at a rate of 100 Hz.
These spikes can be an artefact generated by the data processing, either in the forward pass (classification) or the backward pass (LRP decomposition), due to the pooling layers. However, the data is considered reliable, based on its noise resilience as demonstrated in our perturbation analysis.
### iMAGiMAT Data
#### 5.2.1 Classification of Gait Signatures under Cognitive Load
The present study investigates the importance of the influence of cognitive load on gait inconsistency. We present a comparison of classification performance between 5 types of gait: normal and under cognitive load in 4 different tasks. Deep CNNs not only outperform, unsurprisingly, the classical classifier methods but also achieve an F1-score of 100% (see figure 13 and table 4) for gait signature verification in experiment 1 with 21 healthy adults' data, and 100% prediction of 4 imposters and 17 clients. The learning curve in figure 14 demonstrates the good match of the CNN methodology for gait verification tasks. The network parameters, updated via backpropagation to map gait to the 21 classes during training, are correctly optimized at the validation stage, which is important for making predictions for gait verification at the testing stage.
Experiment 2 is in essence an extra validation of the adequacy of the spatiotemporal sampling of the GRF by the 116 sensors and their fusion, as well as of the classification performance of the trained models. An F1-score of 95% is achieved for test data from an unseen male as well as an unseen female. Although experiment 2 has the character of a sanity check, the results lend support to the value of floor sensor gait data as a biometric.
Experiment 3 is conducted to study the possibility of classifying cognitive load in healthy subjects. It has shown that normal gait is classified with a higher true positive rate compared to any of the classes of gait under cognitive load. This experiment also indicates that the achieved true positive rates in predicting normal gait are higher for the CNN model compared to the classical classifiers (see figure 15 and table 6). The learning curve in figure 16 indicates overfitting [69], implying that the gait patterns under cognitive load diverge among the 21 subjects. Samples obtained under cognitive load are hard to fit due to the inconsistency of gait pattern changes among the subjects.
The results from the first three experiments suggest that while the dual-task data obviously contributes to the high F1-scores in experiments 1 and 2, it results in substantially degraded true positive rates in experiment 3. However, experiment 4 shows that when classifications are within a single subject the performance is notably better: for 16 subjects (out of 21) the F1-score for gait under cognitive load ranges between 80% and 100%, with the remaining 5 subjects ranging between 69% and 77%.
These observations can be discussed in the light of humans having a natural gait pattern evolved over millions of years; however, changes in gait when experiencing cognitive load at any particular instance are specific to the individual, expressing their response to the impaired ability to process cognitive information. In experiment 5, we use binary classifications (see table 5) to distinguish normal gait from gait under the 4 variants of cognitive load. The best classification results are obtained when the model learns normal or dual-task gait features for a single subject. This implies that although learned gait features under cognitive load may not be readily portable across subjects, they are consistent for each individual and can contribute substantially to correct subject classifications; however, the accuracy drops as more subjects are involved.
#### 5.2.2 Interpretation of Classifications
Figures 18, 19, 20 and 21 provide the link between the LRP relevance scores ("heat map") and the time sequence of the calculated _SA_ signal in a single gait cycle window. The LRP score maxima are suitable pointers to the parts of the gait cycle which are most relevant for the classifications. For accurate heat maps of a specific gait class the model's true positive prediction in the confusion matrix must be close to 100% for most of the testing samples, which points to the results from experiment 1 (figures 18 and 19), experiment 3 for normal gait heat maps - in figure 20 and experiment 5 for a single subject predicted gait under the 4 variants of cognitive load - in figure 21. Focusing just on one complete gait period (two steps) is justified by the fact that on multiple repetitive occasions each subject will initiate a gait cycle (see full description
Figure 23: A Deep Taylor decomposition method applied to explain the parallel DCNN model prediction for PD severity ratings. Similar to figure 4.11 and 4.12 the Deep Taylor decomposition plot spikes every 10 frames.
of the gait cycle in figure 4) by performing a heel strike, strictly followed by the other gait events described in figure 17 and ending in a toe-off.
Figure 24: LRP heatmaps validation by perturbation technique for experiment 1. Information with the highest relevance scores is progressively removed and the test samples are re-predicted. Steeper initial decrease indicates better identification of gait events with most weight in the classifications. a) shows the model predictions in 30 steps based on removing relevance scores using LRP Sequential Preset a Flat (LRP-SPF) and random removal of information. b) shows the model performance after 300 steps of information removal.
**iii.**: Gait while texting on a smartphone: the transition from foot swing to the opposite heel strike is significant for distinguishing texting from normal walking.
**iv.**: Gait while talking: the transition from double support to toe-off is important for distinguishing talking from normal walking.
Overall, the LRP analysis indicates that the subjects' normal gait is characterized by the loading response, while the other cognitive-load gait classes are classified by the landing or lifting of the feet on/from the surface of the iMAGiMAT system. For subject verification, many secondary relevance scores are used to predict the identity of the subject based on the gait signature.
Figure 24 shows the assessment of the validity of the LRP heatmaps for subject identification under cognitive load. Here we apply region removal based on both the LRP Sequential Preset a Flat (LRP-SPF) MoRF approach and random region removal, and re-predict the gait class. As shown in figure 24 (a), the model prediction decays strongly when LRP is used to select the removed information, compared to the removal of random information. Figure 24 (b) shows the model performance over 300 steps. It can be seen that the model reaches its lowest accuracy, where the gait class predictions become essentially random. Furthermore, it can be inferred from figure 24 that the model is effective in finding the most relevant regions for identifying subjects and that the LRP is consistent over the test samples.
## 6 Conclusion
To conclude, this research work highlights the effectiveness of deep learning models in accurately classifying gait deterioration in Parkinson's disease patients. The models surpass previous methods that rely on manual feature extraction and are also capable of withstanding perturbation noise, making them highly robust. The LRP analysis confirms that body balance is a critical aspect for diagnosing PD [68], with higher levels of the disease affecting a patient's ability to walk without the risk of falling. The identification of relevant gait cycle events can also aid clinical practitioners in diagnostics, either visually or through quantitative parameters derived from observations. The methodology proposed in this article has the potential to contribute to developing a strategy for personalized longitudinal monitoring of the progression of PD severity. Additionally, the study shows that floor sensors can be used to capture changes in an individual's unique gait signature due to cognitive load, providing potential for biometric and security applications. In healthcare, gait data from floor sensors can contribute to the detection of Parkinson's disease onset and fall risks. The future direction of this research may involve the inobtrusive sampling of subjects' gait under routine conditions over intervals spanning periods of physical and mental changes due to aging, which could contribute to earlier detection of disease onset. Overall, this study demonstrates the vast potential of deep learning models and Explainable AI in the field of gait analysis, which could significantly improve clinical practice and patient outcomes.
|
2303.13916 | Self-Supervised Reversed Image Signal Processing via Reference-Guided
Dynamic Parameter Selection | Unprocessed sensor outputs (RAW images) potentially improve both low-level
and high-level computer vision algorithms, but the lack of large-scale RAW
image datasets is a barrier to research. Thus, reversed Image Signal Processing
(ISP) which converts existing RGB images into RAW images has been studied.
However, most existing methods require camera-specific metadata or paired RGB
and RAW images to model the conversion, and they are not always available. In
addition, there are issues in handling diverse ISPs and recovering global
illumination. To tackle these limitations, we propose a self-supervised
reversed ISP method that does not require metadata and paired images. The
proposed method converts a RGB image into a RAW-like image taken in the same
environment with the same sensor as a reference RAW image by dynamically
selecting parameters of the reversed ISP pipeline based on the reference RAW
image. The parameter selection is trained via pseudo paired data created from
unpaired RGB and RAW images. We show that the proposed method is able to learn
various reversed ISPs with comparable accuracy to other state-of-the-art
supervised methods and convert unknown RGB images from COCO and Flickr1M to
target RAW-like images more accurately in terms of pixel distribution. We also
demonstrate that our generated RAW images improve performance on real RAW image
object detection task. | Junji Otsuka, Masakazu Yoshimura, Takeshi Ohashi | 2023-03-24T11:12:05Z | http://arxiv.org/abs/2303.13916v1 | # Self-Supervised Reversed Image Signal Processing via Reference-Guided Dynamic Parameter Selection
###### Abstract
Unprocessed sensor outputs (RAW images) potentially improve both low-level and high-level computer vision algorithms, but the lack of large-scale RAW image datasets is a barrier to research. Thus, reversed Image Signal Processing (ISP) which converts existing RGB images into RAW images has been studied. However, most existing methods require camera-specific metadata or paired RGB and RAW images to model the conversion, and they are not always available. In addition, there are issues in handling diverse ISPs and recovering global illumination. To tackle these limitations, we propose a self-supervised reversed ISP method that does not require metadata and paired images. The proposed method converts a RGB image into a RAW-like image taken in the same environment with the same sensor as a reference RAW image by dynamically selecting parameters of the reversed ISP pipeline based on the reference RAW image. The parameter selection is trained via pseudo paired data created from unpaired RGB and RAW images. We show that the proposed method is able to learn various reversed ISPs with comparable accuracy to other state-of-the-art supervised methods and convert unknown RGB images from COCO and Flickr1M to target RAW-like images more accurately in terms of pixel distribution. We also demonstrate that our generated RAW images improve performance on real RAW image object detection task.
## 1 Introduction
In general, a sensor RAW image taken by a digital camera is converted into the standard RGB (sRGB) format through an in-camera ISP [20]. Traditional ISPs are essentially optimized to generate compressed and human perceptually pleasant RGB images. Due to the ease of use, numerous RGB images flood the Internet, and their availability underpins recent advances in machine learning-based computer vision technologies. On the other hand, RAW images contain all the captured information, and the relationship between ambient light, pixel intensity, and noise distribution in the RAW domain is much simpler than that in the RGB domain [50]. Therefore, utilizing RAW images directly for downstream tasks potentially achieves greater performance than RGB image-based methods in both low-level and high-level computer vision tasks. In fact, recent studies have shown that RAW image-based image recognition [36, 15, 49] and image processing [3, 53, 54, 28] achieved higher performance than RGB image-based methods. The use of RAW images is expected to improve performance especially in difficult scenes, such as extremely dark or blurry scenes, that should be covered in practical applications. RAW images are also used in research that optimizes existing ISPs for downstream tasks [40, 37, 13, 38, 46, 42, 52] or develops an accurate DNN-based ISP [31, 44, 17]. However, RAW images are hard to view directly and not suitable for daily use. Therefore, it is difficult to obtain enough RAW images for research purposes. In particular, the scarcity of annotated RAW data has been a barrier to machine learning-based approaches. Hence, several reversed ISP methods that convert existing large-scale RGB datasets into pseudo RAW datasets have been studied [3, 53, 7, 47, 8, 22, 2].
The reversed ISP methods can be divided into model-based methods [22, 3] and learning-based methods [53, 7, 47, 2, 8]. UPI [3] is a typical model-based method that defines a series of simple and invertible ISP blocks whose parameters are determined using camera metadata such as white balance gains and color correction matrices. On the other hand, learning-based methods learn the RGB-to-RAW conversion directly from paired RGB and RAW images using a fully DNN-based model [53, 2, 47, 8] or a combination of hand-crafted ISP blocks and shallow CNN models [7, 8]. Learning-based methods are able to achieve more accurate RAW reconstruction than model-based methods.
These methods are pioneers of reversed ISP research and valuable to overcome the shortage of RAW images. However, there are some limitations. First, camera metadata or paired RAW and RGB data is required. Metadata is often not accessible for camera users, and capturing RGB and RAW images simultaneously is not feasible in all cases.
Second, most existing methods assume a single specific ISP pipeline and do not handle RGB images processed by unknown ISP well. This causes misaligned color and brightness distribution when applied to arbitrary RGB datasets. Finally, it is hard to reproduce characteristics of global illumination of a target RAW image from an input RGB image since the effect of global illumination is generally canceled in RGB images by ISPs. Therefore, the existing methods tend to produce different RAW image distribution from the target distribution.
To tackle these problems, in this paper, we present a Self-supervised Reversed ISP method called SRISP that does not require metadata and paired data for training. As in MBISPLD [7], SRISP has multiple parameter dictionaries of a reversed ISP pipeline composed of classical ISP functions and shallow CNNs to cover various kinds of sensors and environments. Then, the parameters are selected by other shallow CNNs to achieve a correct mapping. Unlike MBISPLD, the proposed method (1) only needs source RGB images and unpaired target RAW images to train it with the help of the proposed two types of pseudo image pairs and (2) achieves diverse RGB-to-RAW mapping including illumination effects by conditioning the selection module with global features of the reference target RAW and source RGB images. Our main contributions are:
* Self-supervised reversed ISP learning based on two types of pseudo paired data generated from unpaired target RAW and source RGB images using a randomized traditional ISP and novel self-supervision based on Mean Teacher [45].
* Dynamic parameter selection of reversed ISP blocks using global features of a reference RAW image, which is able to reproduce the target RAW characteristic including global illumination.
* Demonstrate that our method is able to map existing RGB datasets (COCO [29] and Flickr1M [35]) to target RAW datasets (MIT-Adobe FiveK [5], SIDD [1], and LOD Dataset [15]) more accurately than other state-of-the-art methods in terms of pixel distribution.
* Additional experiments that show our learned model contributes to the accuracy improvement in RAW object detection on LOD Dataset.
## 2 Related Work
### Reversed ISP
The learning-based methods are further divided into fully DNN-based methods [53, 2, 47, 8, 25, 10, 56, 21] and hybrid-methods [7, 8]. CycleISP [53] and InvISP [47] are state-of-the-art fully DNN-based methods. CycleISP models RGB-to-RAW and RAW-to-RGB mappings using two DNN branches that are jointly fine-tuned to achieve cycle consistency. InvISP learns a reversible ISP using normalizing flow [24, 14] to produce an invertible RGB image to the original RAW image. While the fully DNN-based methods are expressive, there are issues of interpretability and controllability. On the other hand, MBISPLD [7] employs a hybrid approach combining UPI-like classical reversible ISP blocks and shallow CNNs. Each ISP block has learnable parameters optimized with RGB and RAW image pairs. As for the white balance and color correction blocks, multiple candidate parameters (parameter dictionary) are learned and dynamically selected by shallow CNNs based on intermediate images at inference time. They also use shallow CNNs as the learnable lens shading correction and tone mapping. MBISPLD achieved state-of-the-art RAW reconstruction accuracy while maintaining the interpretability. Our method is an extension of MBISPLD, which is composed of classic ISP blocks and shallow CNNs. The main differences are the parameter selection module and the self-supervised learning method. Our method selects optimal parameters of ISP blocks based on global features of source and reference images and is end-to-end trainable with unpaired RGB and RAW images.
### ISP Optimization and Control
Recently, several methods have been proposed to optimize parameters of a classic ISP to improve the performance of downstream tasks or perceptual image quality [40, 37, 13, 38, 46, 42, 52]. For example, in [46], the differentiable proxy that mimics the behavior of a non-differentiable ISP function using DNN is trained, and the ISP parameters are optimized based on the proxy directly using gradient descent to maximize performance of several downstream tasks. Similarly, Covariance Matrix Adaptation Evolution Strategy [12] is used to optimize black-box ISP parameters [37]. In addition, ReconfigISP [4] optimizes both the combination of ISP blocks and their parameters with a neural architecture search method [30]. Unlike these static ISP optimization, several methods [39, 49, 36, 9] dynamically control ISP parameters such as the digital gain, white balance, denoiser, and tone mapping to enhance the downstream performance. These studies show that even a model-based ISP with limited expressiveness achieves high performance by optimizing or controlling its parameters.
### Pseudo Labeling
In the field of image recognition, pseudo-labeling that treats outputs of a specific model (teacher) as ground truth data is widely used when the real ground truth data is not available or noisy. In particular, Mean Teacher (MT) [45], which uses an Exponential Moving Average (EMA) model of past training step models as a teacher, has been shown to be effective in self-supervised learning, semi-supervised learning, and domain adaptation in recent years
[26, 27, 32, 33]. MT suppresses the error of the pseudo-labels through a temporal model ensemble and is expected to achieve stable training. In this study, we propose an MT-based pseudo-paired data generation method for better training.
## 3 Method
Figure 1 shows an overview of our proposed approach. Let us denote \(\mathcal{X}\) as the RGB image domain and \(\mathcal{Y}\) as the RAW image domain. Our goal is to find the reversed mapping \(f^{-1}:\mathcal{X}\rightarrow\mathcal{Y}\). Similar to UPI [3] and MBISPLD [7], we modeled the mapping function by a series of differentiable and reversible ISP blocks: Global Gain (GG) \(f_{gg}\), White Balance (WB) \(f_{wb}\), Color Correction (CC) \(f_{cc}\), Gamma Correction (GC) \(f_{gc}\), and Tone Mapping (TM) \(f_{tm}\). Note that bilinear demosaicing is applied before \(f_{gg}\) as preprocessing because it is deterministic processing, and the demosaiced image is treated as a RAW image in this paper. The RGB-to-RAW mapping \(f^{-1}\) is defined as follows:
\[f^{-1}=f_{gg}^{-1}\circ f_{wb}^{-1}\circ f_{cc}^{-1}\circ f_{gc}^{-1}\circ f_{ tm}^{-1}. \tag{1}\]
The \(i\)-th ISP block has a parameter dictionary \(D_{\theta_{i}}\) with \(K\) parameter candidates \(\left\{\theta_{i,k}\right\}_{k=1\ldots K}\). The final parameter \(\theta_{i}\) of the \(i\)-th block is determined by a Dynamic Parameter Selector (DPS) \(f_{DPS}\) based on the parameter dictionary so that a given RGB image \(x\) is converted into a corresponding RAW image \(y\). However, in general, if the functions and parameters of the forward mapping are unknown, the inversion estimation is ill-posed as there can be many RAW images corresponding to an input RGB image. It is hard to estimate what the true illumination was and how the RGB image was processed from only the input RGB image since their clues are essentially removed by a forward ISP. Therefore, we solve this problem by giving a reference RAW image \(y_{r}\) to the DPS. Then, the proposed method is able to convert \(x\) processed by an arbitrary ISP into a \(y_{r}\)-like RAW image \(y\). Furthermore, the parameter dictionaries and the DPS are trained using unpaired RGB and RAW images.
We implemented GG, WB, CC, and GC based on UPI. Due to space limitation, we detail them in the supplementary material. The key point is that, while MBISPLD unifies GG into WB with a single parameter dictionary, we designed them separately with independent dictionaries to enhance their flexibility. As for TM, we introduced a new Dynamic Tone Mapping (DTM) described in Section 3.2.
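For concreteness, the chain of inverse blocks in Eq. (1) can be sketched with plain numpy functions. The parameter values below (gain, white-balance gains, color correction matrix) are arbitrary placeholders and the learned dynamic tone mapping is replaced by an identity, so the snippet only illustrates the structure of the reversed pipeline, not the learned model.

```python
import numpy as np

def inv_tone_mapping(x):
    """Placeholder for the learned dynamic tone mapping (identity here)."""
    return x

def inv_gamma(x, gamma=2.2):
    """Inverse gamma correction: encoded values back to linear values."""
    return np.clip(x, 1e-8, 1.0) ** gamma

def inv_color_correction(x, ccm):
    """Undo a 3x3 color correction matrix applied per pixel (x has shape (..., 3))."""
    return x @ np.linalg.inv(ccm).T

def inv_white_balance(x, wb_gains):
    """Undo per-channel white-balance gains."""
    return x / np.asarray(wb_gains)

def inv_global_gain(x, gain):
    """Undo the global digital gain."""
    return x / gain

def rgb_to_raw(rgb, gain=2.0, wb_gains=(2.0, 1.0, 1.5), ccm=np.eye(3)):
    """Eq. (1): f^{-1} = f_gg^{-1} o f_wb^{-1} o f_cc^{-1} o f_gc^{-1} o f_tm^{-1}."""
    x = inv_tone_mapping(rgb)
    x = inv_gamma(x)
    x = inv_color_correction(x, ccm)
    x = inv_white_balance(x, wb_gains)
    return inv_global_gain(x, gain)
```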
### Dynamic Parameter Selector
Following MBISPLD, we designed our DPS to estimate the parameter \(\theta_{i}\) of the \(i\)-th ISP block as a weighted average of the parameter dictionary \(D_{\theta_{i}}\) as follows:
\[\theta_{i}=\Sigma_{k=1}^{K}\,w_{i,k}\theta_{i,k}. \tag{2}\]
Unlike MBISPLD, our model attempts to map an input RGB image \(x\) to a RAW-like image \(y\) similar to reference RAW image \(y_{r}\). To this end, the weights \(w_{i}=\left\{w_{i,k}\right\}_{k=1\ldots K}\) are determined by Reference-guided Weight Estimator (RWE) \(f_{RWE}\) based on 1D global features \(g_{x}=h_{x}\left(x\right)\), \(g_{r}=h_{r}\left(y_{r}\right)\), and \(g_{i}=h_{i}\left(\widetilde{x}_{i}\right)\):
\[w_{i}=f_{RWE}\left(g_{x},g_{r},g_{i}\right), \tag{3}\]
where \(h_{x}\), \(h_{r}\), and \(h_{i}\) are shallow CNNs followed by the global average pooling, and \(\widetilde{x}_{i}\) denotes the intermediate features of the \(i\)-th ISP block. For the blocks except for the DTM, following MBISPLD, \(\widetilde{x}_{i}\) is the concatenated output image processed by the ISP block using the \(K\) parameter candidates. As for the DTM, the input image \(x\) is used as \(\widetilde{x}_{i}\) to avoid heavy computation. As shown in Figure 2, the \(f_{RWE}\) simply fuses the three global features using affine layers (\(f_{x\to r}\), \(f_{prj,i}\), and \(f_{head,i}\)) and element-wise sum operations, and outputs \(w_{i}\) with a softmax operation. The \(f_{x\to r}\) is shared for all ISP blocks and generates the fused global feature \(g_{x\to r}\) of \(g_{x}\) and \(g_{r}\). The \(g_{x\to r}\) is expected to represent general information about how to convert \(x\) to \(y_{r}\). The \(f_{prj,i}\) and \(f_{head,i}\) are trained for each parameter dictionary and select suitable parameters based on \(g_{x\to r}\) and \(g_{i}\). Thanks to \(g_{x\to r}\), our method is able to reproduce features of the reference RAW image and generate diverse pseudo RAW images using \(y_{r}\) randomly sampled from the target RAW data. Note that a full-resolution image is firstly resized to \(256\times 256\), and the ISP parameters are determined based on the resized image to reduce the computational cost. Then, the full-resolution image is processed using the parameters.
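A possible PyTorch sketch of Eqs. (2)–(3) is given below. The feature sizes follow the implementation details of Section 4.2, but the exact way \(g_{x}\) and \(g_{r}\) are fused before \(f_{x\to r}\) (concatenation here) is our assumption based on Figure 2 and should not be read as the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ReferenceGuidedWeightEstimator(nn.Module):
    def __init__(self, k=5, d_global=128, d_block=32):
        super().__init__()
        self.f_x_to_r = nn.Linear(2 * d_global, d_global)  # fuses g_x and g_r (shared across blocks)
        self.f_prj = nn.Linear(d_global, d_block)          # per-dictionary projection
        self.f_head = nn.Linear(d_block, k)                # per-dictionary head -> K weights

    def forward(self, g_x, g_r, g_i):
        g_xr = self.f_x_to_r(torch.cat([g_x, g_r], dim=-1))              # "how to map x to y_r" feature
        return torch.softmax(self.f_head(self.f_prj(g_xr) + g_i), dim=-1)  # Eq. (3), shape (B, K)

def select_parameters(w, dictionary):
    """Eq. (2): theta_i = sum_k w_{i,k} theta_{i,k}; dictionary has shape (K, *param_shape)."""
    return torch.einsum('bk,k...->b...', w, dictionary)
```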
### Dynamic Tone Mapping
The tone mapping maps one color to another color to render a perceptually pleasant image, and the mapping function can be highly complex. To model the complex mapping, we employed a shallow CNN as TM similar to MBISPLD:
\[f_{tm}\left(x,\theta_{1}\right)=\phi_{tm}\left(x,\left\{\theta_{tm,l}\right\}_ {l=1\ldots 4}\right), \tag{4}\]
where \(\phi_{tm}\) is a 4-layer CNN with a 32-channel dimension, and \(\theta_{tm,l}\) denotes the weights of the \(l\)-th layer. In MBISPLD, the \(\phi_{tm}\) consists of only \(1\times 1\) convolutions and is statically optimized. This limited structure helps to stabilize training, but we extended it to cover more diverse mappings. The proposed DTM uses \(3\times 3\) dynamic convolutions [48, 6] whose kernels are dynamically determined by the DPS. In the DTM, the parameter dictionary \(D_{\theta_{tm}}\) is learned for each \(\theta_{tm,l}\). That is, \(D_{\theta_{1}}=\left\{D_{\theta_{tm,l}}\right\}_{l=1\ldots 4}\) and each \(D_{\theta_{tm,l}}\) includes \(K\) candidate parameters for the \(l\)-th layer. The \(\theta_{tm,l}\) is determined as follows:
\[\theta_{tm,l}=f_{DPS}\left(g_{x},g_{r},g_{1},D_{\theta_{tm,l}}\right), \tag{5}\]
where \(f_{DPS}\) expresses (2) and (3). The DTM has different parameter dictionaries for forward and reversed passes since this function is not invertible. Note that the original dynamic convolution selects its parameter based on input features of each convolution. On the other hand, our DTM determines the weights using global features of the input RGB and the reference RAW images.
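The DTM of Eqs. (4)–(5) can be sketched as follows: each of the four \(3\times 3\) convolution layers uses a kernel assembled as a weighted sum of \(K\) candidate kernels selected per sample. The interleaved ReLU activations and the simple per-sample loop are our assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dynamic_tone_mapping(x, weights, dictionaries):
    """x: (B, 3, H, W). weights: list of 4 tensors (B, K) from the DPS.
    dictionaries: list of 4 candidate-kernel tensors (K, C_out, C_in, 3, 3),
    with channels 3 -> 32 -> 32 -> 32 -> 3."""
    out = []
    for b in range(x.shape[0]):                    # per-sample kernels -> simple batch loop
        h = x[b:b + 1]
        for l, dic in enumerate(dictionaries):
            kernel = torch.einsum('k,koihw->oihw', weights[l][b], dic)   # weighted kernel mix
            h = F.conv2d(h, kernel, padding=1)
            if l < len(dictionaries) - 1:
                h = torch.relu(h)                  # activation choice is an assumption
        out.append(h)
    return torch.cat(out, dim=0)
```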
### Pseudo Pair Training
The proposed self-supervised learning using unpaired RGB and RAW images is realized by combining two types of pseudo RGB and RAW image pairs and our reference-guided DPS. The two types of pseudo-pairs are (1) \(\mathrm{PP}_{rand}\): the real-RAW \(y\) and pseudo-RGB \(\hat{x}\) pair generated by a random ISP and (2) \(\mathrm{PP}_{MT}\): the real-RGB \(x\) and pseudo-RAW \(\hat{y}\) pair generated by self-supervision based on Mean Teacher (MT) [45]. These pseudo-pairs do not represent the correspondence between real RGB and RAW images. Therefore, learning a fixed reversed pipeline from these pairs results in poor generalization to real RGB and RAW pairs. Moreover, MT converges to a meaningless solution when there is no correct label because MT basically just reduces the noise of the labels. However, when combined with the reference-guided DPS, these pseudo-pairs act as correct supervision. That is, in terms of dynamic parameter selection, our model is able to learn how to determine the mapping parameters for the given pair based on the reference, even if the pair is not a true pair. Furthermore, the learned parameter selection works with an unknown true RGB and RAW image pair.
The first pseudo pair, \(\mathrm{PP}_{rand}\), is defined as:
\[(\hat{x},y)=\left(\mathrm{ISP}_{rand}\left(y\right),y\right), \tag{6}\]
where \(\mathrm{ISP}_{rand}\) is a simple forward ISP pipeline, the same as in UPI [3]. Unlike UPI, the parameters of GG, CC, and GM are randomly determined independent of the target mapping, and a simple Gray-world algorithm [11] is used as WB. The implementation details are in the supplementary material. Although \(\mathrm{ISP}_{rand}\) is different from a real camera ISP, it is able to produce perceptually acceptable RGB images. By learning to reproduce the real RAW image \(y\) from this pseudo-RGB image \(\hat{x}\) as shown in Figure 3, the proposed method learns the basic procedure of the RGB-to-RAW mapping. Unfortunately, RGB images generated by \(\mathrm{ISP}_{rand}\) do not cover the true diverse distribution of RGB images and it may degrade RAW reconstruction quality when the input is a real RGB image. Therefore, the second pseudo-pair is needed.
The second pseudo-pair, \(\mathrm{PP}_{MT}\), is defined as:
\[(x,\hat{y})=\left(x,f_{teacher}^{-1}\left(x,y_{r},D_{\theta_{teacher}}\right) \right), \tag{7}\]
where \(f_{teacher}^{-1}\) and \(D_{\theta_{teacher}}\) are MTs of \(f^{-1}\) and \(D_{\theta}\), respectively. We input a real RGB image \(x\) and reference real RAW image \(y_{r}\) randomly sampled from the RAW dataset into the MT and obtain an output RAW-like image \(\hat{y}\). Note that \(x\) and \(y_{r}\) are unpaired. Figure 4 shows how to train the model using this pseudo pair. The key point here is to give \(\hat{y}\) as the reference \(y_{r}\) to the student model. This enables
Figure 1: Our SRISP framework. An input RGB image and a reference RAW image are converted into 1D global features, and ISP parameters are dynamically determined based on them. To model complex mappings, a new reference-based dynamic CNN is incorporated as Dynamic Tone Mapping. Two types of pseudo-paired RGB and RAW images are used for stable training.
Figure 2: Reference-guided Weight Estimator \(f_{RWE}\) determines ISP parameter selection weights based on 1D global features calculated from an input image, a reference image, and intermediate features of the \(i\)-th ISP block.
the student model to learn the RGB-to-RAW mapping without any contradiction even when the MT produces unnatural pseudo RAW images. On the other hand, if the real RAW image \(y\) is set to \(y_{r}\) and \(\hat{y}\) is used only for student loss calculation as in the standard MT, the mismatch between \(y_{r}\) and the target \(\hat{y}\) leads to a wrong mapping. Note that this self-supervised learning must be combined with the \(\mathrm{PP}_{rand}\) because the MT by itself cannot drive the model toward the true mapping. The \(\mathrm{PP}_{rand}\) enables the model to learn how to perform basic conversions to true RAW images, and the \(\mathrm{PP}_{MT}\) enables the model to learn conversions from a variety of true RGB images. By combining these two with the reference-guided DPS, the proposed method achieves unpaired learning.
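The Mean Teacher used for \(\mathrm{PP}_{MT}\) is a standard EMA copy of the student (decay 0.999, see Section 4.2). A minimal sketch of the EMA update and of the key trick of feeding the teacher output \(\hat{y}\) back to the student as its reference is given below; the model signature `model(x_rgb, y_reference)` is a hypothetical stand-in for the full pipeline.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Mean Teacher: exponential moving average of the student parameters."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

def pp_mt_step(student, teacher, x_rgb, y_ref, loss_fn):
    """PP_MT: the teacher produces a pseudo RAW image from an unpaired reference,
    and the student must reproduce it when that same pseudo RAW is given as its reference."""
    with torch.no_grad():
        y_hat = teacher(x_rgb, y_ref)       # pseudo RAW label
    y_pred = student(x_rgb, y_hat)          # y_hat plays the role of y_r for the student
    return loss_fn(y_pred, y_hat)
```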
### Losses
Our goal is to build reversed ISP pipeline \(f^{-1}\) with its forward pipeline \(f\) is also trained as a constraint. Hence, the following bi-directional loss function is used:
\[L_{bi}\left(x,y\right)=\left|f_{gc}\left(y\right)-f_{gc}\left(f^{-1}\left(x \right)\right)\right|+\left|x-f\left(y\right)\right|, \tag{8}\]
where \(f_{gc}\) is a gamma transformation with \(\gamma=2.2^{-1}\) introduced to encourage learning of dark areas as well as bright areas. A similar idea was employed in [53]. We first process \(f^{-1}\), and the same weights for the parameter selection are used for \(f\). The final loss is a weighted sum of the losses for the first and second pseudo-pairs:
\[L=L_{bi}\left(\hat{x},y\right)+\alpha L_{bi}\left(x,\hat{y}\right), \tag{9}\]
where \(\alpha\) is a weight parameter and is set to 0.3 in this paper. The second term is not used in the first 15 epochs because outputs of the MT are not very meaningful.
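Eqs. (8)–(9) translate directly into a short loss routine; a sketch assuming per-pixel L1 averaged over the batch (the reduction is not specified here) and callables `f_forward` and `f_reverse` that share the selected parameters.

```python
import torch

def gamma_transform(x, g=1.0 / 2.2):
    """f_gc with gamma = 2.2^{-1}, emphasizing dark regions in the loss."""
    return torch.clamp(x, min=1e-8) ** g

def bi_directional_loss(x_rgb, y_raw, f_forward, f_reverse):
    """L_bi(x, y) = |f_gc(y) - f_gc(f^{-1}(x))| + |x - f(y)|   (Eq. 8)."""
    raw_term = torch.mean(torch.abs(gamma_transform(y_raw) - gamma_transform(f_reverse(x_rgb))))
    rgb_term = torch.mean(torch.abs(x_rgb - f_forward(y_raw)))
    return raw_term + rgb_term

def total_loss(pair_rand, pair_mt, f_forward, f_reverse, alpha=0.3):
    """L = L_bi(x_hat, y) + alpha * L_bi(x, y_hat)   (Eq. 9)."""
    x_hat, y = pair_rand
    x, y_hat = pair_mt
    return (bi_directional_loss(x_hat, y, f_forward, f_reverse)
            + alpha * bi_directional_loss(x, y_hat, f_forward, f_reverse))
```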
## 4 Experiments
### Datasets
We evaluated our method on three RAW image datasets and two RGB image datasets widely used in various research including high-level computer vision studies. **MIT-Adobe-FiveK Dataset**[5]. We used the same train and test set used in [47] for Canon EOS 5D (777 images) and Nikon D700 (590 images). The LibRaw library was used to render RGB images from RAW images. **SIDD**[1]. This dataset provides 320 RGB and RAW image pairs captured by five smartphone cameras under different lighting conditions for training and 1280 patches for validation. **LOD Dataset**[15]. LOD Dataset contains low-light and normal-light RAW image pairs of eight object categories. We used normal-light images with 1830 training and 400 test images. We converted the original images to the DNG format by the Adobe DNG Converter and used full-size thumbnail images as RGB images. **COCO Dataset**[29]. Large-scale real-world object images in RGB format are provided. We used 2017 Train images (118K images) for the \(\mathrm{PP}_{MT}\) training and 2017 Val images (5K) for unpaired evaluation. **Flickr 1 Million Dataset**[35]. This dataset also provides diverse real-world RGB images. We randomly sampled 200K images for the \(\mathrm{PP}_{MT}\) training and 5K images for unpaired evaluation from 1M images.
### Implementation Details
We used the following settings for all experiments. The image encoder \(h_{r}\) and \(h_{x}\) were 5-layer CNNs with \(3\times 3\) convolutions of stride 2 followed by ReLU activation except for the last layer. The channel sizes were {32, 64, 128, 128, 128}. The \(h_{i}\) had slightly different structure, i.e., 4-layer CNN with {32, 32, 32, 32} channel size. The global average pooling was applied to the outputs of \(h_{r}\), \(h_{x}\), and \(h_{i}\). The parameter dictionary size \(K\) was 5. The length of global feature vector \(g_{r}\), \(g_{x}\), \(g_{x\to r}\) and \(g_{i}\) were 128, 128, 128, and 32, respectively. The network was trained for 800 epochs from scratch using Adam optimizer [23]. The initial learning rate was \(10^{-4}\) with decay of 0.1 after 250 and 500 epochs. The mini-batch size was 24. As for our method,
Figure 4: Mean Teacher model for creating the real RGB and pseudo RAW pair for training.
Figure 3: Randomized traditional ISP for creating the real RAW and pseudo RGB pair for training.
16 images were the first pseudo pairs and 8 images were the second pseudo pairs. The EMA decay of MT was 0.999. We used whole images for training rather than cropped patches [47]. In the training, the input images were resized into \(256\times 256\), and random flip and rotation were applied. All RAW images were normalized into [0, 1] using the black-level and white-level and applied the bilinear demosaicing for both training and evaluation.
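The global feature encoders \(h_{x}\) and \(h_{r}\) described above can be written compactly as below, following the stated configuration (five \(3\times 3\) stride-2 convolutions, channels {32, 64, 128, 128, 128}, ReLU after all but the last layer, global average pooling); the input channel count of \(h_{i}\) depends on the concatenated candidate outputs and is left at its default here.

```python
import torch.nn as nn

def make_global_encoder(channels=(32, 64, 128, 128, 128), in_ch=3):
    """Image -> 1D global feature: strided 3x3 convolutions followed by global average pooling."""
    layers, c_prev = [], in_ch
    for i, c in enumerate(channels):
        layers.append(nn.Conv2d(c_prev, c, kernel_size=3, stride=2, padding=1))
        if i < len(channels) - 1:
            layers.append(nn.ReLU(inplace=True))
        c_prev = c
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten()]
    return nn.Sequential(*layers)

h_x = make_global_encoder()                                # RGB-image encoder (g_x, length 128)
h_r = make_global_encoder()                                # reference-RAW encoder (g_r, length 128)
h_i = make_global_encoder(channels=(32, 32, 32, 32))       # intermediate-feature encoder (g_i, length 32)
```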
### Results
We compared our method against several state-of-the-art methods: **UPI**[3], a model based invertible ISP. We used the official parameters determined by metadata of Darmstadt Noise Dataset [41]. **CycleISP**[53], a DNN-based reversed ISP method. We utilized their pre-trained rgb2raw joint model trained with MIT-Adobe-FiveK Dataset and SIDD. The final mosaicing function was removed for our evaluation. **U-Net**[8], a simple U-Net [43] based method used in [8] as a baseline. We removed the final interpolation layer for mosaicing. **MBISPLD**[7], a hybrid method of the model-based and learning-based approach. We implemented this method by ourselves because there was no public code at that time. Our implemented MBISPLD consisted of unified Gain&WB, CC, GM with a single parameter, and TM with static 1x1 convolutions. Note that we did not use the Mosaicing and Lens-shading blocks as with our method. As for U-Net and MBISPLD, we trained them from scratch for each dataset using real RGB and RAW pairs with the same training settings of our method.
We also evaluated several variations of our method. We trained our model with the real image pairs same as U-Net and MBISPLD instead of the \(\mathrm{PP}_{rand}\). We denote it as "Real" and the original setting as "Pseudo" training. "Ours w/o \(\mathrm{PP}_{MT}\)" is the model trained without the \(\mathrm{PP}_{MT}\). "Ours (Flickr/COCO)" is the model trained with the \(\mathrm{PP}_{MT}\) generated from each RAW dataset and Flickr or COCO. "Ours (All)" is the model trained with all RAW and RGB datasets.
#### 4.3.1 RAW Image Reconstruction
It is difficult to quantitatively evaluate how well the input RGB image is mapped to a RAW image similar to the reference RAW image, because a perfect ground truth cannot be created in principle. Hence, in this evaluation, we divided each RAW and RGB image in half, and used the left RAW image as a reference \(y_{ref}\), the right RGB image as an input \(x\), and the right RAW image as a ground truth \(y\), with the assumption that the left and right regions have the same characteristics of the sensing device and lighting. Table 1 shows reconstruction results on each dataset in terms of PSNR [dB] and the mean Angular Error (AE) [\({}^{\circ}\)] [16] between the predicted color and the true color. All variants of our method achieved lower AE than the other methods, which indicates more accurate reproduction of the global illumination. It is also shown that our proposed \(\mathrm{PP}_{MT}\) (Flickr/COCO/All) reduced AE when only pseudo image pairs (Pseudo) were used. This is because the influence of the difference between the pseudo RGB images generated by \(\mathrm{ISP}_{rand}\) and real RGB images was reduced by \(\mathrm{PP}_{MT}\). Furthermore, it is surprising that ours (All) achieved comparable PSNR and lower AE compared to the other methods learned with the real pairs. We consider that our method benefited from the data volume of multiple datasets thanks to the flexible pipeline based on the proposed DPS. In the setting using the real image pairs (Real), our methods further outperformed the other methods. It was hard for the other methods to solve the one-to-many mapping. On the other hand, ours was able to solve it by reformulating it as a one-to-one mapping using the reference image.
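For reference, the two metrics can be computed as in the sketch below (images assumed to be arrays in [0, 1]); the mean angular error is taken as the usual per-pixel angle between the predicted and ground-truth color vectors.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

def mean_angular_error(pred, gt, eps=1e-8):
    """Mean angle (in degrees) between predicted and true color vectors, computed per pixel."""
    p = pred.reshape(-1, 3)
    g = gt.reshape(-1, 3)
    cos = np.sum(p * g, axis=1) / (np.linalg.norm(p, axis=1) * np.linalg.norm(g, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
```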
#### 4.3.2 Robustness to Unknown ISPs & Sensors
Table 2 shows the results of evaluating each method learned in Table 1 against MIT-Adobe FiveK dataset. We chose RGB images generated by unknown ISPs manually tuned by expert C per image as input RGB images. The proposed
\begin{table}
\begin{tabular}{c|l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Train} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Nikon D700} & \multicolumn{2}{c|}{Canon EOS 5D} & \multicolumn{2}{c}{SIDD} & \multicolumn{2}{c}{LOD} \\ \cline{3-10} & & AE\(\downarrow\) & PSNR\(\uparrow\) & AE\(\downarrow\) & PSNR\(\uparrow\) & AE\(\downarrow\) & PSNR\(\uparrow\) & AE\(\downarrow\) & PSNR\(\uparrow\) \\ \hline \multirow{6}{*}{Real} & UPI [3] & 7.80 & 27.93 & 7.35 & 32.97 & 8.82 & 36.29 & 8.40 & 30.69 \\ & CycleISP [53] & 8.80 & 30.51 & 9.80 & 32.14 & 9.49 & 42.07 & 10.35 & 22.33 \\ \hline \multirow{6}{*}{Pseud} & U-Net [8] & 4.83 & 38.01 & 4.73 & 41.53 & 7.76 & 45.44 & 6.06 & 38.94 \\ & MBISPLD [7] & 4.72 & 38.49 & 4.72 & 41.38 & 7.60 & 45.56 & 6.22 & 37.74 \\ \cline{1-1} & Ours w/o \(\mathrm{PP}_{MT}\) & **2.53** & 43.05 & **2.84** & **45.92** & **4.77** & **49.20** & **4.62** & **40.20** \\ \cline{1-1} & Ours (Flickr) & 2.56 & 43.13 & 2.93 & 45.54 & 4.92 & 48.72 & 4.64 & **40.20** \\ \cline{1-1} & Ours (COCO) & 2.57 & **43.32** & 2.90 & 45.64 & 4.96 & 48.85 & 4.64 & 39.66 \\ \hline \multirow{6}{*}{Pseudo} & Ours w/o \(\mathrm{PP}_{MT}\) & 4.55 & 34.67 & 4.00 & 37.89 & 6.86 & 42.79 & 5.05 & 35.17 \\ \cline{1-1} & Ours (Flickr) & 3.81 & 35.51 & 3.72 & 37.97 & 5.84 & 43.68 & 4.91 & 34.64 \\ \cline{1-1} & Ours (COCO) & 3.61 & 35.52 & 3.59 & 38.36 & 6.00 & 43.51 & 4.90 & 34.96 \\ \cline{1-1} & Ours (All) & 3.02 & 38.80 & 3.20 & 41.18 & 5.23 & 46.23 & 4.91 & 34.89 \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative RAW reconstruction results among our methods and other methods. The characters in parentheses of ours denotes the dataset used for the \(\mathrm{PP}_{MT}\). βRealβ and βPseudoβ indicate whether the real paired data or the \(\mathrm{PP}_{rand}\) was used for training.
method reconstructed them with higher accuracy than the other methods. The proposed models learned by the pseudo pairs were more accurate than those learned by the real pairs. This indicates that the training without the assumption of a single pipeline is a key to realize the generalized parameter control. For a similar evaluation, Table 3 shows the results of applying the model learned on each dataset to another dataset. Note that FiveK, SIDD, and LOD used different ISPs for rendering RGB images. Our method generalized well to arbitrary ISPs and sensors, although other methods were only accurate for the learned ISP.
#### 4.3.3 Unpaired Evaluation
We evaluated whether each method was able to convert the actual RGB datasets (i.e., Flickr and COCO) to RAW-like datasets. Since there is no ground truth, we compared the distribution of pixel values between all reference and generated RAW images. Specifically, we used the Lab Histogram Intersection (HI), which compares the color marginal distributions of images in the Lab color space [18]. Table 4 reports the average histogram intersection over all channels of all pixels. The proposed method using the two types of pseudo pairs produced the closest distribution to the reference compared to the other methods. The score of ours (All) was worse on SIDD compared to ours learned for each dataset. The SIDD differs from other datasets in that it has fewer images and contains significantly dark RAW images. Those samples are a minority in the mixed dataset, and this might have caused the degradation of ours (All). However, ours (All) still achieved better reconstruction results compared to the other methods.
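A sketch of the metric is given below; how the RAW images are mapped to the Lab space and the histogram binning are not specified in this section, so the `rgb2lab` conversion and the 64-bin histograms are assumptions for illustration.

```python
import numpy as np
from skimage.color import rgb2lab

def lab_histogram_intersection(imgs_a, imgs_b, bins=64):
    """Mean histogram intersection over the L, a, b channels of two image collections."""
    lab_a = np.concatenate([rgb2lab(im).reshape(-1, 3) for im in imgs_a], axis=0)
    lab_b = np.concatenate([rgb2lab(im).reshape(-1, 3) for im in imgs_b], axis=0)
    scores = []
    for c in range(3):
        lo = min(lab_a[:, c].min(), lab_b[:, c].min())
        hi = max(lab_a[:, c].max(), lab_b[:, c].max())
        ha, _ = np.histogram(lab_a[:, c], bins=bins, range=(lo, hi))
        hb, _ = np.histogram(lab_b[:, c], bins=bins, range=(lo, hi))
        scores.append(np.minimum(ha / ha.sum(), hb / hb.sum()).sum())
    return float(np.mean(scores))
```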
#### 4.3.4 Ablation Studies
Table 5 shows the effectiveness of our proposed modules, i.e., DPS, GG, DTM, and \(\mathrm{PP}_{MT}\), evaluated on the Canon EOS 5D. Each module contributed to the performance. In particular, the effect of DPS was significant (-5.87\({}^{\circ}\), +8.27dB), followed by DTM (-0.55\({}^{\circ}\), +1.02dB) and GG (-0.04\({}^{\circ}\), +1.14dB). The \(\mathrm{PP}_{MT}\) mainly contributed to generalization to unknown ISPs. On the other hand, all metrics were degraded if we replaced \(\mathrm{PP}_{MT}\) with \(\mathrm{PP}_{MT}^{-}\) or SL. From the result, we concluded that the proposed \(\mathrm{PP}_{MT}\) effectively realized learning on the unpaired data.
#### 4.3.5 Qualitative Results
We show qualitative comparisons against the other methods in Figure 5. The first and the second row show the results for the normal Canon EOS 5D and the Expert C image, respectively. The small images on the results of our method are the given reference images. The proposed method produced RAW-like images that have less color or brightness misalignment than the other methods. Although MBISPLD learned with the real pairs of Canon EOS 5D, it failed to reproduce the ambient light color that is removed in the input RGB image. The Expert C image was manually adjusted to be bright, so the other methods resulted in producing brighter images than the real RAW image. On the other hand, our method produced more GT-like RAW image thanks to the reference guidance.
#### 4.3.6 RAW Image Object Detection
We used the LOD RAW-like images converted from the COCO's RGB images by each method to learn 8 class object detection on the LOD dataset. Following [15], CenterNet [55] model pre-trained with COCO's RGB images was fine-tuned using the LOD RAW-like images. The details are in the supplementary material. Table 6 shows the detection accuracy (mAP) of the trained model on the real LOD RAW images for normal-light condition. MBISPLD with the dictionary augmentation [7] (+DA) was also evaluated. Ours achieved the greatest accuracy improvement although all methods improved the pre-trained model. Our method successfully generated pseudo-RAW images effective for training in the down-stream task without the paired RGB and RAW images or metadata.
\begin{table}
\begin{tabular}{c|l|l|l|l|l} \hline \multirow{2}{*}{Train} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Nikon Expert C} & \multicolumn{2}{c}{Canon Expert C} \\ \cline{3-6} & & AE\({}_{\downarrow}\) & PSNR\(\uparrow\) & AE\({}_{\downarrow}\) & PSNR\(\uparrow\) \\ \hline \multirow{3}{*}{Nixon} & UPI [3] & 10.73 & 26.31 & 11.01 & 27.12 \\ & CycleISP [53] & 11.01 & 21.73 & 11.31 & 22.11 \\ \hline \multirow{3}{*}{Real} & U-Net [8] & 8.28 & 19.81 & 9.08 & 20.35 \\ & MBISPLD [7] & 7.61 & 19.36 & 8.64 & 20.26 \\ & Ours w/o \(\mathrm{PP}_{MT}\) & 9.62 & 23.47 & 7.15 & 28.65 \\ \hline \multirow{3}{*}{Pseudo} & Ours w/o \(\mathrm{PP}_{MT}\) & 6.22 & 31.20 & 5.39 & 33.35 \\ & Ours (Flickr) & 5.43 & **32.12** & 5.06 & 33.61 \\ \cline{1-1} & Ours (COCO) & 5.36 & 31.77 & 4.99 & 33.22 \\ \cline{1-1} & Ours (All) & **4.68** & 31.53 & **4.53** & **33.67** \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative RAW reconstruction results on the images processed by the unknown ISP tuned by an expert [5].
\begin{table}
\begin{tabular}{c l|l|l|l|l|l} \hline \multirow{2}{*}{Train} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{Test PSNR\(\uparrow\)} \\ \cline{3-6} & & Nikon & Canon & SIDD & LOD \\ \hline \multirow{3}{*}{Nikon} & R & U-Net & 38.01 & **39.62** & 33.84 & 19.00 \\ & R & MBISPLD & **38.38** & 39.18 & 33.60 & 18.63 \\ & P & Ours (Flickr) & 35.31 & 37.77 & **42.81** & **34.11** \\ \hline \multirow{3}{*}{Canon} & R & U-Net & 36.93 & **41.53** & 35.02 & 19.52 \\ & R & MBISPLD & **37.39** & 41.35 & 34.77 & 19.27 \\ & P & Ours (Flickr) & 35.35 & 37.97 & **43.19** & **34.43** \\ \hline \multirow{3}{*}{SIDD} & R & U-Net & 29.98 & 32.16 & 45.44 & 21.46 \\ & R & MBISPLD & 30.74 & 32.85 & **45.56** & 21.05 \\ \cline{1-1} & P & Ours (Flickr) & **34.70** & **37.49** & 43.68 & **32.37** \\ \hline \multirow{3}{*}{LOD} & R & U-Net & 24.08 & 26.05 & 35.32 & **38.94** \\ \cline{1-1} & R & MBISPLD & 24.13 & 26.16 & 35.22 & 37.73 \\ \cline{1-1} & P & Ours (Flickr) & **37.69** & **40.18** & **44.99** & 34.64 \\ \hline \end{tabular}
\end{table}
Table 3: Cross dataset evaluation to test genralization to different cameras. βRβ and βPβ denote the real-pair and the \(\mathrm{PP}_{rand}\) training, respectively.
## 5 Conclusion
In this paper, we have proposed a self-supervised reversed ISP method that does not require metadata and paired images. The proposed method is able to handle diverse RGB-to-RAW mappings by learning how to control the mapping parameters based on a given reference RAW image. Furthermore, the entire pipeline is trainable using unpaired RGB and RAW images. The experiments showed the proposed method successfully produced the target RAW-like images. We hope this approach will contribute to the future progress of the RAW image-based computer vision research.
\begin{table}
\begin{tabular}{l|c} \hline Fine-tuning data & [email protected]:0.95 \\ \hline No fine-tuning & 41.60 \\ UPI [8] & 46.03 \\ MBISPLD [7] & 46.37 \\ MBISPLD+DA [7] & 46.47 \\ Ours (COCO) & **47.17** \\ \hline \end{tabular}
\end{table}
Table 6: Object detection accuracy on LOD Dataset with fine-tuning using RAW-like data converted from COCO Dataset
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Train} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Flickr Histogram Intersection\(\uparrow\)} & \multicolumn{3}{c}{COCO Histogram Intersection\(\uparrow\)} \\ \cline{3-10} & & Nikon & Canon & SIDD & LOD & Nikon & Canon & SIDD & LOD \\ \hline \multirow{6}{*}{Real} & UPI [3] & 0.731 & 0.761 & 0.707 & 0.662 & 0.692 & 0.732 & 0.659 & 0.640 \\ & CycleISP [53] & 0.408 & 0.389 & 0.511 & 0.398 & 0.407 & 0.390 & 0.518 & 0.401 \\ \cline{1-1} \cline{2-10} & U-Net [8] & 0.791 & 0.770 & 0.719 & 0.903 & 0.772 & 0.755 & 0.685 & 0.903 \\ & MBISPLD [7] & 0.795 & 0.768 & 0.731 & 0.883 & 0.785 & 0.765 & 0.715 & 0.896 \\ & Ours w/o \(\mathrm{PP}_{MT}\) & 0.715 & 0.822 & 0.688 & 0.899 & 0.686 & 0.817 & 0.651 & 0.899 \\ & Ours (Flickr/COCO) & 0.851 & 0.895 & 0.762 & **0.965** & 0.877 & 0.921 & 0.799 & **0.967** \\ \hline \multirow{6}{*}{Pseudo} & Ours w/o \(\mathrm{PP}_{MT}\) & 0.909 & 0.900 & 0.849 & 0.897 & 0.905 & 0.924 & 0.841 & 0.897 \\ & Ours (Flickr/COCO) & 0.930 & 0.940 & **0.852** & 0.959 & 0.931 & **0.946** & **0.845** & 0.959 \\ \cline{1-1} & Ours (All) & **0.937** & **0.945** & 0.816 & 0.949 & **0.935** & **0.946** & 0.812 & 0.952 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Histogram Intersection (HI) in Lab color space between the generated RAW images and the reference RAW images.
Figure 5: Qualitative RAW reconstruction results for Canon EOS 5D. The input RGB in the first row was processed by Libraw, and that in the second row was processed by the unknown ISP (expert tuning). The small images on ours are the reference images.
\begin{table}
\begin{tabular}{l l l|c|c|c|c|c} \hline \hline \multicolumn{3}{c|}{Module} & \multicolumn{3}{c|}{Canon EOS 5D} & Flickr \\ \hline DPS & GG & DTM & UT & AE\(\downarrow\) & PSNR\(\uparrow\) & HI\(\uparrow\) \\ \hline \multirow{3}{*}{\(\checkmark\)} & & & & 10.46 & 27.47 & 0.815 \\ & & & & 4.59 & 35.74 & 0.898 \\ \cline{1-1} \cline{2-6} |
2310.05094 | Shear-Induced Phase Behavior and Topological Defects in Two-Dimensional
Crystals | We investigate through numerical simulations how a two-dimensional crystal
yields and flows under an applied shear. We focus over a range that allows us
to both address the response in the limit of an infinitesimal shear rate and
describe the phase behavior of the system at a finite shear rate. In doing so,
we carefully discuss the role of the topological defects and of the finite-size
effects. We map out the whole phase diagram of the flowing steady state in the
plane formed by temperature and shear rate. Shear-induced melting of the
two-dimensional crystal is found to proceed in two steps: first, the solid
loses long-range bond-orientational order and flows, even for an infinitesimal
shear rate (in the thermodynamic limit). The resulting flowing hexatic phase
then melts to a flowing, rather isotropic, liquid at a finite shear rate that
depends on temperature. Finally, at a high shear rate, a third regime
corresponding to a strongly anisotropic string-like flowing phase appears. | Federico Ghimenti, Misaki Ozawa, Giulio Biroli, Gilles Tarjus | 2023-10-08T09:50:20Z | http://arxiv.org/abs/2310.05094v1 | # Shear-Induced Phase Behavior and Topological Defects in Two-Dimensional Crystals
###### Abstract
We investigate through numerical simulations how a two-dimensional crystal yields and flows under an applied shear. We focus over a range that allows us to both address the response in the limit of an infinitesimal shear rate and describe the phase behavior of the system at a finite shear rate. In doing so, we carefully discuss the role of the topological defects and of the finite-size effects. We map out the whole phase diagram of the flowing steady state in the plane formed by temperature and shear rate. Shear-induced melting of the two-dimensional crystal is found to proceed in two steps: first, the solid loses long-range bond-orientational order and flows, even for an infinitesimal shear rate (in the thermodynamic limit). The resulting flowing hexatic phase then melts to a flowing, rather isotropic, liquid at a finite shear rate that depends on temperature. Finally, at a high shear rate, a third regime corresponding to a strongly anisotropic string-like flowing phase appears.
## I Introduction
How do crystals flow under an applied shear? This question can be viewed from two different perspectives. On the one hand, one may envisage the onset of flow as an instance of a _yielding transition_ between an elastically responding rigid solid and a plastically flowing phase [1]. This pertains to a broad field of research within mechanics, soft-condensed matter and statistical physics which involves a very wide range of materials from granular media, foams, and a whole variety of so-called yield-stress fluids to all kinds of harder solids such as glasses and to crystalline materials [2; 3]. One is then concerned with the mechanisms inducing plasticity, the properties of the flow, the existence and the value of the yield stress, the nature of the yielding transition, and all means to control the way the solids yield without breaking too soon. Alternatively, one may consider the phenomenon in a more specific way as a _shear-induced melting transition_ associated with some symmetry restoration and enquire how this transition proceeds and differs (or not) from the melting of the quiescent crystal in equilibrium [4].
Plasticity in crystals is known to be due to the presence of defects in the structure, above all topological defects in the form of dislocations. In many real systems they are present in a rather large quantity and, having been trapped in the solid during its preparation, they are out of equilibrium. Here instead we are interested in starting with _perfect equilibrium crystals_, which, as a result, only contain thermal topological defects compatible with the fixed nonzero temperature. We focus on the steady state reached by imposing a constant shear (strain) rate and do not address transient effects that may give a different angle on the yielding transition. Furthermore, we consider a two-dimensional crystal, as for instance experimentally studied in colloidal suspensions [5; 6], hexagonal columnar liquid crystals [7], complex plasmas [8], and for which more analytical work is possible in the context of the KTNHY theory of melting [9; 10; 11; 12; 13]. In two dimensions the crystal has only quasi-long-range translational (crystalline) order but long-range bond-orientational order. (Note that here and below we use for convenience the terminology "crystal" even in two dimensions where there is no long-range translational order; this is an abuse of language but should not lead to any confusion.) Melting in equilibrium may take place through two distinct transitions that are associated with the unbinding of bound topological defects and are separated by an intermediate "hexatic" phase. The crystal-to-hexatic transition corresponds to the appearance of free dislocations, and the resulting hexatic phase only has quasi-long-range bond
orientational order. The hexatic-to-liquid transition corresponds to the unbinding of the dislocations into free disclinations which therefore also break the quasi-long-range order and fully restore translational and bond-orientational invariance.
Our goal is to investigate how a two-dimensional crystal yields and flows under an applied shear over a range of rates that allows us to both address the response in the limit of an infinitesimal shear rate and describe the phase behavior of the system at finite rate. It has been theoretically established [14; 15; 16] that even a perfect crystal flows for an infinitesimal shear so that the notion of yield stress is only a time-dependent property which should vanish for a large, yet finite, observation time (even in the thermodynamic limit). A viscosity can then be defined but it diverges in a singular manner for a vanishing shear rate. We give numerical evidence for these predictions and discuss the mechanism by which this takes place in two-dimensional crystals. For larger shear rates we provide a description of the shear-induced melting and of the properties of the phases that are observed in a steady state.
## II Model, method, and phase diagram
We numerically study a model of dense monodisperse colloidal crystals under simple shear in two dimensions. We consider the situation where hydrodynamic interactions and inertial effects can be neglected and we perform a Brownian (overdamped Langevin) dynamics for the position \({\bf r}_{i}=(x_{i},y_{i})\) of each particle under a constant and uniform applied strain rate \(\dot{\gamma}\)[17]:
\[\zeta\frac{{\rm d}{\bf r}_{i}}{{\rm d}t}=-\sum_{j\neq i}\frac{\partial v({\bf r }_{i}-{\bf r}_{j})}{\partial{\bf r}_{i}}+\dot{\gamma}{\bf e}_{x}y_{i}+{\bf f }_{i}, \tag{1}\]
with \(v({\bf r})=\frac{\epsilon}{2}(1-|{\bf r}|/d)^{2}\theta(d-|{\bf r}|)\) a purely repulsive soft potential, where \(d\) is the particle diameter and \(\theta(x)\) is the step function. The thermal bath is described through the stochastic force \({\bf f}_{i}=(f_{x,i},f_{y,i})\), which is a Gaussian white noise with zero mean and correlations given by \(\langle f_{\alpha,i}(t)f_{\beta,j}(t^{\prime})\rangle=2k_{B}T\zeta\delta(t-t^ {\prime})\delta_{ij}\delta_{\alpha\beta}\), where \(\langle\cdots\rangle\) is a statistical average, \(T\) is the temperature of the bath, \(k_{B}\) is the Boltzmann constant, and \(\alpha,\beta=x,y\). We measure lengths in units of the diameter \(d\), times in units of \(\tau_{0}=\zeta d^{2}/\epsilon\), and temperature in units of \(\epsilon/k_{B}\).
We study \(N\) harmonic soft disks in a rectangular box with area \(A=L_{x}L_{y}\), where \(L_{x}\) is the box length along the \(x\)-direction and \(L_{y}=\frac{\sqrt{3}}{2}L_{x}\) is the length along the \(y\)-direction. The ratio is chosen to accommodate the perfect hexagonal structure. The packing fraction \(\phi\) of the system is set to \(\phi=(N/A)\pi d^{2}/4=1.0\), for which the system has been shown to have a first-order hexatic-to-liquid transition [18] at \(T_{m,{\rm hex}}\simeq 0.0062\pm 0.0002\) in thermal equilibrium without applied deformation (\(\dot{\gamma}=0\)) [19]. Although the full equilibrium phase diagram of the model for \(\dot{\gamma}=0\) is not available, we note that the hexatic phase in soft-core potential models always appears in a narrow range of temperature (or density, but for power-law potentials the latter can easily be converted to temperature) which is a few percents of the transition temperature of the hexatic phase to the liquid [19; 20]: we therefore estimate the melting temperature of the solid to the hexatic phase to be \(T_{m,{\rm sol}}\gtrsim 0.0055\).
To implement the uniform simple shear, Lees-Edwards periodic boundary conditions are applied [21], and the equations of motion are integrated through the Euler scheme. We measure the shear stress component of the system, \(\sigma=\sigma_{xy}\), by using the Irving-Kirkwood formula [21]: see Appendix A. In the initial condition, particles are arranged in a hexagonal close-packed structure, which is then subjected to an applied shear at the chosen temperature based on Eq. (1). All the quantities presented in this paper are measured _in the steady state_ (after a long enough simulation time), except otherwise stated. We investigate a wide range of shear rate \(\dot{\gamma}\) and temperature \(T\), which covers most of the relevant physics of two-dimensional (\(2d\)) crystal flows and we study \(N=900\), \(3600\), \(14400\), and \(57600\) to check the finite-size effects.
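A minimal sketch of one Euler step of Eq. (1) for the harmonic disks is given below, in the reduced units of the paper (\(\zeta=d=\epsilon=k_{B}=1\)). Lees–Edwards image shifts and neighbor lists are omitted for brevity, so the snippet only illustrates the pair force, the affine shear drift, the thermal noise, and the configurational (virial) part of the Irving–Kirkwood shear stress, up to the sign convention.

```python
import numpy as np

def pair_forces_and_stress(pos, box, d=1.0, eps=1.0):
    """Harmonic repulsion v(r) = (eps/2)(1 - r/d)^2 for r < d.
    Returns forces and the virial part of sigma_xy (kinetic term absent in the overdamped setting)."""
    forces = np.zeros_like(pos)
    sigma_xy = 0.0
    n = len(pos)
    for i in range(n):                                 # O(N^2) loop, fine for a sketch
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            rij -= box * np.round(rij / box)           # minimum image (Lees-Edwards shift omitted)
            r = np.hypot(*rij)
            if r < d:
                f = (eps / d) * (1.0 - r / d) * rij / r   # repulsive force on particle i
                forces[i] += f
                forces[j] -= f
                sigma_xy += rij[0] * f[1]                 # configurational contribution
    return forces, sigma_xy / (box[0] * box[1])

def euler_step(pos, box, dt, T, gamma_dot, rng):
    """One Euler-Maruyama step of Eq. (1) in reduced units (zeta = k_B = 1)."""
    forces, sigma_xy = pair_forces_and_stress(pos, box)
    drift = forces.copy()
    drift[:, 0] += gamma_dot * pos[:, 1]               # affine drift from the imposed shear
    pos = pos + dt * drift + np.sqrt(2.0 * T * dt) * rng.standard_normal(pos.shape)
    pos %= box                                         # plain wrapping; LE bookkeeping omitted
    return pos, sigma_xy
```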
Note that we consider a Brownian (overdamped Langevin) dynamics which is appropriate for colloidal suspensions and is different from the previous simulation studies of sheared two-dimensional crystals that used a nonequilibrium molecular dynamics algorithm (SLLOD) [22; 23]. In the latter case there is an issue concerning the way the system is thermostated (kinetic or a configurational thermostat), which may influence some of the results [23]. This specific problem is absent in our Brownian dynamics simulations where temperature is introduced through a white noise. For completeness we have also carried out SLLOD dynamics simulations: the results are discussed in Appendix B.
The phase diagram of the simulated model in the non-equilibrium steady state is summarized in Fig. 1(a).
As the temperature \(T\) and the shear rate \(\dot{\gamma}\) are varied, the system can be found in three different regimes. _Regime I:_ At small \(\dot{\gamma}>0\) and small \(T\), we observe a plastic flow with the nucleation of free dislocations. Crystalline positional quasi-long-range order is then broken but hexatic quasi-long-range order persists. This is a flowing hexatic phase. A representative snapshot is shown in Fig. 1(b). Theories of \(2d\) crystals under shear [24; 25] can be applied in this regime, especially in the limit of infinitesimal \(\dot{\gamma}\) where they help in discussing if and how a perfect crystal flows [14; 15; 16]. _Regime II:_ As \(\dot{\gamma}\) or \(T\) is increased, there is a transition to a regime where the dislocations are unbound and free disclinations are nucleated. Thus, both positional and bond-orientational correlations have a short-ranged spatial decay (see a snapshot in Fig. 1(c)). This regime is a flowing liquid which appears rather isotropic. _Regime III:_ When \(\dot{\gamma}\) is further increased, the imposed shear rate dominates the dynamics and we find a cross-over to a string-like flow, in which the particle motion mostly follows lanes
in the direction of the imposed shear. This can be seen in the snapshot shown in Fig. 1(d) and in the associated inset, where some representative particle trajectories are displayed. In this regime the system is strongly anisotropic.
In the subsequent sections, we provide a detailed characterization of the three regimes.
## III Do two-dimensional crystals flow under an infinitesimal shear rate?
### Theoretical arguments
The fact that an infinitesimal shear stress destroys a solid phase by making it flow was theoretically established in full generality in Ref. [14]. The main idea is that a shear stress deforms a solid, thus inducing an extensive increase of the energy of the system. Such an excess energy can be relaxed at any finite temperature by _nucleating_ droplets of the undeformed solid within the deformed solid state. Applying this metastability-nucleation argument one can conclude that an infinitesimal shear stress always destabilizes a solid state. The drawback of this treatment is that it provides a possible mechanism for flow but not necessarily the most efficient one. Sengupta, Sollich, and coworkers [15; 16] have recently built on this approach. They have used thermodynamic arguments and predicted the presence of a nearby first-order transition between two crystals with the same symmetry but different mechanical response to evaluate the effective stress at which a perfect crystal typically yields, _i.e._, has its first plastic event, as a function of the shear rate. They have focused on the transient behavior in the limit \(\dot{\gamma}\to 0\). Here, we are more interested in the steady-state regime and in the specific mechanisms at play in \(2d\) crystalline solids.
Figure 1: (a): Phase diagram of a sheared two-dimensional crystal in its flowing steady state in the plane of the shear rate \(\dot{\gamma}\) and the temperature \(T\). Red dots indicate the phase points corresponding to the snapshots displayed in panels (b-d). In Regime I, we observe a plastic flow with nucleated free dislocations and hexatic quasi-long-range order (QLRO). A representative snapshot is shown in (b) for \(T=0.003\) and \(\dot{\gamma}=2\times 10^{-4}\). Blue, white, and red particles have 5, 6, and 7 neighbors, respectively, and a pair of red and blue particles form a dislocation. In Regime II, the dislocations are unbound and free disclinations, shown as isolated red and blue particles, are nucleated. Concomitantly, bond-orientational order has a short-ranged, exponential, spatial decay and the system is in a flowing liquid phase. A representative snapshot is given in (c) for \(T=0.003\) and \(\dot{\gamma}=1\times 10^{-2}\). One can see an isolated disclination with 7 neighbors, as indicated by a circle. In Regime III, we observe a string-like flow in which particles mostly move along lanes following the direction of shear. The system is then strongly anisotropic. The corresponding snapshot is shown in (d) for \(T=0.003\) and \(\dot{\gamma}=4\times 10^{-1}\) and an inset illustrates representative particle trajectories in the bulk of the system over a strain change \(\Delta\gamma=1.2\).
In the case of a \(2d\) crystal, the arguments can be made more explicit by pinpointing the underlying mechanism that gives rise to the instability of the solid state [24; 25]. The starting point is provided by the study of dislocations - the defects destroying quasi-long-range positional order - in the presence of shear stress. We here focus on the physics along the glide direction (shear direction) which is a more dominant (faster) process than the physics along the climb direction (perpendicular to the shear direction). In a \(2d\) crystal without shear there are no free dislocations. The reason is that a pair formed by a dislocation and an anti-dislocation (_i.e._, a dislocation of opposite Burgers vector) at a distance \(r\) is subjected to an effective attraction through a potential \(U_{0}(r)\) (without shear). This potential increases logarithmically at large \(r\) as \(U_{0}(r)=\frac{K\alpha_{0}^{2}}{4\pi}\mathrm{ln}(r/a_{0})\), where \(a_{0}\) is the inter-particle distance (or lattice constant) and \(K\) an effective elastic constant. In the presence of a shear stress \(\sigma\), the pair of dislocations is submitted to an additional force in the glide direction so that the effective potential becomes:
\[U(r)=U_{0}(r)-a_{0}(r-a_{0})\sigma. \tag{2}\]
Even for a very small stress \(\sigma\), the potential now favors unbinding of the dislocations as the linear term prevails over the logarithmic attraction: \(U(r)\) diverges to minus infinity for \(r\to\infty\). The competition between logarithmic attraction and linear repulsion leads to a finite energy barrier \(\Delta U=U(r_{c})-U(a_{0})\) with \(r_{c}=Ka_{0}/(4\pi\sigma)\), thus making unbinding at nonzero temperature a thermally activated process. By computing the barrier and assuming an Arrhenius-type law one can obtain at leading order the rate \(R\) per unit time and unit area for the dissociation of a pair of dislocations and the ensuing formation of free dislocations [24; 25],
\[R\sim\frac{D_{||}}{a_{0}^{4}}\left(\frac{\sigma a_{0}^{2}}{k_{B}T}\right)^{ \frac{Ka_{0}^{2}}{4\pi k_{B}T}}e^{-2E_{c}/k_{B}T}\,, \tag{3}\]
where \(D_{||}\) is the diffusion constant in the glide direction and \(E_{c}\) a microscopic energy scale. The important (and leading) term in this expression is associated to the power-law dependence in \(\sigma\).
Due to this mechanism, at any nonzero temperature and for an arbitrarily small shear stress, a finite (albeit very small) density of free dislocations \(\rho_{\mathrm{disl}}\) is produced, thus destroying the quasi-long-range positional order. The rate equation for \(\rho_{\mathrm{disl}}\) is written as
\[\frac{\partial\rho_{\mathrm{disl}}}{\partial t}=R-\langle v\rangle r_{c}\rho_{ \mathrm{disl}}^{2}, \tag{4}\]
where \(\langle v\rangle\) is the mean velocity of free dislocations in the glide direction, driven by the shear stress \(\sigma\). The second term in Eq. (4) treats the recombination process approximately [24]. In the steady state, \(\rho_{\mathrm{disl}}\) is given by
\[\rho_{\mathrm{disl}}=\sqrt{\frac{R}{\langle v\rangle r_{c}}}. \tag{5}\]
Free dislocations are expected to undergo Brownian motion under the external force exerted by the shear, and hence, using the Einstein relation, \(\langle v\rangle\) is given by
\[\langle v\rangle=a_{0}\sigma D_{||}/(k_{B}T). \tag{6}\]
A moving dislocation also leads to deformation of the solid. The associated strain rate is proportional to the density of dislocations [26],
\[\dot{\gamma}\sim\rho_{\mathrm{disl}}\langle v\rangle. \tag{7}\]
One combines Eqs. (3,5,6,7) and arrives at a relation between the strain rate \(\dot{\gamma}\) and shear stress \(\sigma\),
\[\dot{\gamma}\sim D_{||}\left(\frac{\sigma a_{0}^{2}}{k_{B}T}\right)^{\frac{Ka_ {0}^{2}}{8\pi k_{B}T}+1}\,. \tag{8}\]
The viscosity is defined as \(\eta=\sigma/\dot{\gamma}\), and thus one finds
\[\eta\sim\eta_{0}\left(\frac{\sigma a_{0}^{2}}{k_{B}T}\right)^{-\frac{Ka_{0}^{ 2}}{8\pi k_{B}T}}\,, \tag{9}\]
where \(\eta_{0}\) is a constant with dimension of viscosity. The two expressions in Eqs. (8,9) can be combined to give
\[\log\left(\frac{\eta}{\eta_{0}}\right)\sim-\frac{1}{1+(8\pi k_{B}T)/(Ka_{0}^{ 2})}\log\dot{\gamma}+\mathrm{O}(1). \tag{10}\]
These equations show that an infinitesimal shear stress indeed leads to plastic flow of a crystal and to a very large but finite viscosity. The behavior of the viscosity is however singular. It diverges when \(\sigma\to 0\) or \(\dot{\gamma}\to 0\), contrary to what happens for a liquid in which a finite value of the viscosity is reached when \(\sigma\to 0\).
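
To make these scaling predictions easy to evaluate, the following short Python sketch computes the exponent of Eq. (10) and the flow curve implied by Eq. (8) for a few illustrative values of \(Ka_{0}^{2}/(k_{B}T)\). The parameter values are placeholders chosen for illustration only and are not extracted from the simulations.

```python
import numpy as np

def viscosity_exponent(K_a0sq_over_kT):
    """Predicted slope of log(eta) versus log(gamma_dot), Eq. (10)."""
    return -1.0 / (1.0 + 8.0 * np.pi / K_a0sq_over_kT)

def flow_curve_stress(gamma_dot, K_a0sq_over_kT):
    """Shear stress implied by Eq. (8), gamma_dot ~ sigma**(K a0^2/(8 pi kT) + 1),
    up to an undetermined prefactor (set to 1 here)."""
    exponent = K_a0sq_over_kT / (8.0 * np.pi) + 1.0
    return gamma_dot ** (1.0 / exponent)

for x in (50.0, 100.0, 200.0):   # illustrative values of K a0^2 / (k_B T)
    print(f"K a0^2/kT = {x:6.1f}: d log(eta)/d log(gamma_dot) = {viscosity_exponent(x):+.3f}")
```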
### Numerical results
We first measure the averaged shear stress \(\overline{\sigma}\), where the overline denotes an average over time (or strain \(\gamma\)) and over independent trajectories in the steady state, as a function of the imposed shear rate \(\dot{\gamma}\). The outcome is displayed on a log-log plot in Fig. 2(a) for more than three orders of magnitude of \(\dot{\gamma}\) and a wide range of temperatures from \(T=0.0001\) to \(0.0080\), which covers the solid to liquid phases found at \(\dot{\gamma}=0\) (see above).
The flow curves at the lowest temperatures, \(T=0.0001\) and \(0.0010\), show a plateau at the smallest values of \(\dot{\gamma}\) which indicates an apparent nonzero yield stress within our simulation time window. However, for the intermediate temperatures, \(T=0.0030\) and \(0.0050\), which are still below the estimated \(T_{m,\mathrm{sol}}\) and thus correspond to a solid phase when \(\dot{\gamma}=0\), one clearly observes a steady decay of \(\overline{\sigma}\) with decreasing \(\dot{\gamma}\), as better seen in the zoomed-in plot of Fig. 2(b). Below some crossover shear-rate value, this decay is roughly linear on the log-log plot with a slope that decreases as \(T\) decreases. This is compatible with the theoretical prediction in Eq. (8), which
implies that \(\log\overline{\sigma}\sim[1+Ka_{0}^{2}/(8\pi k_{B}T)]^{-1}\log\dot{\gamma}\) (but the data is not good enough to provide a meaningful extraction of the parameters), and supports the absence of a nonzero yield stress in the limit \(\dot{\gamma}\to 0\). As \(T\) is increased further, \(\overline{\sigma}\) decreases rapidly with decreasing \(\dot{\gamma}\): one then enters the Newtonian fluid regime with no yield stress, as shown for instance in Fig. 2(a) for \(T=0.0080\).
To obtain a complementary picture we also plot the effective viscosity \(\eta=\overline{\sigma}/\dot{\gamma}\) in Fig. 3(a). At low and intermediate temperatures, \(T=0.0001-0.0050\), the data is well described by a power-law divergence at small \(\dot{\gamma}\), \(\eta\sim\dot{\gamma}^{-\alpha}\). As a consequence of the behavior of \(\overline{\sigma}\) just described, we find that \(\alpha=1\) for the two lowest temperatures because of the apparent nonzero plateau found in \(\overline{\sigma}\) within the simulation range, but \(\alpha\) slightly deviates from 1 for the two intermediate temperatures, in agreement with a vanishing yield stress and as expected from Eq. (10). At the highest temperature (\(T=0.0080\)), \(\eta\) saturates toward a finite value, as expected for a Newtonian fluid. (At high shear rates the system displays shear thinning with a viscosity that decreases with increasing \(\dot{\gamma}\) at all temperatures.) All the above results are illustrated for \(N=14400\) but they depend only weakly on system size: see Appendix C. We also confirmed the absence of a yield stress and the divergence of the viscosity in the SLLOD dynamics (see Appendix B).
According to the theoretical arguments recalled in the previous subsection, the plastic flow of a \(2d\) crystal is driven by the nucleation of free dislocations induced by the stress (or the shear rate) and corresponding to the unbinding of dislocation/anti-dislocation pairs. The motion of the free dislocations relaxes the shear stress and it is more specifically predicted that the effective viscosity is inversely proportional to the density of free dislocations, \(\eta=\sigma/\dot{\gamma}\sim\rho_{\rm disl}^{-1}\) by using Eqs. (7,6). This is what leads to Eqs. (9,10). To more directly test the relation between the viscosity \(\eta\) and the density of free dislocations \(\rho_{\rm disl}\), we have determined the latter numerically, as explained in Appendix D. We show in Fig. 3(b) a log-log plot of \(\eta\) as a function of \(\rho_{\rm disl}\). We find that data at different temperatures roughly collapse, and, although not perfect, a behavior compatible with \(\eta\sim\rho_{\rm disl}^{-1}\) at high \(\eta\) (or low \(\dot{\gamma}\)) is observed. This provides evidence that the mechanism for the divergence of the viscosity when \(\dot{\gamma}\to 0\) is indeed the
Figure 2: Flow curves for a crystal of \(N=14400\) particles under uniform simple shear. (a): Log-log plot of the averaged shear stress \(\overline{\sigma}\) versus the shear rate \(\dot{\gamma}\) for several temperatures. (b): Zoom-in plot of panel (a).
Figure 3: (a): Log-log plot of the effective viscosity \(\eta=\overline{\sigma}/\dot{\gamma}\) as a function of the shear rate \(\dot{\gamma}\) for the same data as in Fig. 2(a). The dashed straight line shows the dependence \(\eta\sim\dot{\gamma}^{-1}\). (b): Log-log plot of the effective viscosity as a function of the density of dislocations \(\rho_{\rm disl}\). The dashed line corresponds to \(\eta\sim\rho_{\rm disl}^{-1}\).
rarefaction of nucleated free dislocations. At lower \(\eta\) or higher \(\dot{\gamma}\), the data show a nonmonotonic dependence and the theoretical arguments no longer apply, as expected.
## IV Regime I: flowing hexatic phase
### Evidence for a hexatic phase and a shear-induced transition to a liquid phase
We have seen that the crystalline solid at \(\dot{\gamma}\to 0\) yields and flows as soon as an infinitesimal shear rate is imposed, due to the nucleation of free dislocations. These free dislocations also disrupt the positional quasi-long-range order. Shear-induced melting of the crystal therefore takes place as soon as \(\dot{\gamma}\neq 0\). The question that remains is whether the flowing phase is a liquid with exponentially decaying translational and bond-orientational spatial correlations or an intermediate hexatic phase retaining quasi-long-range bond-orientational order.
We characterize the structural properties of the flowing phase by using the local 6-fold bond-orientational order parameter,
\[\phi_{6,j}=\frac{1}{n_{j}}\sum_{k=1}^{n_{j}}e^{6i\theta_{jk}}, \tag{11}\]
where the sum is over the \(n_{j}\) neighbors of particle \(j\) that are determined through a Voronoi tessellation and \(\theta_{jk}\) is the angle between the vector joining particle \(j\) with particle \(k\) and the (arbitrarily chosen) \(x\)-axis. From \(\phi_{6,j}\) we compute the volume-averaged bond-orientational order parameter \(\psi_{6}=(1/N)\sum_{j=1}^{N}\phi_{6,j}\) and the 6-fold bond-orientational spatial correlation function \(g_{6}(r)\): see Appendix F for more details.
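
In practice, \(\phi_{6,j}\) and \(\psi_{6}\) can be obtained directly from a Voronoi construction. The Python sketch below (using scipy) illustrates the computation; for brevity it ignores periodic images, so particles near the box edges are treated only approximately, and it is not the analysis code used for the figures.

```python
import numpy as np
from scipy.spatial import Voronoi

def local_phi6(points):
    """Local 6-fold bond-orientational order parameter phi_{6,j}, Eq. (11).
    Neighbors are the particles sharing a Voronoi edge; periodic images are ignored."""
    vor = Voronoi(points)
    phi6 = np.zeros(len(points), dtype=complex)
    n_nb = np.zeros(len(points), dtype=int)
    for i, j in vor.ridge_points:          # pairs of particles sharing a Voronoi edge
        theta = np.arctan2(points[j, 1] - points[i, 1], points[j, 0] - points[i, 0])
        # the reverse bond has angle theta + pi, which carries the same 6-fold phase
        phi6[i] += np.exp(6j * theta)
        phi6[j] += np.exp(6j * theta)
        n_nb[i] += 1
        n_nb[j] += 1
    return phi6 / np.maximum(n_nb, 1)

def global_psi6(points):
    """Volume-averaged order parameter psi_6 = (1/N) sum_j phi_{6,j}."""
    return local_phi6(points).mean()
```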
We display in Fig. 4(a) the averaged square modulus of the bond-orientational order parameter \(\overline{|\psi_{6}|^{2}}\) versus \(\dot{\gamma}\) for various temperatures and system sizes. For all temperatures in the solid and hexatic phases of the quiescent system (\(\dot{\gamma}=0\)), _i.e._, for \(T<T_{\rm m,hex}\approx 0.0062\), one finds that \(\overline{|\psi_{6}|^{2}}\) decreases, first slowly and then quite rapidly, as the shear rate increases, and reaches a minimum before rising up again. However, one has to be careful about finite-size effects. Except in the solid phase below \(T_{\rm m,sol}\) at \(\dot{\gamma}=0\), one indeed expects that \(\overline{|\psi_{6}|^{2}}=0\) in the thermodynamic limit once the solid flows and free dislocations appear. As in the equilibrium hexatic phase, we expect that only quasi-long-range bond-orientational order can be present. One then anticipates a dependence on the linear system size of the form \(\overline{|\psi_{6}|^{2}}\sim L^{-\eta_{6}}\). Assuming that this flowing hexatic phase shares the same properties as its equilibrium counterpart, one would then expect \(\eta_{6}\) to be a temperature-dependent anomalous dimension such that \(\eta_{6}\leq 0.25\) [27]. (Here, we make no difference between \(L_{x}\) and \(L_{y}\) because we have chosen them proportional to each other.) On the other hand, in an isotropic liquid phase with only short-range order, \(\overline{|\psi_{6}|^{2}}\) should decrease much more rapidly with system size, possibly as \(L^{-1}\) because the boundaries break the isotropy of space.
We indeed observe that at the smallest \(\dot{\gamma}\), below some value that appears to decrease as temperature increases (but still stays below \(T_{m,\rm sol}\)), very little change of \(\overline{|\psi_{6}|^{2}}\) takes place for the system sizes under study, whereas at and around the minimum of \(\overline{|\psi_{6}|^{2}}\) a visible decrease is found. As shown in Fig. 13 of Appendix C, the minimum, \(\min_{\dot{\gamma}}\{\overline{|\psi_{6}|^{2}}\}\), always decreases more rapidly than \(L^{-1/4}\) (and more so as \(T\) increases, presumably because the system sizes are too small to reach the asymptotic regime at the lowest temperatures). For \(T=0.0062\), which is around \(T_{m,\rm hex}\), the finite-size effects are strong even at low \(\dot{\gamma}\), and for the highest temperature, which always corresponds to a liquid phase, \(\overline{|\psi_{6}|^{2}}\) is zero, at least up to a shear rate \(\dot{\gamma}\sim 10^{-1}-10^{0}\). The data therefore indicate that a transition from a flowing hexatic phase to a liquid phase occurs at a shear rate that decreases as the temperature
Figure 4: (a): Averaged square modulus of the bond-orientational order parameter, \(\overline{|\psi_{6}|^{2}}\), as a function of shear rate for various temperatures and system sizes. Triangles (with dotted-line), diamonds (dashed-line), and circles (solid-line) correspond to data for \(N=900\), \(3600\), and \(14400\), respectively. (b): Spatial decay of the bond-orientational correlation function \(g_{6}(r)\) for \(T=0.0030\), \(N=14400\), and a wide range of \(\dot{\gamma}\). The grey dashed line represents the bound imposed on a power-law decay by the KTHNY theory, \(g_{6}(r)\sim r^{-1/4}\).
increases: This is the transition line between regimes I and II shown in Fig. 1(a).
The above results are also confirmed by looking at the bond-orientational correlation function \(g_{6}(r)\). In Fig. 4(b), we illustrate the outcome for \(T=0.003\) and a wide range of shear rates; the results for all temperatures are given in Appendix F. For the lowest rates \(g_{6}(r)\) decays very slowly, as a power law \(g_{6}(r)\sim r^{-\eta_{6}}\). The exponent of the power law increases with \(\dot{\gamma}\) and reaches the upper bound predicted by the KTHNY theory of the hexatic phase, _i.e._, \(\eta_{6}=0.25\), for some value slightly above \(2\times 10^{-3}\). This suggests that the non-equilibrium transition at which the hexatic order is lost is in the same universality class as its equilibrium counterpart. For larger values, above \(\dot{\gamma}=4\times 10^{-3}\), \(g_{6}(r)\) decays quickly, with an exponential rather than a power-law form. The passage from a power-law decay to an exponential decay is characteristic of a transition from quasi-long-range order to no order. This locates the transition between regimes I and II. Note that when \(\dot{\gamma}\) increases further, typically above \(10^{-1}\), \(g_{6}(r)\) reaches a nonzero plateau at large distances suggesting the appearance of long-range bond-orientational order, but this will be discussed in the next section concerning regime III.
The disappearance of quasi-long-range bond-orientational order is due to the unbinding of dislocations and to the resulting appearance of free disclinations. This can be tested by identifying and characterizing the latter: see Appendix D. In Fig. 5, we report for various temperatures and values of the shear rate the probability \(p_{\rm disc}\) of finding at least one disclination in the sample during the plastic flow. It is zero when the system is in Regime I, which corresponds to a flowing hexatic phase with no free disclinations. At a rather well defined \(\dot{\gamma}\) the probability jumps to a value of 1 (or nearly 1 for the lowest temperatures) and the system is now in a (flowing) liquid phase. The onset of the jump corresponds to the boundary between regimes I and II shown in Fig. 1(a).
By studying the 6-fold bond-orientational order and the emergence of free disclinations (which are defects in this order) we have identified a transition between Regime I, which can be described as a flowing hexatic phase, and Regime II, which corresponds to a flowing liquid phase. This is in line with the findings of previous numerical simulations [22; 23] and experiments [5; 7] on \(2d\) sheared crystals. However, we are not able to determine whether the transition is continuous or first-order-like (as argued in Ref. [23]). Settling this point would require further investigation with substantial computational effort.
### Rotating crystals
In Regime I where quasi-long-range bond orientational order is present we have also studied the dynamics of the system in the steady state at fixed shear rate \(\dot{\gamma}\). We have monitored the evolution with strain \(\gamma\) (which parametrizes time) of several quantities. As previously observed in a simulation [22] and an experimental [5] study of a sheared \(2d\) crystal, we find evidence for a coherent rotation of hexagonal crystalline domains. Their size scales like the system size and, as argued above and further below, the phenomenon should therefore be taken as a finite-size effect that would likely not persist in this form in the thermodynamic limit.
We first consider the (instantaneous, _i.e._, not time averaged) 6-fold bond-orientational order parameter, whose real part \(\Re\{\psi_{6}\}\) is shown as a function of \(\gamma\) in Fig. 6. One can see a clear oscillating behavior between a positive maximum value and a negative minimum one. The period \(\gamma^{*}\) of the oscillations can be estimated from a simple argument. Consider a hexagonal lattice that coherently rotates in a periodic box when the box is sheared at a rate \(\dot{\gamma}\). The corresponding bond-orientational order parameter \(\psi_{6}\) then periodically oscillates with a period \(\tau^{*}\) which is such that \(\tau^{*}\dot{\gamma}/2=\pi/3\). As by definition \(\gamma^{*}=\dot{\gamma}\tau^{*}\), this immediately gives
\[\gamma^{*}=\frac{2\pi}{3}\approx 2, \tag{12}\]
which indeed captures well the oscillation period shown in Fig. 6.
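
This simple geometric prediction is easy to check numerically. The toy Python sketch below generates the idealized signal \(\Re\{\psi_{6}\}=\cos(3\gamma)\) expected for a rigidly rotating hexagonal lattice (rotation rate \(\dot{\gamma}/2\)) and measures the spacing between successive maxima, recovering \(\gamma^{*}=2\pi/3\); it is only an illustration of Eq. (12), not an analysis of the simulation data.

```python
import numpy as np

gamma = np.linspace(0.0, 10.0, 2001)
re_psi6 = np.cos(3.0 * gamma)   # rigid rotation at rate gamma_dot/2 gives psi_6 = exp(3i*gamma)

# locate interior maxima and measure the spacing between them
peaks = np.where((re_psi6[1:-1] > re_psi6[:-2]) & (re_psi6[1:-1] > re_psi6[2:]))[0] + 1
print("measured period:", np.diff(gamma[peaks]).mean(), "   2*pi/3 =", 2.0 * np.pi / 3.0)
```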
The rotation can also be directly seen by looking at the evolution of a given sample: real-space snapshots are displayed in the top panels of Fig. 7. Particles are colored according to the value of the real part of the local bond-orientational order parameter \(\phi_{6,j}\). When \(\Re\{\phi_{6,j}\}=1\), the local environment of a particle is that of a perfect hexagonal triangular lattice with direction parallel to the \(x\)-axis, while when \(\Re\{\phi_{6,j}\}=-1\), the orientation of the surrounding environment is rotated by an angle of \(\pi/2\). The periodic appearance of red (large positive \(\Re\{\phi_{6,j}\}\)) and blue (large negative \(\Re\{\phi_{6,j}\}\)) regions indicates that the solid flows with a coherent rotation.
Another signature of coherently rotating crystalline domains is obtained by considering the instantaneous
Figure 5: Probability to find at least one free disclination in the system, \(p_{\rm disc}\), as a function of \(\dot{\gamma}\) and \(T\) for \(N=14400\) particles.
static structure factor \(S_{\gamma}(\mathbf{k})\) measured from each snapshot [22; 5]. It is defined as
\[S_{\gamma}(\mathbf{k})=\frac{1}{N}\sum_{j,k=1}^{N}e^{i\mathbf{k}\cdot(\mathbf{r}_ {j}-\mathbf{r}_{k})}, \tag{13}\]
where \(\mathbf{k}=(k_{x},k_{y})=(2\pi n_{x}/L_{x},2\pi n_{y}/L_{y})\), with \(n_{x},n_{y}\) integers, consistently with the imposed periodic boundary condition. In the solid phase in thermal equilibrium, this function shows six peaks in the \((k_{x},k_{y})\) plane that are located on the vertices of a regular hexagon. In the bottom panels of Fig. 7 one can see that the 6-fold pattern rotates while the deformation proceeds, indicating that the local environment of each particle is coherently rotated during the flow. As already mentioned such a crystal rotation has been observed in two-dimensional colloid experiments [5] and a SLLOD molecular-dynamics simulation [22]. It was also recently predicted as a consequence of dislocation nucleation in a mesoscopic athermal model [28].
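
The double sum in the definition factorizes over the two Cartesian directions, which makes the evaluation on the wavevectors allowed by the periodic box straightforward. The following Python sketch shows one way to compute \(S_{\gamma}(\mathbf{k})\) from a single snapshot; the grid half-width \(n_{\max}\) is an arbitrary illustrative choice.

```python
import numpy as np

def structure_factor(points, Lx, Ly, n_max=20):
    """Instantaneous structure factor S_gamma(k) on k = (2*pi*nx/Lx, 2*pi*ny/Ly),
    |nx|, |ny| <= n_max. Uses S(k) = |sum_j exp(-i k . r_j)|^2 / N."""
    N = len(points)
    n = np.arange(-n_max, n_max + 1)
    kx = 2.0 * np.pi * n / Lx
    ky = 2.0 * np.pi * n / Ly
    Ax = np.exp(-1j * np.outer(kx, points[:, 0]))   # shape (len(kx), N)
    Ay = np.exp(-1j * np.outer(ky, points[:, 1]))   # shape (len(ky), N)
    rho_k = Ax @ Ay.T                               # rho_k[ix, iy] = sum_j exp(-i k . r_j)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    return KX, KY, np.abs(rho_k) ** 2 / N
```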
Several comments are in order. First, the oscillations are not quite symmetric between the vicinity of the maxima of \(\Re\{\psi_{6}\}\) and that of the minima (see Fig. 6). The rotation is faster and the absolute value is smaller near the minima, which corresponds to the situation where the crystal-like domains are oriented perpendicularly to the shear direction (see also the experimental result in Ref. [5]). Second, the overall coherence of crystal rotation does not mean that the particles themselves rotate coherently as they can escape the crystalline structure and be replaced by other ones. Finally, we recall once again that a rotating crystal, characterized by a nonzero bond-orientational order parameter, even an instantaneous one, is likely a finite-size effect.
Interestingly, we observe an oscillating behavior also in the instantaneous value of the bond-orientational correlation function, \(g_{6,\gamma}(r)\), as shown in Fig. 8. This correlation function passes from an increasingly steep power-law decay to an exponential one, coming back to the power-law decay at the end of one period. This suggests that the flow of the rotating solid proceeds through a transient melting of the sample. This is similar to what was found experimentally on sheared colloids [5]. The average value of the correlation function across one oscillation period nevertheless displays a power-law decay (see the dashed line in Fig. 8), suggesting that only quasi-long-range bond-orientational order is present in instantaneous configurations in the thermodynamic limit.
## V Crossover to string-like flow
The isotropic flowing liquid phase (Regime II) appears rather narrow at low temperature and widens as \(T\) is increased, as seen from Figs. 1(a) and 4(a). Indeed, upon further increase of \(\dot{\gamma}\), the imposed shear dominates the dynamics of the system and one finds a crossover to a situation in which particles in the steady state flow along bands parallel to the shear direction. This leads to a string-like flow (Regime III), as seen in the real-space snapshot of Fig. 1(d). The effect of an increased shear rate on the ability of particles to diffuse in the direction perpendicular to the shear is presented in Fig. 9, where we plot the mean square displacement in the \(y\) direction as a function of strain for a fixed temperature \(T=0.0030\) and two different shear rates. While the mean square displacement grows linearly for the small shear rate (which corresponds to the flowing hexatic phase of Regime I) as expected for a diffusive motion, it is virtually constant for the large shear rate corresponding to the string-like flow of Regime III.
Several signatures of the new regime are found in the structure. One can see from Fig. 4(a) that the averaged square modulus of the bond-orientational order parameter starts to increase again to nonzero values (with virtually no system-size dependence). Accordingly, the bond-orientational correlation function reaches a nonzero plateau at large distances: see Fig. 4(b). One can also look at the radial distribution function (averaged over all directions) \(g(r)\). It is plotted for \(T=0.0030\) for several \(\dot{\gamma}\) covering all three regimes in Fig. 10. For the smallest \(\dot{\gamma}\), \(g(r)\) quickly decays to one, as expected from the lack of positional order in Regimes I and II. However, for \(\dot{\gamma}\gtrsim 8\times 10^{-2}\), a series of ripples appear, which persist up to the system size. More data are presented in Appendix E, which allows us to estimate the crossover line between regimes II and III as a function of temperature. The obtained phase boundary is shown in Fig. 1(a).
Note that the ripples in \(g(r)\) do not imply positional order characteristic of a crystal. It instead signals that the flow is organized in parallel bands along the shear direction. Beyond the real-space snapshots, this is supported by the study of the transverse static structure factor that probes the ordering of the particles in the di
Figure 6: Real part of the bond-orientational order parameter \(\Re\{\psi_{6}\}\) obtained in a single trajectory as a function of the shear strain \(\gamma\) for a fixed shear rate \(\dot{\gamma}=1\times 10^{-3}\) and temperature \(T=0.0030\) (corresponding to Regime I). Different system sizes \(N\) are shown.
rection orthogonal to the flow. As illustrated in Fig. 20 of Appendix G, this clearly shows an organization of the particles in bands of width roughly equal to the particle size, in agreement with the visualization provided by Fig. 1(d).
The regime of string-like flow is highly anisotropic. This is what explains the nonzero value of the 6-fold bond-orientational order parameter presented in Fig. 4(a). This is confirmed by the study of another bond-orientational order parameter, _e.g._, that associated with cubic (4-fold) symmetry,
\[\psi_{4}=\frac{1}{N}\sum_{j=1}^{N}\frac{1}{n_{j}}\sum_{k=1}^{n_{j}}e^{4i\theta_ {jk}}. \tag{14}\]
We plot in Fig. 11 the averaged square modulus of \(\psi_{4}\) as a function of the shear rate \(\dot{\gamma}\) for several system sizes and a temperature \(T=0.0030\). One can clearly see that the flowing system ceases to be isotropic (even if there might be a shear-induced small distortion of the structure [29; 30] possibly associated with the boundaries and leading
Figure 8: Instantaneous value of the bond-orientational correlation function \(g_{6,\gamma}(r)\) for several values of the strain \(\gamma\) (solid colored lines) and its value averaged over a period (black dashed line) for a system of \(N=14400\) particles at \(T=0.0030\) and \(\dot{\gamma}=10^{-3}\) (Regime I).
Figure 7: Crystal-like rotation as seen from real-space snapshots (top panels) and the associated instantaneous static structure factor \(S_{\gamma}(\mathbf{k})\) (bottom panels) for strain values \(\gamma=9.0,9.7,10.1\), and \(10.5\) (from left to right) which correspond to the maximum, the decreasing section, the minimum and the increasing section of the oscillation shown in Fig. 6. The snapshots are colored according to the value of the real part of the local bond-orientational order parameter, \(\Re\{\phi_{6,j}\}\). The system size is \(N=3600\), the temperature \(T=0.0030\), and the shear rate \(\dot{\gamma}=1\times 10^{-3}\) (Regime I).
Figure 9: Mean square displacement \(\Delta y^{2}(\gamma)\) along the direction perpendicular to the shear for one trajectory in the steady state as a function of the strain \(\gamma\) for two different shear rates, \(\dot{\gamma}=10^{-3}\) (a) and \(\dot{\gamma}=0.8\) (b), at a temperature \(T=0.0030\). \(\gamma\) is measured from a configuration in the steady state. The top panel corresponds to Regime I and the bottom one to Regime III.
to the small finite-size effect seen in the figure) around \(\dot{\gamma}\sim 10^{-1}\), which corresponds to the beginning of Regime III (see Fig. 1(a)).
The existence of a string-like regime of flow has also been reported in a \(2d\) colloid experiment at higher shear rate [5]. On the other hand, it has not been found in molecular dynamics simulations up to rates of the order of \(10^{-1}\) [22; 23]. Inertial effects, which are absent in colloidal systems and in our Brownian dynamics simulations, therefore appear to suppress the string-like organization of the flow at high shear rate.
## VI Conclusion
We have given a unified description of a two-dimensional crystal under a constant shear rate, starting from the detailed account of how a perfect equilibrium solid yields and flows when an infinitesimal shear rate is imposed and then mapping out the whole phase diagram of the flowing steady state in the plane formed by temperature and shear rate. In doing so, we have carefully discussed the role of the topological defects (dislocations and disclinations) and of the finite-size effects.
Shear-induced melting of the \(2d\) crystal proceeds in two steps: the solid loses long-range bond-orientational order and flows for an infinitesimal shear rate (in the thermodynamic limit) and the resulting flowing hexatic phase then melts to a flowing (rather isotropic) liquid at a finite shear rate that depends on temperature. Finally, at high shear rate, a third regime corresponding to a strongly anisotropic string-like flowing phase appears. We note that contrary to what has been suggested [5] the phase diagram does not seem to be controlled by a single dimensionless parameter such as the Peclet number, which for Brownian dynamics is simply proportional to \(\dot{\gamma}/T\). Indeed, one can see from Fig. 1(a) that a large \(\dot{\gamma}\) and a small \(T\) do not have the same effect so that for the same ratio the system can be found in any of the three regimes.
What remains to be done in two dimensions is a precise characterization of the nature of the transition from the flowing hexatic to the flowing liquid. This would require using much larger system sizes to check whether the transition is continuous or rather first-order-like with a coexistence between the two different flowing phases [18; 20]. In the case of a continuous transition, it is important to determine whether the universality class is the same as that of the equilibrium case. Beyond this, an obvious extension is to investigate yielding and shear melting of three-dimensional crystals (for a review, see Ref. [30]), which have been theoretically shown to flow at infinitesimal shear rate in the thermodynamic limit [14; 15; 16] but for which no intermediate hexatic-like phase exists in equilibrium. Finally, it would be interesting to study how the flow properties of crystals identified in this paper change and converge to the rheology of amorphous materials [31] when size polydispersity is introduced systematically [32; 33], or whether the connection made between the mechanical properties of dense active matter and sheared amorphous solids [34] carries over to crystalline phases.
###### Acknowledgements.
We thank J. Sethna for discussions. This work was supported by the Simons Foundation Grant No. 454935 (G.B.).
Figure 11: Averaged square modulus of the 4-fold bond-orientational order parameter, \(\overline{|\psi_{4}|^{2}}\), as a function of shear rate for a temperature \(T=0.0030\) and several system sizes. Triangles, diamonds, and circles correspond to data for \(N=900\), \(3600\), and \(14400\), respectively.
Figure 10: Radial distribution function \(g(r)\) for a system of \(N=14400\) particles at a temperature \(T=0.0030\) and for several shear rates \(\dot{\gamma}\) covering the three regimes of flow. The data for different \(\dot{\gamma}\) are shifted along the \(y\)-axis for clarity.
## Appendix A Shear stress measurement
We measure the \(xy\) component of the stress tensor denoted as \(\sigma\) by using the Irving-Kirkwood formula [35] for the overdamped Brownian Dynamics,
\[\sigma=-\frac{1}{A}\sum_{i,j}x_{ij}\left(\frac{\partial v(\mathbf{r}_{ij})}{ \partial\mathbf{r}_{ij}}\right)_{y}, \tag{10}\]
where \(A=L_{x}L_{y}\) is the area of the system, \(x_{ij}=x_{i}-x_{j}\), with \(x_{i}\) the position of particle \(i\) along the \(x\)-axis (according to the minimum image convention), and \(-(\partial v/\partial\mathbf{r}_{ij})_{y}\) is the \(y\) component of the force exerted by particle \(j\) onto particle \(i\). Note that when evaluating the distance \(\mathbf{r}_{ij}\) we take into account the periodic boundary condition and the minimum image convention. We recall that \(x\) is the direction of the imposed shear.
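
For a pair potential \(v(r)\), the expression above reduces to a sum over distinct pairs of \(-v^{\prime}(r_{ij})\,x_{ij}y_{ij}/r_{ij}\). The Python sketch below evaluates this pair (virial) contribution with the minimum image convention; the \(O(N^{2})\) loop and the purely repulsive \(r^{-12}\) potential used in the usage line are simplifications for illustration, not the production code or the actual interaction potential of the model.

```python
import numpy as np

def shear_stress_xy(points, box, dv_dr, r_cut):
    """Pair (virial) contribution to the xy shear stress for a pair potential v(r)
    with radial derivative dv_dr(r). Distinct pairs only; minimum image convention."""
    area = box[0] * box[1]
    total = 0.0
    N = len(points)
    for i in range(N - 1):
        d = points[i] - points[i + 1:]          # r_i - r_j for all j > i
        d -= box * np.round(d / box)            # minimum image
        r = np.hypot(d[:, 0], d[:, 1])
        m = r < r_cut
        # each pair contributes -v'(r) * x_ij * y_ij / r
        total -= np.sum(dv_dr(r[m]) * d[m, 0] * d[m, 1] / r[m])
    return total / area

# illustrative usage with a purely repulsive r^-12 pair potential (placeholder choice)
dv_dr = lambda r: -12.0 * r ** (-13)
```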
When we use the SLLOD dynamics (see Appendix B for details), the shear stress \(\sigma_{\text{SLLOD}}\) contains an extra term due to momentum flow:
\[\sigma_{\text{SLLOD}}=\sigma+\frac{1}{A}\sum_{i}\frac{p_{x,i}p_{y,i}}{m}, \tag{11}\]
where \(\mathbf{p}_{i}=(p_{x,i},p_{y,i})\) is the momentum of particle \(i\) (see Eq. (12)).
## Appendix B Results from nonequilibrium SLLOD molecular dynamics simulations
In order to confirm the genericness of the conclusions in the main text, in particular, the absence of a yield stress and the divergence of the effective viscosity when \(\dot{\gamma}\to 0\), we have also used the SLLOD dynamics as an alternative to the Brownian dynamics. We follow the implementation developed in Ref. [36].
We first explain the implementation of the thermostat in the nonequilibrium simulations. The imposed shear field leads the system to overheat and, therefore, a thermostat mechanism is needed. A general prescription for the development of a thermostat is as follows [37]: One defines a "heat bath" coordinate, say \(\zeta\), which is coupled to the equations of motion. Such a dynamics must sample the system in a chosen state or ensemble. This condition determines the form of the coupling between the thermostat and the particles. The choice of the coupling is not unique. In particular, when the thermostat is applied out of equilibrium, some choices can introduce a bias toward certain regimes with respect to others (for a discussion relevant to the present problem, see Ref. [23]). In this paper, we use for simplicity a configurational thermostat [38]. The configurational temperature, labeled \(T_{\text{conf}}\), is measured from the configuration of the particles in real space and their interactions:
\[k_{B}T_{\text{conf}}=\frac{\sum_{i}\left(\frac{\partial U}{\partial\mathbf{r} _{i}}\right)^{2}}{\sum_{i}\frac{\partial^{2}U}{\partial\mathbf{r}_{i}^{2}}}, \tag{12}\]
where \(U\) is the total potential energy of the system. The equations of motion for the SLLOD dynamics coupled with the configurational thermostat are as follows [36]:
\[\dot{\mathbf{r}}_{i} =\frac{\mathbf{p}_{i}}{m}+\dot{\gamma}\left(y_{i}-\frac{L_{y}}{2 }\right)\mathbf{e}_{x}-\zeta\frac{\partial U}{\partial\mathbf{r}_{i}} \tag{13}\] \[\dot{\mathbf{p}}_{i} =-\frac{\partial U}{\partial\mathbf{r}_{i}}-\dot{\gamma}p_{y,i} \mathbf{e}_{x}\] \[\dot{\zeta} =\frac{F_{\zeta}}{M_{\zeta}}\] \[F_{\zeta} =\sum_{i=1}^{N}\left(\frac{\partial U}{\partial\mathbf{r}_{i}} \right)^{2}-k_{B}T\sum_{i=1}^{N}\frac{\partial^{2}U}{\partial\mathbf{r}_{i}^{2}}\]
Here, \(\zeta\) is the coordinate of the thermostat, \(F_{\zeta}\) the force governing its evolution, and \(M_{\zeta}\) its "mass". A velocity
Figure 12: (a) Flow curve of the two-dimensional crystal undergoing the SLLOD dynamics at \(T=0.0001\) for a wide range of the strain rate for \(N=3600\) and \(10000\). (b) Corresponding effective viscosity. The black dashed line represents the divergence of the viscosity as a power law, \(\eta\sim\dot{\gamma}^{-1}\).
Verlet-like integration scheme [36] has been implemented:
\[\mathbf{r}_{i}(t+\Delta t)= \mathbf{r}_{i}(t)+\Delta t\left(\frac{\mathbf{p}_{i}(t)}{m}+\dot{ \gamma}\left(y_{i}-\frac{L_{y}}{2}\right)\mathbf{e}_{x}\right)\] \[+\Delta t\left(\zeta(t)+\frac{\Delta t}{2m}\right)\mathbf{F}_{i}(t)\] \[\mathbf{p}_{i}(t+\Delta t)= \mathbf{p}_{i}(t)+\frac{\Delta t}{2}\left(\mathbf{F}_{i}(t)+ \mathbf{F}_{i}(t+\Delta t)\right)\] \[+\frac{\Delta t\dot{\gamma}}{2}\left(p_{y,i}(t+\Delta t)+p_{y,i} (t)\right)\mathbf{e}_{x}\] \[\zeta(t+\Delta t)= \zeta(t)+\frac{\Delta t}{2M_{\zeta}}\left(F_{\zeta}(t)+F_{\zeta }(t+\Delta t)\right),\]
where \(\mathbf{F}_{i}=-\frac{\partial U}{\partial\mathbf{r}_{i}}\) is the force acting on particle \(i\) due to the interaction with the other particles. Time is measured in units of \(\tau_{0}=\sqrt{\frac{md^{2}}{\epsilon}}\). We report results obtained with the SLLOD dynamics for systems of \(N=3600\) and \(10000\) particles at \(T=0.0001\), using a time step \(\Delta t=0.01\) and a thermostat mass \(M_{\zeta}=0.1\). We have chosen the units of mass \(m\) such that \(\tau_{0}=1\).
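
For a pair potential \(v(r)\) in two dimensions, the gradient entering the configurational temperature follows from the pair forces and the Laplacian from the identity \(\nabla_{i}^{2}v(r_{ij})=v^{\prime\prime}(r_{ij})+v^{\prime}(r_{ij})/r_{ij}\), so the measurement is straightforward. The Python sketch below illustrates it for a generic pair potential supplied through its first and second radial derivatives; the \(O(N^{2})\) loop is kept only for clarity.

```python
import numpy as np

def configurational_temperature(points, box, dv_dr, d2v_dr2, r_cut):
    """k_B T_conf = sum_i |grad_i U|^2 / sum_i laplacian_i U for a pair potential v(r).
    In d = 2, laplacian_i of v(r_ij) is v''(r_ij) + v'(r_ij)/r_ij.
    Minimum image convention; O(N^2) sketch."""
    N = len(points)
    grad_sq = 0.0
    laplacian = 0.0
    for i in range(N):
        d = points[i] - points                  # r_i - r_j for all j
        d -= box * np.round(d / box)
        r = np.hypot(d[:, 0], d[:, 1])
        m = (r > 0.0) & (r < r_cut)
        rm, dm = r[m], d[m]
        grad_i = np.sum((dv_dr(rm) / rm)[:, None] * dm, axis=0)   # grad_i U
        grad_sq += np.dot(grad_i, grad_i)
        laplacian += np.sum(d2v_dr2(rm) + dv_dr(rm) / rm)
    return grad_sq / laplacian
```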
Figure 12(a) shows the flow curves, \(\overline{\sigma}\) as a function of \(\dot{\gamma}\). We see no evidence of a yield stress, as the average stress appears to keep decreasing at the lowest shear rates. The decrease of \(\overline{\sigma}\) with \(\dot{\gamma}\) is enhanced by the presence of inertia with respect to the Brownian dynamics. The corresponding viscosity plot is shown in Fig. 12(b). We see a power-law divergence of \(\eta\) as \(\dot{\gamma}\to 0\). These results are consistent with those obtained with the Brownian dynamics and presented in the main text.
## Appendix C System size dependence
In this Appendix, we report results on the different system sizes investigated by the Brownian dynamics.
Figure 13 displays the variation with the system size \(N\) of the minimum over \(\dot{\gamma}\) of \(\overline{|\psi_{6}|^{2}}\) (shown in Fig. 4(a) of the main text) for several temperatures. As discussed in the main text, the decrease with \(N\), shown here on a log-log plot, is always more rapid than \(L^{-1/4}\), which is the limiting behavior for a hexatic phase. One can observe that the slope associated with the apparent power law is steeper as the temperature increases.
We also plot the flow curves and the corresponding viscosity for different system sizes, \(N=900\), \(3600\), and \(14400\), in Fig. 14. We do not find any significant system-size dependence in these quantities.
## Appendix D Identification of dislocations and disclinations
Disclinations and dislocations are point topological defects in two dimensions: disclinations are defects in the bond-orientational order and dislocations in the positional order.
The starting point to identify disclinations is to perform a Voronoi tessellation of the given configuration of particles (snapshot). From the construction we count the number of neighbors of each particle. At low temperatures most particles have 6 neighbors (the average number of neighbors is constrained to be 6 in \(2d\) Euclidean space) and some have 5 or 7 neighbors. Particles with a number of neighbors different from 6 correspond to disclination defects. The defect organization is illustrated in Fig. 1(b-d) of the main text. We have checked that the concentration of disclinations corresponding to particles with more than 7 or fewer than 5 neighbors is negligible under the conditions that we study.
Dislocations are dipoles formed by two disclinations of opposite topological charge. They can be identified with a pair of adjacent 5-fold and 7-fold coordinated particles. In practice, however, dislocations can be condensed, forming clusters, _e.g._, grain boundaries, and 5- and 7-fold particles can also appear close to each other at vacancies [39]. In order to detect truly isolated dislocations and disclinations, we introduce a cutoff radius \(r_{\text{cut}}\). If no 5- or 7-fold coordinated particle is found within a distance \(r_{\text{cut}}\) from a putative dislocation (respectively, disclination), this dislocation (resp., disclination) is considered as isolated or free. The cutoff distance \(r_{\text{cut}}\) is separately chosen for dislocations and disclinations, as described below.
For the identification of free disclinations, a natural cutoff \(r_{\text{cut}}\) is the first minimum of the radial distribution function (see below for its definition), which can be taken as characterizing the notion of adjacency of two particles. We thus set \(r_{\text{cut}}=1.5\). We have checked that the results do not change significantly when varying \(r_{\text{cut}}\) from 1.0 to 2.0. In Fig. 5 of the main text, we show the probability \(p_{\text{disc}}\) of finding at least one free disclination in a given configuration. At lower and intermediate temper
Figure 13: System-size dependence of \(\min_{\dot{\gamma}}\{\overline{|\psi_{6}|^{2}}\}\), the minimum value over \(\dot{\gamma}\) reached by \(\overline{|\psi_{6}|^{2}}\) in Fig. 4(a), for various temperatures below the putative \(T_{m,\text{sol}}\). The dashed and dotted lines indicate a \(L^{-1/4}\) and a \(L^{-1}\) dependence, respectively.
atures (\(T=0.0001-0.0050\)) and low \(\dot{\gamma}\), \(p_{\rm disc}\) is zero since all disclinations are bound in dislocations, while \(p_{\rm disc}\) very rapidly increases at some larger \(\dot{\gamma}\) to reach a value close to 1. We limit the display of data to \(\dot{\gamma}\leq 2\times 10^{-2}\) since, at higher shear rates, the concentration of defects is large and the identification of isolated disclinations becomes meaningless.
For defining free dislocations, we choose a cutoff distance \(r_{\rm cut}=2.5\), close to the second minimum of the radial distribution function (_i.e._, beyond the second coordination shell around a given particle). Figure 15 shows the resulting density of free dislocations, \(\rho_{\rm disl}\), for various values of \(\dot{\gamma}\) and \(T\). At lower and intermediate temperatures (\(T=0.0001-0.0050\)), \(\rho_{\rm disl}\) roughly linearly increases with \(\dot{\gamma}\) for low \(\dot{\gamma}\), as argued in Eq. (7) [26]. We limit the display of data to \(\dot{\gamma}\leq 10^{-2}\) because for higher \(\dot{\gamma}\), the concentration of the defects is so large that identifying isolated dislocations becomes difficult and meaningless. As \(T\) is increased, \(\rho_{\rm disl}\) increases, and the dependence on the shear rate saturates. The measured \(\rho_{\rm disl}\) is used in Fig. 3(b) of the main text. We have also varied \(r_{\rm cut}\) from 1.0 to 2.5 and confirmed that \(\rho_{\rm disl}\) is insensitive to \(r_{\rm cut}\) in Regime I, thereby showing that the relation between the viscosity and the density of free dislocations in Fig. 3(b) is robust.
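
The isolation criterion described above can be implemented compactly from the Voronoi coordination numbers. The Python sketch below treats the disclination case (a 5- or 7-fold coordinated particle with no other such particle within \(r_{\rm cut}\)); dislocations, being adjacent 5-7 pairs, would be handled analogously by excluding the partner from the isolation check. Periodic boundary conditions are ignored for brevity, so this is an illustration rather than the analysis code used for the figures.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

def free_disclinations(points, r_cut):
    """Indices of isolated (free) disclinations: 5- or 7-fold coordinated particles
    with no other defect particle within r_cut. Periodic images are ignored."""
    vor = Voronoi(points)
    coord = np.zeros(len(points), dtype=int)
    for i, j in vor.ridge_points:               # count Voronoi neighbors
        coord[i] += 1
        coord[j] += 1
    defect_idx = np.where(coord != 6)[0]        # 5- and 7-fold coordinated particles
    if len(defect_idx) == 0:
        return defect_idx
    tree = cKDTree(points[defect_idx])
    free = [i for i in defect_idx
            if len(tree.query_ball_point(points[i], r_cut)) == 1]   # only itself nearby
    return np.array(free, dtype=int)
```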
We also report the system size dependence of the viscosity \(\eta\) versus dislocation density \(\rho_{\rm disl}\) curve in Fig. 16. We see that finite size effects suppress the dislocation density at \(N=900\). Yet, these effects do not appear when comparing data for \(N=3600\) and \(N=14400\), consolidating our conclusions in the main text.
## Appendix E Radial distribution function
The radial distribution function, \(g(r)\), is computed according to
\[g(r)=\frac{A}{2\pi r\Delta rN(N-1)}\sum_{i,j,(i\neq j)}^{N}\overline{\int_{r} ^{r+\Delta r}\delta(r^{\prime}-|\mathbf{r}_{ij}|)dr^{\prime}}, \tag{10}\]
Figure 16: Viscosity \(\eta\) of the system as a function of the dislocation density \(\rho_{\rm disl}\) for various system sizes. Triangles (with dotted-line), diamonds (dashed-line), and circles (solid-line) correspond to data for \(N=900\), \(3600\), and \(14400\), respectively.
Figure 14: Flow curves obtained from the Brownian dynamics for the averaged shear stress \(\overline{\sigma}\) (a) and the effective viscosity \(\eta\) (b) for several system sizes \(N\). Triangles (with dotted-line), diamonds (dashed-line), and circles (solid-line) correspond to data for \(N=900\), \(3600\), and \(14400\), respectively.
where \(\delta(x)\) is the Dirac delta function, \(A=L_{x}L_{y}\) is the area of the system, and \(\Delta r\) is the width of the bin used in the numerical evaluation. We take \(\Delta r\approx 0.16\) for \(N=3600\), \(\Delta r\approx 0.25\) for \(N=14400\), and \(\Delta r\approx 0.28\) for \(N=57600\). The overline denotes the average over time and trajectories in the steady state.
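
The definition above can be turned into code almost verbatim. The Python sketch below computes \(g(r)\) for a single configuration (the steady-state average over snapshots and trajectories is taken outside the function); the \(O(N^{2})\) pair loop is kept for clarity.

```python
import numpy as np

def radial_distribution(points, box, dr, r_max):
    """g(r) for one configuration, with minimum image convention and bin width dr."""
    N = len(points)
    area = box[0] * box[1]
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(N - 1):
        d = points[i] - points[i + 1:]
        d -= box * np.round(d / box)
        r = np.hypot(d[:, 0], d[:, 1])
        counts += 2.0 * np.histogram(r, bins=edges)[0]   # ordered pairs (i,j) and (j,i)
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    g = counts * area / (2.0 * np.pi * r_mid * dr * N * (N - 1))
    return r_mid, g
```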
In Fig. 17 we show \(g(r)\) for all the temperatures investigated and some representative values of the shear rate \(\dot{\gamma}\). The onset \(\dot{\gamma}\) corresponding to the appearance of system-spanning ripples is used for the phase boundary between Regime II and III in Fig. 1(a).
## Appendix F Bond-orientational order parameter and its spatial correlations
We study the local 6-fold bond-orientational order parameter for each particle \(j\),
\[\phi_{6,j}=\frac{1}{n_{j}}\sum_{k=1}^{n_{j}}e^{6i\theta_{jk}}, \tag{10}\]
where the sum is over the \(n_{j}\) neighbors of particle \(j\) that are determined through a Voronoi tessellation and \(\theta_{jk}\) is the angle characterizing the vector (the "bond") joining particles \(j\) and \(k\), determined through the relation \(\cos\theta_{jk}=\mathbf{\hat{r}}_{jk}\cdot\mathbf{e}_{x}\), with \(\mathbf{\hat{r}}_{jk}=\frac{\mathbf{r}_{k}-\mathbf{r}_{j}}{|\mathbf{r}_{k}- \mathbf{r}_{j}|}\) a unit vector joining particle \(j\) to particle \(k\); the \(x\)-axis is arbitrarily chosen.
From this local order parameter, one can define the volume-averaged bond-orientational order parameter,
\[\psi_{6}=\frac{1}{N}\sum_{j=1}^{N}\phi_{6,j}. \tag{11}\]
When the system has a perfect hexagonal structure, \(|\psi_{6}|=1\), while in a disordered liquid, \(|\psi_{6}|\) is nearly zero. We also define the 6-fold bond-orientational spatial correlation function,
\[\begin{split} g_{6}(r)&=\frac{A}{2\pi r\Delta rN(N -1)g(r)}\\ &\times\sum_{i,j,(i\neq j)}^{N}\overline{\int_{r}^{r+\Delta r} \phi_{6,i}\phi_{6,j}^{*}\delta(r^{\prime}-|\mathbf{r}_{ij}|)dr^{\prime}}, \end{split} \tag{12}\]
where \(\Delta r\) is defined as in the previous section and the correlation function is conventionally normalized by the radial (isotropic) distribution function \(g(r)\) to remove some of the effects coming from local positional ordering. \(\overline{\cdots}\) denotes an average over time (or strain) and independent trajectories once the steady state has been reached.
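
With this normalization, \(g_{6}(r)\) reduces to the average of \(\phi_{6,i}\phi_{6,j}^{*}\) over the pairs whose separation falls in a given bin (the prefactor and the division by \(g(r)\) cancel against the pair count). The Python sketch below uses this observation for a single configuration, taking as input the local \(\phi_{6,j}\) values computed, e.g., as in the earlier sketch; only the minimum image convention is used to handle periodic images.

```python
import numpy as np

def bond_orientational_correlation(points, phi6, box, dr, r_max):
    """g_6(r) for one configuration: the bin-averaged Re{phi_{6,i} phi_{6,j}^*}."""
    N = len(points)
    edges = np.arange(0.0, r_max + dr, dr)
    num = np.zeros(len(edges) - 1)
    cnt = np.zeros(len(edges) - 1)
    for i in range(N - 1):
        d = points[i] - points[i + 1:]
        d -= box * np.round(d / box)
        r = np.hypot(d[:, 0], d[:, 1])
        prod = (phi6[i] * np.conj(phi6[i + 1:])).real
        bins = np.digitize(r, edges) - 1
        ok = (bins >= 0) & (bins < len(cnt))
        np.add.at(num, bins[ok], 2.0 * prod[ok])   # count (i,j) and (j,i)
        np.add.at(cnt, bins[ok], 2.0)
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, np.where(cnt > 0, num / np.maximum(cnt, 1.0), 0.0)
```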
In Fig. 18, we show the log-log plots of \(g_{6}(r)\) for two system sizes and all values of \(T\) and \(\dot{\gamma}\) considered in this study. At low temperatures, below the melting temperature \(T_{m,\mathrm{sol}}\approx 0.0055-0.0060\), and small shear rates, \(g_{6}(r)\) has a power-law decay, \(g_{6}(r)\sim r^{-\eta_{6}}\) with \(\eta_{6}\leq 0.25\), establishing the presence of hexatic quasi-long-range order (Regime I). For higher values of \(\dot{\gamma}\), \(g_{6}(r)\) decays faster than the KTHNY bound (Regime II). Upon raising \(\dot{\gamma}\) even further but still at low temperatures, \(g_{6}(r)\) displays a small plateau, with some ripples, signaling a new flow regime. Figure 18 also shows the absence of significant finite-size effects, as the curves for the two system sizes essentially coincide, except for the lowest values of \(\dot{\gamma}\): there, the power-law decay of \(g_{6}(r)\) seems to saturate for the smaller system size; this effect disappears when the system size increases, suggesting that it is a finite-size effect.
Additionally, we have performed simulations for a larger system of \(N=57600\) particles in the vicinity of the Regime I-II transition to probe the orientational correlation function \(g_{6}(r)\) at larger distances. The resulting plots are compared with the ones obtained for \(N=14400\)
Figure 17: Radial distribution function \(g(r)\) for systems with \(N=3600\) (dashed curve) and \(N=14400\) (solid curve) particles for various values of \(T\) and \(\dot{\gamma}\). \(g(r)\)βs are shifted vertically by hand for clarity.
particles in Fig. 19. The results show little deviation between the two system sizes, except for the trend that the smaller system reaches the plateau earlier in the hexatic quasi-long-range-order regime (Regime I), as expected for generic spatial correlation functions. We note that the final plateau is also observed in the liquid regime, without any system-size dependence. This observation suggests that the plateau in the liquid regime is a genuine consequence of the anisotropy of the system, even in the thermodynamic limit.
## Appendix G Transverse structure factor and string-like regime
In this Appendix, we present more supporting evidence for the description of Regime III as a string-like flow in which particle motion is organized in parallel bands.
We show in Fig. 20 the transverse structure factor computed for modes perpendicular to the direction \(x\) of the shear flow,
\[S_{\rm T}(k_{y})=\frac{1}{N}\overline{\sum_{j,k=1}^{N}e^{ik_{y}(y_{j}-y_{k})}}, \tag{10}\]
with \(k_{y}=2\pi n_{y}/L_{y}\), \(n_{y}\) being an integer.
As the shear rate increases (at low enough temperature), \(S_{\rm T}(k_{y})\) develops sharp primary and secondary peaks whose magnitude grows until it becomes of order \(N\). This signals the appearance of string-like ordering induced by the flow (see the snapshot in Fig. 1(d)). The positions of the first and second peaks correspond respectively to \(\frac{2\pi}{c_{0,y}}\) and \(\frac{4\pi}{c_{0,y}}\), with \(c_{0,y}\) the distance along the \(y\) direction between the centers of the particles located in two adjacent rows of a triangular lattice. \(c_{0,y}\) is related to the lattice constant \(c_{0}\) by the relation \(c_{0,y}=\frac{\sqrt{3}}{2}c_{0}\).
Figure 19: Orientational correlation function \(g_{6}(r)\) for a system size of \(N=14400\) (solid-lines) and \(N=57600\) (dash-dotted-lines) in the vicinity of the transition between Regime I and II for several temperatures.
Figure 18: 6-fold bond-orientational correlation function, \(g_{6}(r)\), for a system of \(N=3600\) (dashed curves) and \(N=14400\) (solid curves) particles. The gray dashed straight lines in the background represent the upper bound imposed on the exponent \(\eta_{6}\) of the power-law decay for a hexatic phase by the KTHNY theory, \(g_{6}(r)\sim r^{-1/4}\). |
2307.00523 | Disentangling Hype from Practicality: On Realistically Achieving Quantum
Advantage | Quantum computers offer a new paradigm of computing with the potential to
vastly outperform any imaginable classical computer. This has caused a gold
rush towards new quantum algorithms and hardware. In light of the growing
expectations and hype surrounding quantum computing we ask the question which
are the promising applications to realize quantum advantage. We argue that
small data problems and quantum algorithms with super-quadratic speedups are
essential to make quantum computers useful in practice. With these guidelines
one can separate promising applications for quantum computing from those where
classical solutions should be pursued. While most of the proposed quantum
algorithms and applications do not achieve the necessary speedups to be
considered practical, we already see a huge potential in material science and
chemistry. We expect further applications to be developed based on our
guidelines. | Torsten Hoefler, Thomas Haener, Matthias Troyer | 2023-07-02T09:14:32Z | http://arxiv.org/abs/2307.00523v1 | # Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage
###### Abstract
Quantum computers offer a new paradigm of computing with the potential to vastly outperform any imaginable classical computer. This has caused a gold rush towards new quantum algorithms and hardware. In light of the growing expectations and hype surrounding quantum computing we ask the question which are the promising applications to realize quantum advantage. We argue that small data problems and quantum algorithms with super-quadratic speedups are essential to make quantum computers useful in practice. With these guidelines one can separate promising applications for quantum computing from those where classical solutions should be pursued. While most of the proposed quantum algorithms and applications do not achieve the necessary speedups to be considered practical, we already see a huge potential in material science and chemistry. We expect further applications to be developed based on our guidelines.
Operating on fundamentally different principles than conventional computers, quantum computers promise to solve a variety of important problems that seemed forever intractable on classical computers. Leveraging the quantum foundations of nature, the time to solve certain problems on quantum computers grows more slowly with the size of the problem than on classical computers--this is called _quantum speedup_. Going beyond quantum supremacy [2], which was the demonstration of a quantum computer outperforming a classical one for an artificial problem, an important question is finding meaningful applications (of academic or commercial interest) that can realistically be solved faster on a quantum computer than on a classical one. We call this a practical quantum advantage, or _quantum practicality_ for short.
Figure 1: **Quantum speedup**: The time needed to solve certain problems with quantum algorithms increases more slowly than that of any known classical algorithm as the problem size N increases. To be practical, however, we need more than an asymptotic speedup: the crossover time where quantum advantage gets realized needs to be reasonably short and the crossover problem size not too large. (For illustration, the time axis is scaled such that the quantum algorithm is a straight line.)
There is a maze of hard problems that have been suggested to profit from quantum acceleration: from cryptanalysis, chemistry and materials science, to optimization, big data, machine learning, database search, drug design and protein folding, fluid dynamics and weather prediction. But which of these applications realistically offer a potential quantum advantage in practice? For this, we cannot rely only on asymptotic speedups but must consider the constants involved. Being optimistic in our outlook for quantum computers, we will identify clear guidelines for quantum practicality and use them to classify which of the many proposed applications for quantum computing show promise and which ones would require significant algorithmic improvements to become practically-relevant.
To establish reliable guidelines, or lower bounds for the required speedup of a quantum computer, we err on the side of being optimistic for quantum and overly pessimistic for classical computing. Despite our overly-optimistic assumptions, our analysis will show that a wide range of often-cited applications is unlikely to result in a practical quantum advantage without _significant_ algorithmic improvements. We compare the performance of only a single classical chip that is fabricated today similar to the one used in the NVIDIA A100 GPU which fits around 54 billion transistors [5] with an optimistic assumption for a hypothetical quantum computer that may be available in the next decades with 10,000 error-corrected logical qubits, 10 \(\mu\)s gate time for logical operations, the ability to simultaneously perform gate operations on all qubits and all-to-all connectivity for fault tolerant two-qubit gates.1
Footnote 1: Note that no quantum error correction scheme exists today that allows simultaneous execution of gates and all-to-all connectivity without at least a \(O(\sqrt{N})\) slowdown for \(N\) qubits.
_I/O bandwidth._ We first consider the fundamental I/O bottleneck that limits quantum computers in their interaction with the classical world, which determines bounds for data input and output bandwidths. Scalable implementations of quantum random access memory (QRAM [9, 10]) demand a fault-tolerant, error-corrected implementation, and the bandwidth is then fundamentally limited by the number of quantum gate operations or measurements that can be performed per unit time. We assume only a single gate operation per input bit. For our optimistic future quantum computer the resulting rate is 10,000 times smaller than for an existing classical chip (see Table 1). We immediately see that any problem that is limited by accessing classical data, such as search problems in databases, will be solved faster by classical computers. Similarly, a potentially exponential quantum speedup in linear algebra problems [13] vanishes when the matrix has to be loaded from classical data, or when the full solution vector should be read out. More generally, quantum computers will be practical for _"big compute" problems on small data_, not big data problems.
\begin{table}
\begin{tabular}{l r r r} & **GPU** & **ASIC** & **Future Quantum** \\ \hline
**I/O bandwidth** & 10,000 Gbit/s & 10,000 Gbit/s & 1 Gbit/s \\ \hline
**Operation throughput** & & & \\
16-bit floating point & 195 Top/s & 550 Top/s & 10.5 kop/s \\
32-bit integer & 9.75 Top/s & 215 Top/s & 0.83 kop/s \\ binary (boolean logical) & 4,992 Top/s & 77,000 Top/s & 235 kop/s \\ \hline \end{tabular}
\end{table}
Table 1. **Performance comparison**. We compare the peak performance of a single classical chip that can be manufactured today (similar to an NVIDIA A100 GPU, or an ASIC with a similar number of transistors) with a future quantum computer with 10,000 error-corrected logical qubits, 10\(\mu\)s gate time for logical operations and all-to-all connectivity. We consider an estimate of the I/O bandwidth (namely the number of operations per second) and three types of operations: logical binary operations, 16-bit floating point, 32-bit integer or fixed-point arithmetic multiply add operations.
_Crossover scale._ With quantum speedup, asymptotically fewer operations will be needed on a quantum computer than on a classical computer. Due to the high operational complexity and slower gate operations, however, each operation on a quantum computer will be slower than a corresponding classical one. As sketched in Figure 1, classical computers will thus always be faster for small problems and quantum advantage is realized beyond a problem-dependent crossover scale where the gain due to quantum speedup overcomes the constant slowdown of the quantum computer. To have real practical impact, the crossover time needs to be short, not more than weeks. Constants matter in determining the utility for applications, as with any runtime estimate in computing.
_Compute performance._ To model performance, we employ the well-known work-depth model from classical parallel computing to determine upper bounds of classical silicon-based computations and an extension for quantum computations. In this model, the work is the total number of operations and applies to both classical and quantum executions. In Table 1 we provide concrete examples using three types of operations: logical operations, 16-bit floating point, and 32-bit integer or fixed-point arithmetic operations for numerical modeling. For the quantum costs, we consider only the most expensive parts in our estimates, again benefiting quantum computers: For arithmetic, we count just the dominant cost of multiplications, assuming that additions are free. Furthermore, for floating point multiplication, we consider only the cost of the multiplication of the mantissa (10 bits in fp16). We ignore all further overheads incurred by the quantum algorithm due to reversible computations, as well as the significant cost of mapping to a specific hardware architecture with limited qubit connectivity.
_Crossover times for classical and quantum computation._ To estimate lower bounds for the crossover times, we next consider that while both classical and quantum computers have to evaluate the same functions (usually called oracles) that describe a problem, quantum computers require fewer evaluations thereof due to quantum speedup. At the root of many quantum acceleration proposals lies a quadratic quantum speedup, including the well-known _Grover algorithm_[11, 12]. For such an algorithm, a problem that needs \(X\) function calls on a quantum computer requires quadratically more, namely on the order of \(X^{2}\) calls on a classical computer. To overcome the large constant performance difference between a quantum computer and a classical computer, which Table 1 shows to be more than a factor of \(10^{10}\), a large number of function calls \(X\gg 10^{10}\) is needed for the quantum speedup to deliver a practical advantage. In Table 2, we estimate upper bounds for the complexity of the function that will lead to a cross-over time of \(10^{6}\) seconds, or roughly two weeks.
\begin{table}
\begin{tabular}{l r r r}
**Operation type** & **quadratic speedup** & **cubic speedup** & **quartic speedup** \\ \hline
16-bit floating point & 0.2 & 45,800 & 2,800,000 \\
32-bit integer & 0.003 & 1,630 & 130,000 \\ Binary (logical) & 68 & 12,500,000 & 712,000,000 \\ \hline \end{tabular}
\end{table}
Table 2. **Crossover operation counts for quantum algorithms with quadratic, cubic, and quartic speedups**. We determine the number of operations that can be afforded per function call (see Figure 1) for a quantum computer to show an advantage over a classical computer using a quantum algorithm with quadratic, cubic, and quartic quantum speedup. The number of oracle calls required to reach the crossover point with a quadratic, cubic, and quartic speedup is computed using the relative runtimes of a single oracle evaluation, and the total runtime of \(10^{6}\) seconds is then used to compute how many basic operations can be afforded in each oracle call. Since we make optimistic assumptions for a future quantum computer, we ignore overheads of reversible arithmetic for quantum computing and limit the classical computer to a single chip that can be manufactured today. The actual crossover operation counts will be significantly smaller. A similar analysis for quantum algorithms with exponential speedups yields promising operation budgets for all datatypes.
We see that with quadratic speedup even a single floating point or integer operation leads to crossover times of several months. Furthermore, at most 68 binary logical operations can be afforded to stay within our desired crossover time of two weeks, which is too low for any non-trivial application. Keeping in mind that these estimates are pessimistic for classical computation (a single one of today's classical chips) and overly optimistic for quantum computing (only considering the multiplication of the mantissa and assuming all-to-all qubit connectivity), we come to the clear conclusion that quadratic speedups are insufficient for practical quantum advantage. The numbers look better for cubic or quartic speedups, where thousands or millions of operations may be feasible, and we hence conclude, similarly to Babbush et al. [3], that at least cubic or quartic speedups are required for a practical quantum advantage.
As a result of our overly-optimistic assumptions in favor of quantum computing, these conclusions will remain valid even with significant advances in quantum technology of multiple orders of magnitude.
_Practical and impractical applications._ We can now use the above considerations to discuss several classes of applications where our fundamental bounds draw a line for quantum practicality. The most likely problems to allow for a practical quantum advantage are those with exponential quantum speedup. This includes the simulation of quantum systems for problems in chemistry, materials science, and quantum physics, as well as cryptanalysis using Shor's algorithm [16]. The solution of linear systems of equations for highly structured problems [13] also has an exponential speedup, but the I/O limitations discussed above will limit the practicality and undo this advantage if the matrix has to be loaded from memory instead of being computed from limited data, or if knowledge of the full solution is required (as opposed to just some limited information obtained by sampling the solution).
Equally importantly, we identify likely dead ends in the maze of applications. A large range of problem areas with quadratic quantum speedups, such as many current machine learning training approaches, accelerating drug design and protein folding with Grover's algorithm, speeding up Monte Carlo simulations through quantum walks, as well as more traditional scientific computing simulations including the solution of many non-linear systems of equations, such as fluid dynamics in the turbulent regime, weather, and climate simulations will not achieve quantum advantage with current quantum algorithms in the foreseeable future. We also conclude that the identified I/O limits constrain the performance of quantum computing for big data problems, unstructured linear systems, and database search based on Grover's algorithm such that a speedup is unlikely in those cases. Furthermore, Aaronson et al. [1] show that the achievable quantum speedup of unstructured black-box algorithms is limited to \(\mathcal{O}(N^{4})\). This implies that any algorithm achieving higher speedup must exploit structure in the problem it solves.
These considerations help with separating hype from practicality in the search for quantum applications and can guide algorithmic developments. Specifically, our analysis shows that 1) it is necessary for the community to focus on super-quadratic speedups, ideally exponential speedups, and 2) one needs to carefully consider I/O bottlenecks when deriving algorithms to exploit quantum computation best. Therefore, _the most promising candidates for quantum practicality are small-data problems with exponential speedup_. Specific examples where this is the case are quantum problems in chemistry and materials science [6], which we identify as the most promising application. We recommend using precise requirements models [4] to obtain more reliable and realistic (less optimistic) estimates in cases where our rough guidelines indicate a potential practical quantum advantage.
## Methods
Here we provide more details for how we obtained the numbers above. We compare our quantum computer with a single microprocessor chip similar to the one used in the NVIDIA A100 GPU [5]. The A100 chip is around \(850mm^{2}\) in size and manufactured in TSMC's 7nm N7 silicon process. The A100 shows that such a chip fits around 54.2 billion transistors and can operate at a cycle time of around 0.7 ns.
### Determining peak operation throughputs
In Table 1, we provide concrete examples using three types of operations: logical operations, 16-bit floating point, and 32-bit integer arithmetic operations for numerical modeling. Other datatypes could be modeled using our methodology as well.
#### Classical NVIDIA A100
According to its datasheet, NVIDIA's A100 GPU, a SIMT-style von Neumann load store architecture, delivers 312 tera-operations per second (Top/s) with half precision floating point (fp16) through tensor cores and 78 Top/s through the normal processing pipeline. NVIDIA assumes a 50/50 mix of addition and multiplication operations and thus, we divide the number by two, yielding 195 Top/s fp16 performance. The datasheet states 19.5 Top/s for 32-bit integer operations, again assuming a 50/50 mix of addition and multiplication, leading to an effective 9.75 Top/s. The binary tensor core performance is listed as 4,992 Top/s with a limited set of instructions.
#### Classical Special-Purpose ASIC
Our main analysis assumes that we build a special-purpose ASIC using a similar technology. If we were to fill the equivalent chip-space of an A100 with a specialized circuit, we would use existing execution units, for which the size is typically measured in gate equivalents (GE). A 16-bit floating point unit (FPU) with addition and multiplication functions requires approximately 7 kGE, a 32-bit integer unit requires 18 kGE [15], and we assume 50 GE for a simple binary operation. All units include operand buffer registers and support a set of programmable instructions. We note that simple addition or multiplication circuits would be significantly cheaper. If we assume a transistor-to-gate ratio of 10 [14] and that 50% of the total chip area is used for control logic of a dataflow ASIC with the required buffering, we can fit \(54.2B/(7k\cdot 10\cdot 2)=387k\) fp16 units. Similarly, we can fit \(54.2B/(18k\cdot 10\cdot 2)=151k\) int32, or \(54.2B/(50\cdot 10\cdot 2)=54.2M\) binary units on our hypothetical chip. Assuming a cycle time of 0.7ns, this leads to a total operation rate of 0.55 fp16, 0.22 int32, and 77.4 bin Pop/s for an application-specific ASIC with the A100's technology and budget. The ASIC thus leads to a raw speedup between roughly 2 and 15x over a programmable circuit. Thus, on classical silicon, the performance ranges roughly between \(10^{13}\) and \(10^{16}\) op/s for binary, int32, and fp16 types.
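The unit counts and peak rates above follow from elementary arithmetic on the stated assumptions. As a quick self-contained check, the following minimal Python sketch (constants are the ones quoted in the text; the variable names are ours) reproduces the 387k/151k/54.2M unit counts and the 0.55/0.22/77.4 Pop/s figures:

```python
# Back-of-envelope check of the ASIC unit counts and peak rates quoted above.
# Assumptions (from the text): 54.2B transistors, 10 transistors per gate
# equivalent (GE), a factor 2 for the 50% of area spent on control/buffering,
# and a 0.7 ns cycle time with one operation per unit per cycle.
TRANSISTORS = 54.2e9
TRANSISTORS_PER_GE = 10
CONTROL_FACTOR = 2
CYCLE_TIME_S = 0.7e-9

GATE_EQUIVALENTS = {"fp16": 7_000, "int32": 18_000, "binary": 50}

for op, ge in GATE_EQUIVALENTS.items():
    units = TRANSISTORS / (ge * TRANSISTORS_PER_GE * CONTROL_FACTOR)
    rate = units / CYCLE_TIME_S
    print(f"{op:6s}: {units:.3g} units, {rate:.2e} op/s")
# fp16  : 3.87e+05 units, 5.53e+14 op/s  (~0.55 Pop/s)
# int32 : 1.51e+05 units, 2.15e+14 op/s  (~0.22 Pop/s)
# binary: 5.42e+07 units, 7.74e+16 op/s  (~77.4 Pop/s)
```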
#### Hypothetical future quantum computer
To determine the costs of N-bit multiplication on a quantum computer, we choose the controlled adder from Gidney [7] and implement the multiplication using N single-bit controlled adders, each requiring \(2N\) CCZ magic states. These states are produced in so called "magic state factories" that are implemented on the physical chip. While the resulting multiplier is entirely sequential, we found that this construction allows for more units to be placed on one chip than for a low-depth adder and/or for a tree-like reduction of partial products since (1) the number of CCZ states is lower (and thus fewer magic state factories are required) and (2) the number of work-qubits is lower. The resulting multiplier has a CCZ-depth and count of \(2N^{2}\) using \(5N-1\) qubits (\(2N\) input, \(2N-1\) output, \(N\) ancilla for the addition).
To compute the space overhead due to CCZ factories, we first use the analysis of Gidney and Fowler [8] to compute the number of physical qubits per factory when aiming for circuits (programs) using \(\approx 10^{8}\) CCZ magic states with physical gate errors of \(10^{-3}\). We approximate the overhead in terms of logical qubits by dividing the physical space overhead by \(2d^{2}\), where we choose the error-correcting code distance \(d=31\) to be the same as the distance used for the second level of distillation [8]. Thus we divide Gidney and Fowler's 147,904 physical qubits per factory (for details consult the ancillary spreadsheet (field B40) of Gidney and Fowler) by \(2d^{2}=2\cdot 31^{2}\) and get an equivalent space of 77 logical qubits per factory.
For the multiplier of the 10-bit mantissa of an fp16 floating point number, we need \(2\cdot 10^{2}=200\) CCZ states and \(5\cdot 10=50\) qubits. Since each factory takes 5.5 cycles [8] and we can pipeline the production of CCZ states, we assume 5.5 factories per multiplication unit such that multipliers don't wait for magic state production on average. Thus, each multiplier requires 200 cycles and \(5N+5.5\cdot 77=50+5.5\cdot 77=473.5\) qubits. With a total of 10,000 logical qubits, we can implement 21 10-bit multipliers on our hypothetical quantum chip. With a \(10\,\mu\mathrm{s}\) cycle time and the 200-cycle latency, we get a final rate of less than \(10^{5}\,\mathrm{cycle/s}/(200\,\mathrm{cycle/op})\cdot 21=10.5\,\mathrm{kop/s}\). For int32 (N=32), the calculation is equivalent. For binary, we assume two input and one output qubit for the (binary) adder (Toffoli gate) which does not need ancillas. The final results are summarized in Table 1.
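To make the accounting concrete, the following sketch (Python; a minimal illustration using only the quantities stated above, with our own variable names) reproduces the 21 multipliers and the roughly 10.5 kop/s figure for the 10-bit mantissa, and applies the same construction to N=32 as a rough estimate for int32:

```python
# Logical-qubit and throughput accounting for the sequential N-bit multiplier
# described above: 2*N^2 CCZ states (and cycles) per multiplication, 5*N work
# qubits, 5.5 CCZ factories per multiplier at ~77 logical qubits each, a budget
# of 10,000 logical qubits, and a 10 microsecond logical cycle time.
QUBIT_BUDGET = 10_000
FACTORY_QUBITS = 77
FACTORIES_PER_MULTIPLIER = 5.5
LOGICAL_CYCLES_PER_SECOND = 1e5   # 10 us cycle time

def multipliers_and_rate(n_bits):
    qubits_per_multiplier = 5 * n_bits + FACTORIES_PER_MULTIPLIER * FACTORY_QUBITS
    n_multipliers = int(QUBIT_BUDGET // qubits_per_multiplier)
    cycles_per_multiplication = 2 * n_bits ** 2
    rate = n_multipliers * LOGICAL_CYCLES_PER_SECOND / cycles_per_multiplication
    return n_multipliers, rate

print(multipliers_and_rate(10))   # (21, 10500.0)  -> ~10.5 kop/s (fp16 mantissa)
print(multipliers_and_rate(32))   # (17, ~830)     -> our rough estimate for int32
```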
### A note on parallelism
We assumed massively parallel execution of the oracle on both the classical and quantum computer (i.e., oracles with a depth of one). If the oracle does not admit such parallelization, e.g., if depth = work in the worst case scenario, then the comparison becomes more favorable towards the quantum computer. One could model this scenario by allowing the classical computer to only perform one operation per cycle. With a 2 GHz clock frequency, this would mean a slowdown of about 100,000 times for fp16 on the GPU. In this _extremely unrealistic_ algorithmic worst case, the oracle would still have to consist of only several thousands of fp16 operations with a quadratic speedup. However, we note that in practice, most oracles have low depth and parallelization across a single chip is achievable, which is what we assumed in the main text.
### Determining maximum operation counts per oracle call
In Table 2, we list the maximum number of operations of a certain type that can be run to achieve a quantum speedup within a runtime of \(10^{6}\) seconds (a little more than two weeks). The maximum number of classical operations that can be performed with a single classical chip in \(10^{6}\) seconds would be: 0.55 fp16, 0.22 int32, and 77.4 bin Zop. Similarly, assuming the rates from Table 1, for a quantum chip: 7, 4, 2,350 Gop, respectively.
We now assume that all calculations are used in oracle calls on the quantum computer and we ignore all further costs on the quantum machine. We start by modeling algorithms that provide polynomial \(X^{k}\) speedup, for small constants \(k\). For example, for Grover's algorithms [12], \(k=2\). It is clear that quantum computers are asymptotically faster (in the number of oracle queries) for any \(k>1\). However, we are interested to find the oracle complexity (i.e., the number of operations required to evaluate it) for which a quantum computer is faster than a classical computer within the time-window of \(10^{6}\) seconds.
Let the number of operations required to evaluate a single oracle call be \(M\) and let the number of required invocations be \(N\). It takes a classical computer time \(T_{c}=N^{k}\cdot M\cdot t_{c}\), whereas a quantum computer solves the same problem in time \(T_{q}=N\cdot M\cdot t_{q}\), where \(t_{c}\) and \(t_{q}\) denote the time to evaluate an operation on a classical and on a quantum computer, respectively. By demanding that the quantum computer should solve the problem faster than the classical computer and within \(10^{6}\) seconds, we find
\[\sqrt[k-1]{\frac{t_{q}}{t_{c}}}\leq N\leq\frac{10^{6}}{t_{q}\cdot M},\]
which allows us to compute the maximal number of basic operations per oracle evaluation such that the quantum computer still achieves a practical speedup:
\[M\leq 10^{6}\cdot\sqrt[k-1]{\frac{t_{c}}{t_{q}^{k}}}.\]
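As an illustration of this bound, the short Python sketch below evaluates \(M\) for \(k=2,3,4\) using the classical ASIC rates estimated above and quantum rates consistent with the multiplier construction of the previous subsection. The fp16 quantum rate (about 10.5 kop/s) is taken from the text, while the int32 and binary quantum rates are our own rough estimates, so the outputs only approximately reproduce the entries of Table 2 (which are computed from the paper's Table 1 rates):

```python
# Illustrative evaluation of M <= 10^6 * (t_c / t_q^k)^(1/(k-1)) for k = 2, 3, 4.
# Classical rates are the ASIC estimates above (op/s). The fp16 quantum rate is
# the ~10.5 kop/s derived above; the int32 and binary quantum rates are our own
# rough estimates based on the same construction, so results are approximate.
RUNTIME_S = 1e6

RATES = {                     # (classical op/s, quantum op/s)
    "fp16":   (0.55e15, 10.5e3),
    "int32":  (0.22e15, 0.83e3),
    "binary": (77.4e15, 2.3e6),
}

for op, (r_c, r_q) in RATES.items():
    t_c, t_q = 1.0 / r_c, 1.0 / r_q
    budgets = [RUNTIME_S * (t_c / t_q ** k) ** (1.0 / (k - 1)) for k in (2, 3, 4)]
    print(op, [f"{m:.3g}" for m in budgets])
# fp16   -> [~0.2, ~4.6e4, ~2.8e6]    (compare with the fp16 row of Table 2)
# int32  -> [~0.003, ~1.6e3, ~1.3e5]
# binary -> [~68, ~1.25e7, ~7.1e8]
```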
### Determining I/O bandwidth
We use the I/O bandwidth specified in NVIDIA's A100 datasheet for our classical chips. For the quantum computer, we assume that one quantum gate is required per bit of I/O. Using all 10,000 qubits for reading/writing, this yields an estimate of the I/O bandwidth \(B\approx\frac{10,000}{10^{-5}}=1\) Gbit/s.
## Acknowledgments
We thank Luca Benini for helpful discussions about ASIC and processor design and related overheads and Wim van Dam and all anonymous reviewers for comments that improved an earlier draft of this work.
|
2302.05285 | Elastic neutrino-atom scattering as a probe of neutrino millicharge and
magnetic moment | Neutrino scattering on atomic systems at low-energy transfer is a powerful
tool for searching the neutrino electromagnetic interactions. The regime of
coherent elastic neutrino-atom scattering, i.e., when the atom recoils as a
pointlike particle, can be effectively fulfilled in the case of tritium
antineutrinos. We present theoretical calculations for coherent elastic
neutrino-atom scattering processes on such targets as the H, $^2$H, $^3$He, and
$^4$He %, and $^{12}$C atoms. We show how the atomic effects and neutrino
electromagnetic properties, namely the neutrino millicharge and magnetic
moment, may manifest themselves in the atomic-recoil spectra. Our results can
be used in planning the experiments on coherent elastic neutrino-atom
scattering (in particular, with superfluid He-4). | Georgy Donchenko, Konstantin Kouzakov, Alexander Studenikin | 2023-02-10T14:47:51Z | http://arxiv.org/abs/2302.05285v1 | # Elastic neutrino-atom scattering as a probe of neutrino millicharge and magnetic moment
###### Abstract
Neutrino scattering on atomic systems at low-energy transfer is a powerful tool for searching the neutrino electromagnetic interactions. The regime of coherent elastic neutrino-atom scattering, i.e., when the atom recoils as a pointlike particle, can be effectively fulfilled in the case of tritium antineutrinos. We present theoretical calculations for coherent elastic neutrino-atom scattering processes on such targets as the H, \({}^{2}\)H, \({}^{3}\)He, and \({}^{4}\)He atoms. We show how the atomic effects and neutrino electromagnetic properties, namely the neutrino millicharge and magnetic moment, may manifest themselves in the atomic-recoil spectra. Our results can be used in planning the experiments on coherent elastic neutrino-atom scattering (in particular, with superfluid He-4).
_The 41st International Conference on High Energy Physics,_
_6-13 July 2022,_
_Bologna, Italy_
## 1 Introduction
The search for light particles of dark matter requires the detectors that are sensitive to low recoil energies (\(\lesssim\)100 meV). This can be achieved, for example, by using a superfluid He-4 target [1]. Another possible application of the superfluid He-4 detector could be the study of the low-energy neutrino scattering, in particular of the coherent elastic neutrino-atom scattering (CE\(\nu\)AS) [2, 3, 4, 5] that has not been observed so far. Below we inspect the sensitivity of the CE\(\nu\)AS processes on light atomic systems to such neutrino electromagnetic properties as millicharge \(e_{\nu}\) and magnetic moment \(\mu_{\nu}\)[6]. For this purpose we account for the indicated neutrino properties in the CE\(\nu\)AS cross section and present the corresponding numerical results.
## 2 Effects of neutrino millicharge and magnetic moment in CE\(\nu\)AS
We consider an elastic neutrino-atom collision in the following kinematical regime:
\[E_{\nu}\ll m,\qquad T\leq\frac{2E_{\nu}^{2}}{m}\ll E_{\nu},\qquad E_{\nu}\ll \frac{1}{R_{\rm nuc}},\]
where \(E_{\nu}\) is the neutrino energy, \(T\) is the energy transfer, \(m\) is the atomic mass, and \(R_{\rm nuc}\) is the nuclear radius.
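For orientation, these kinematics already fix the recoil-energy scale. A minimal Python sketch, using \(E_{\nu}=10\) keV as in the numerical results below and standard approximate atomic masses that we supply here only for illustration, gives the maximum recoil energies:

```python
# Maximum atomic recoil energy T_max = 2*E_nu^2 / m for a 10 keV antineutrino.
# The atomic masses (in keV) are standard approximate values inserted here
# only for illustration.
E_NU_KEV = 10.0
MASS_KEV = {"H": 0.9388e6, "2H": 1.8761e6, "3He": 2.8094e6, "4He": 3.7284e6}

for atom, m in MASS_KEV.items():
    t_max_meV = 2 * E_NU_KEV ** 2 / m * 1e6   # convert keV -> meV
    print(f"{atom:>3s}: T_max ~ {t_max_meV:.0f} meV")
# H: ~213 meV, 2H: ~107 meV, 3He: ~71 meV, 4He: ~54 meV
```

The sub-100 meV scale obtained for the helium targets is consistent with the detector sensitivity (\(\lesssim\)100 meV) discussed in the Introduction.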
According to [2, 3, 4, 7, 8, 9], the CE\(\nu\)AS differential cross section is given by
\[\frac{d\sigma}{dT}=\frac{d\sigma^{(w,e_{\nu})}}{dT}+\frac{d\sigma^{(\mu_{\nu} )}}{dT}. \tag{1}\]
Here the weak interaction and neutrino millicharge contribution is
\[\frac{d\sigma^{(w,e_{\nu})}}{dT}=\frac{G_{F}^{2}m}{\pi}\left[C_{V}^{2}\left(1 -\frac{mT}{2E_{\nu}^{2}}\right)+C_{A}^{2}\left(1+\frac{mT}{2E_{\nu}^{2}} \right)\right]\,, \tag{2}\]
with
\[C_{V} = Z\left(\frac{1}{2}-2\sin^{2}\theta_{W}\right)-\frac{1}{2}N+Z \left(\mp\frac{1}{2}+2\sin^{2}\theta_{W}\right)F_{\rm el}(q^{2})+\frac{\sqrt{ 2}\pi\alpha Ze_{\nu}}{G_{F}mT}[1-F_{\rm el}(q^{2})],\] \[C_{A}^{2} = (C_{A}^{\rm nuc})^{2}+\frac{1}{4}\sum_{n,l}\left[\left(L_{+}^{nl }-L_{-}^{nl}\right)F_{\rm el}^{nl}(q^{2})\right]^{2},\] \[(C_{A}^{\rm nuc})^{2} = \frac{g_{A}^{2}}{4}\left[(Z_{+}-Z_{-})-(N_{+}-N_{-})\right]^{2},\]
where \(q\) is the momentum transfer, with \(q^{2}=2mT\), the plus (minus) stands for \(\nu=\nu_{e}\) (\(\nu=\nu_{\mu,\tau}\)), and \(Z\) (\(N\)) is the number of protons (neutrons) in the atomic nucleus. \(F_{\rm el}(q^{2})\) is the Fourier transform of the electron density, \(g_{A}=1.25\), \(Z_{\pm}\) and \(N_{\pm}\) are the numbers of protons and neutrons (electrons) with spin parallel (\(+\)) or antiparallel (\(-\)) to the nucleus spin (the total electron spin). \(L_{\pm}^{nl}\) is the number of electrons in the \(nl\) atomic orbital with spin parallel (\(+\)) or antiparallel (\(-\)) to the electron spin, and \(F_{\rm el}^{nl}(q^{2})\) is the Fourier transform of the \(nl\) electron density. The neutrino millicharge \(e_{\nu}\) is in units of \(e\).
Figure 1: Differential cross sections for the CE\(\nu\)AS processes within the Standard Model (\(e_{\nu}=0\) and \(\mu_{\nu}=0\)) and with account for the neutrino millicharge (\(e_{\nu}=\pm 10^{-15}e\)) and magnetic moment (\(\mu_{\nu}=10^{-12}\mu_{B}\)).
The neutrino magnetic moment contribution is
\[\frac{d\sigma^{(\mu_{\nu})}}{dT}=\frac{\pi\alpha^{2}Z^{2}}{m_{e}^{2}}|\mu_{\nu}|^{ 2}\left(\frac{1}{T}-\frac{1}{E_{\nu}}\right)\left[1-F_{\rm el}(q^{2})\right]^{2}, \tag{3}\]
where the neutrino magnetic moment \(\mu_{\nu}\) is in units of \(\mu_{B}\). In contrast to the case of neutrino millicharge, the neutrino magnetic moment interaction flips the neutrino helicity, and therefore it does not interfere with the weak interaction channel.
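The magnetic-moment term is straightforward to evaluate numerically. The sketch below (Python) implements Eq. (3) for a hydrogen target; note that the hydrogen-like 1s form factor \(F_{\rm el}(q^{2})=(1+q^{2}a_{0}^{2}/4)^{-2}\) and the unit-conversion constant are our own illustrative choices, not the realistic atomic densities used for the results reported here:

```python
import math

# Illustrative evaluation of the magnetic-moment term, Eq. (3), for hydrogen.
# The hydrogen-like 1s form factor used below is an assumption made only for
# this sketch; the paper's results rely on realistic electron densities.
ALPHA = 1 / 137.036            # fine-structure constant
M_E = 510.999                  # electron mass [keV]
A0 = 1 / (ALPHA * M_E)         # Bohr radius in natural units [keV^-1]
KEV2_TO_CM2 = 3.894e-16        # 1 keV^-2 expressed in cm^2

def f_el_1s(q2_kev2):
    return 1.0 / (1.0 + q2_kev2 * A0 ** 2 / 4.0) ** 2

def dsigma_dT_magnetic(t_kev, e_nu_kev, m_atom_kev, z, mu_nu_in_mub):
    """Eq. (3); energies in keV, mu_nu in units of mu_B, result in cm^2/keV."""
    q2 = 2.0 * m_atom_kev * t_kev
    value = (math.pi * ALPHA ** 2 * z ** 2 / M_E ** 2) * mu_nu_in_mub ** 2 \
            * (1.0 / t_kev - 1.0 / e_nu_kev) * (1.0 - f_el_1s(q2)) ** 2
    return value * KEV2_TO_CM2

# Hydrogen target, E_nu = 10 keV, mu_nu = 1e-12 mu_B (the value used in Fig. 1)
for t in (1e-5, 1e-4, 1e-3):   # recoil energies in keV
    print(t, dsigma_dT_magnetic(t, 10.0, 0.9388e6, 1, 1e-12))
```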
In Fig. 1 we present the numerical results for the differential cross section (1) in the case of an electron antineutrino with \(E_{\nu}=10\) keV that is typical for the tritium neutrino source. It can be seen that the atomic recoil spectra in CE\(\nu\)AS processes on the H, \({}^{2}\)H, \({}^{3}\)He, and \({}^{4}\)He atomic systems are very sensitive to the neutrino millicharge and magnetic moment. Measuring these spectra may allow us to test the \(e_{\nu}\) and \(\mu_{\nu}\) values at a level of \(10^{-15}e\) and \(10^{-12}\mu_{B}\), respectively, or even below that level.
The obtained results will be used in the search for the electromagnetic properties of neutrinos in the experiment involving an intense tritium neutrino source and a superfluid \({}^{4}\)He target. This experiment is currently being prepared in the framework of the research program of the National Center for Physics and Mathematics in Sarov, Russia.
## Acknowledgments
The work is supported by the Russian Science Foundation under grant No. 22-22-00384. G.D. acknowledges the support from the National Center for Physics and Mathematics (Project "Study of coherent elastic neutrino-atom and -nucleus scattering and neutrino electromagnetic properties using a high-intensity tritium neutrino source").
|
2306.01514 | Zeeman and Orbital Driven Phase Transitions in Planar Josephson
Junctions | We perform supercurrent and tunneling spectroscopy measurements on
gate-tunable InAs/Al Josephson junctions (JJs) in an in-plane magnetic field,
and report on phase shifts in the current-phase relation measured with respect
to an absolute phase reference. The impact of orbital effects is investigated
by studying multiple devices with different superconducting lead sizes. At low
fields, we observe gate-dependent phase shifts of up to ${\varphi_{0}=0.5\pi}$
which are consistent with a Zeeman field coupling to highly-transmissive
Andreev bound states via Rashba spin-orbit interaction. A distinct phase shift
emerges at larger fields, concomitant with a switching current minimum and the
closing and reopening of the superconducting gap. These signatures of an
induced phase transition, which might resemble a topological transition, scale
with the superconducting lead size, demonstrating the crucial role of orbital
effects. Our results elucidate the interplay of Zeeman, spin-orbit and orbital
effects in InAs/Al JJs, giving new understanding to phase transitions in hybrid
JJs and their applications in quantum computing and superconducting
electronics. | D. Z. Haxell, M. Coraiola, D. Sabonis, M. Hinderling, S. C. ten Kate, E. Cheah, F. Krizek, R. Schott, W. Wegscheider, F. Nichele | 2023-06-02T13:03:11Z | http://arxiv.org/abs/2306.01514v2 | # Zeeman and Orbital Driven Phase Transitions in Planar Josephson Junctions
###### Abstract
We perform supercurrent and tunneling spectroscopy measurements on gate-tunable InAs/Al Josephson junctions (JJs) in an in-plane magnetic field, and report on phase shifts in the current-phase relation measured with respect to an absolute phase reference. The impact of orbital effects is investigated by studying multiple devices with different superconducting lead sizes. At low fields, we observe gate-dependent phase shifts of up to \(\varphi_{0}=0.5\pi\) which are consistent with a Zeeman field coupling to highly-transmissive Andreev bound states via Rashba spin-orbit interaction. A distinct phase shift emerges at larger fields, concomitant with a switching current minimum and the closing and reopening of the superconducting gap. These signatures of an induced phase transition, which might resemble a topological transition, scale with the superconducting lead size, demonstrating the crucial role of orbital effects. Our results elucidate the interplay of Zeeman, spin-orbit and orbital effects in InAs/Al JJs, giving
new understanding to phase transitions in hybrid JJs and their applications in quantum computing and superconducting electronics.
_Keywords:_ Hybrid materials, superconductor-semiconductor, phase transitions, orbital effect, spin-orbit interaction, 2DEG, \(\varphi\)-junction
Josephson junctions (JJs) defined in hybrid superconductor-semiconductor materials are the subject of intense investigation as building blocks of gate-tunable superconducting [1, 2, 3, 4, 5] and Andreev [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] qubits, along with transistors [19, 20, 21, 22], mixers [23] and rectifiers [24] for superconducting electronics. Additional functionalities are enabled by the interplay between spin-orbit interaction and external magnetic fields, including spin-dependent [25, 26] and non-reciprocal supercurrents [27, 28, 29], topological phase transitions [30, 31, 32, 33, 34] and anomalous shifts in the ground state [35, 36, 37, 38, 39, 40, 41, 42]. The latter constitute a shift in the energy minimum away from a phase difference \(\varphi=0\) across the JJ, to \(0<\varphi<\pi\) by breaking of time-reversal symmetry [43, 44, 45, 46, 47] or to \(\varphi=\pi\) by a Zeeman-induced phase transition [48, 49, 46].
Epitaxially-grown InAs/Al heterostructures [50, 51] are a promising platform to realize these complex devices, due to their high electron mobility, excellent superconducting properties [52, 53] and prospect of scalability. To date, tunneling spectroscopy experiments of planar InAs/Al JJs have revealed the onset of zero-energy states at large in-plane magnetic fields [32, 33], and more refined devices [54] have since shown zero-energy states accompanied by closure and re-opening of the superconducting gap, consistent with a topological transition. Supercurrent measurements in superconducting quantum interference devices (SQUIDs) demonstrated gate-tunable phase shifts in small magnetic fields [41], as well as large phase jumps at larger fields [34] accompanied by a minimum in the supercurrent amplitude, also consistent with a topological transition [30]. However, several questions remain on the behavior of planar JJs subject to in-plane magnetic fields. For instance, Ref. [41] reported anomalous phase shifts at small magnetic fields which were considerably larger than theoretical expectations [44]. Additionally, orbital effects can resemble the behavior expected from a topological transition [30, 55]: a mag
netic flux threading the cross-section underneath the superconducting leads can produce non-monotonic switching currents [32, 56] together with closure and reopening of the induced superconducting gap. In this context, it is crucial to understand the mechanisms underlying phase shifts in planar JJs in an in-plane magnetic field, to fully harness their properties in quantum computation and superconducting electronics applications.
In this work, we present a comprehensive investigation of planar SQUIDs in in-plane magnetic fields. An advanced device geometry allowed simultaneous measurements of the Andreev bound state (ABS) spectrum of a planar JJ and its current-phase relation (CPR), including anomalous phase shifts relative to an absolute phase reference. The role of orbital effects was studied by measuring several devices with varying size of the superconducting leads. For small in-plane magnetic fields oriented perpendicular to the current flow in the JJ, that is along the direction of the Rashba spin-orbit field, we observed phase shifts in the CPR which depended linearly on magnetic field and varied strongly with gate voltage, similar to Ref. [41]. For simplicity, we define this as a Type A phase shift. Spectroscopic measurements demonstrated that Type A phase shifts in the CPR were highly correlated with phase shifts of ballistic ABSs in the JJs, but were found to be independent on the size of the superconducting contacts. Upon further increase in magnetic field, we observed a rapid increase of the anomalous phase shift, which did not depend on gate voltage but was instead strongly correlated with the length of the superconducting contacts, indicating an orbital origin. We define this as a Type B phase shift. Strikingly, Type B phase shifts were accompanied by both a local minimum in the amplitude of the CPR and a closure and reopening of the superconducting gap, which might resemble a topological transition. We discuss similarities and differences of our observations with respect to previous work. Our results establish a new baseline understanding of InAs/Al JJs subject to in-plane magnetic fields, and guide towards a more complete understanding of anomalous phase shifts and topological transitions in planar JJs.
## Results and Discussion
Experiments were performed on six devices. Figure 1(a) shows a false-colored scanning electron micrograph of Device 1, the principal device under study, which consisted of a planar SQUID fabricated in a heterostructure of InAs (pink) and epitaxial Al (blue) [50, 51]. The device was covered by a HfO\({}_{2}\) dielectric layer, onto which Au gate electrodes (yellow) were deposited. The superconducting loop, defined in the epitaxial Al, contained a superconductor-normal semiconductor-superconductor (SNS) JJ and a narrow Al constriction. The SNS junction had length \(L=80\) nm, width \(W=2.5\)\(\mu\)m and Al leads of length \(L_{\rm SC}=250\) nm. The constriction had width \(W_{\rm cons.}=130\) nm, chosen to limit the switching current of the planar SQUID, while still being much larger than that of the SNS junction. This asymmetric configuration resulted in a phase drop across the SNS junction of \(\varphi\approx 2\pi(\Phi/\Phi_{0})\), where a flux \(\Phi=AB_{\perp}\) threaded the area \(A=10.2\) (\(\mu\)m)\({}^{2}\) enclosed by the SQUID loop (\(\Phi_{0}=h/2\)e is the superconducting flux quantum). Differently from previous work [32, 34, 41, 53], where two InAs JJs were used, the Al constriction cannot introduce anomalous phase shifts in an in-plane magnetic field due to the absence of spin-orbit and orbital effects. A superconducting probe was integrated close to one end of the SNS junction, comprising a contact of epitaxial Al separated from the SNS junction by a tunnel barrier defined in the InAs. The transparency of the tunnel barrier was controlled by the gate voltages \(V_{\rm T,L}\) and \(V_{\rm T,R}\), applied to the left and right tunnel gates respectively. The carrier density in the SNS junction was controlled via a top-gate voltage \(V_{\rm TG}\). An additional gate was kept at \(V_{\rm Probe}=0\) throughout. Devices 2 to 5 were similar to Device 1 except for \(L_{\rm SC}\), resulting in different orbital coupling to in-plane magnetic fields [see Fig. 1(b)]. Each measurement presented here was acquired in parallel with measurements of a Reference Device fabricated on the same chip, which consisted of a SQUID with two Al constrictions of different widths [see Fig. 1(c)]. Parallel conduction in the InAs surrounding Reference Devices was prevented by setting a global gate to \(V_{\rm Global}=-1.5\) V.
Switching currents \(I\) were measured using fast current ramps and voltage triggers. A ramped current \(I_{\rm DC}\) was injected into the SQUID loop while monitoring the voltage
Figure 1: Device under study and current-biased measurements in an in-plane magnetic field \(B_{\parallel}\). (a) False-colored scanning electron micrograph (SEM) of Device 1, the planar superconducting quantum interference device (SQUID), consisting of InAs (pink) and Al (blue). Exposed InAs regions were controlled via electrostatic gates (yellow). (b) Schematic zoom-in of the Josephson junction region (top), with junction length \(L=80\) nm and superconducting lead length \(L_{\mathrm{SC}}=250\) nm indicated. The purple dashed line indicates the position of a schematic cross-section (bottom). An in-plane magnetic field \(B_{\parallel}\) generates a flux \(\Phi_{\parallel}\) between the superconducting leads and the proximitized two-dimensional electron gas (2DEG), with area \(A_{\parallel}=L_{\mathrm{SC}}d\). (c) False-colored SEM of the Reference Device, prior to gate deposition, consisting of two Al constrictions embedded in a superconducting loop. A global gate \(V_{\mathrm{Global}}\) is indicated schematically (yellow). (d) Switching current \(I\) of Device 1 as a function of perpendicular magnetic field \(B_{\perp}\) (blue), at a top-gate voltage \(V_{\mathrm{TG}}=0\) and \(B_{\parallel}=0.1\) T, after removing a background of 37 \(\mu\)A corresponding to the Al constriction. Switching current of the Reference Device \(I_{\mathrm{ref.}}\) (grey) at the same \(B_{\parallel}\), after subtracting the average \(\langle I_{\mathrm{ref.}}\rangle\). The zero-current position for Device 1 (Reference Device) is indicated by the circle (triangle). (e) Averaged half-amplitude of a SQUID oscillation \(\langle\Delta I/2\rangle\) as a function of in-plane magnetic field \(B_{\parallel}\), for different top gate voltages \(V_{\mathrm{TG}}\) (colors). A minimum in \(\langle\Delta I/2\rangle\) occurred at \(B_{\parallel}=B_{\parallel}^{\phi}\) (turquoise arrows). (f) Shift in perpendicular magnetic field \(B_{0}\) of Device 1 (circles) and Reference Device (triangles), as a function of \(B_{\parallel}\). Deviation of Device 1 from the Reference Device is highlighted in orange for \(|B_{\parallel}|\lesssim 0.4\) T and green for \(|B_{\parallel}|\gtrsim 0.4\) T. (g) Perpendicular field shift \(\Delta B_{0}\) for small \(B_{\parallel}\) for each \(V_{\mathrm{TG}}\) (circles), with a linear fit (lines) of gradient \(\beta\). Data is plotted relative to \(V_{\mathrm{TG}}=-1.6\) V. (h) Perpendicular field shift \(\Delta B_{0}\) for in-plane fields \(B_{\mathrm{t}}\) applied along the transverse direction.
across the device with an oscilloscope. The switching current was defined as the value of \(I_{\rm DC}\) at which \(V_{2}\) exceeded a threshold. Particular care was taken to inject the current \(I_{\rm DC}\) by symmetrically biasing the measurement circuit, to prevent significant voltage build-up between SQUID and gates. Each CPR data point shown here was obtained by averaging over 32 data points measured with \(I_{\rm DC}>0\) and 32 with \(I_{\rm DC}<0\). This procedure allowed us to improve the experimental accuracy, limit the effect of the broad switching current distributions typical of planar devices [57] and cancel trivial phase shifts originating from the kinetic inductance of the loop [58]. The CPR of the SNS junction was obtained by subtracting the switching current of the Al constriction \(I_{\rm Al}\) from that of the SQUID loop, which had a value between 30 and 45 \(\mu\)A for all devices. Tunneling conductance measurements were performed by low-frequency lock-in techniques. A voltage bias \(V_{\rm SD}+V_{\rm AC}\) was sourced at the tunneling probe and the resulting AC current \(I_{1}\) and voltage \(V_{1}\) gave the differential conductance \(G\equiv I_{1}/V_{1}\). Global magnetic fields were applied via a three-axis vector magnet, nominally along the directions \(B_{\perp}\), \(B_{\parallel}\) and \(B_{\rm t}\) as indicated in Fig. 1(a). Further details on electronic measurements and on the procedures used to accurately align the chip to the external magnetic field are presented in the Supporting Information.
Figure 1(d) shows the CPR of Device 1 at \(V_{\rm TG}=0\) (blue line, left axis) and Reference Device (gray line, right axis) at \(B_{\parallel}=0.1\) T. We highlight the maximum switching current \(\Delta I/2\) and a \(B_{\perp}\)-field shift \(B_{0}\), which was measured where the CPR crossed zero with positive slope (circle and triangle for Device 1 and Reference Device, respectively). Figures 1(e) and (f) show \(\Delta I/2\) and \(B_{0}\), respectively, as a function of \(B_{\parallel}\) and for various values of \(V_{\rm TG}\). Black triangles in Fig. 1(f) represent magnetic field shifts measured in the Reference Device. In Fig. 1(e) we plot \(\langle\Delta I/2\rangle\), that is the maximum supercurrent \(\Delta I/2\) averaged over positive and negative \(I_{\rm DC}\). We observe a non-monotonic dependence of \(\langle\Delta I/2\rangle\) as a function of \(B_{\parallel}\), with minima at \(B_{\parallel}=\pm|B_{\parallel}^{\Phi}|=\pm 0.6\) T (see turquoise arrow). The magnetic field shift \(B_{0}\) in Fig. 1(f) shows two distinctive trends. For \(|B_{\parallel}|\lesssim 0.4\) T, \(B_{0}\) shows a systematic deviation with respect to the Reference Device (Type A shift, orange shaded area). Type A
shifts were larger for \(V_{\rm TG}=0\) (purple) than for \(V_{\rm TG}=-1.6\) V (red). For \(|B_{\parallel}|\gtrsim 0.4\) T we observe a more pronounced shift (Type B shift, green shading), without any measurable gate voltage dependence. Notably, at \(B_{\parallel}=\pm B_{\parallel}^{\Phi}\), where the supercurrent was at a minimum, the shift was approximately half a SQUID period, corresponding to a phase shift of \(\sim\pm\pi\). At \(B_{\parallel}=0.9\) T, the magnetic field shift accumulated in Device 1 exceeded one SQUID period. Finally, we note a weak "S"-shaped dependence of \(B_{0}\), both for Device 1 and the Reference Device, which persisted after accurate alignment of the external magnetic field (see Supporting Information). We speculate that the residual trend in \(B_{0}\) originated from flux focusing[59] or a non-linearity of the vector magnet. Figure 1(g) shows \(\Delta B_{0}\), that is \(B_{0}\) as in Fig. 1(f) after subtraction of the data at \(V_{\rm TG}=-1.6\) V, which is the most negative top-gate voltage and follows the trend of the Reference Device for \(|B_{\parallel}|\leq 0.4\) T. At each gate voltage, the field shift (circles) was approximately linear in \(B_{\parallel}\), as highlighted by the linear fits (solid lines). The slope \(\beta\) extracted from the linear fits increased for more positive \(V_{\rm TG}\). Remarkably, no significant phase shift of either Type A or B was observed for in-plane fields \(B_{\rm t}\) applied along the transverse direction, as shown in Fig. 1(h) for Type A shifts (see Supporting Information for further details). The lack of Type A shifts as a function of \(B_{\rm t}\) implies a direction-dependent coupling to the external field, with a coupling strength indicated by \(\beta\).
We now present CPR data obtained from Devices 2, 3 and 4, where \(L_{\rm SC}\) was 400, 350 and 180 nm, respectively. Switching currents \(\Delta I/2\) are shown in Figs. 2(a, c, e) for Devices 2-4 respectively, with field shifts \(B_{0}\) in Figs. 2(b, d, f) for each device (colored markers) alongside those of a Reference Device measured in parallel (black triangles). Devices 2, 3 and 4 showed a qualitatively similar behavior to Device 1, despite having \(B_{\parallel}^{\Phi}=0.4\) T, \(B_{\parallel}^{\Phi}=0.4\) T and \(B_{\parallel}^{\Phi}=0.8\) T, respectively. We repeated the analysis on Type A phase shifts presented in Fig. 1(g) on the data of Fig. 2(b, d, f), and show the extracted \(\beta\) in Fig. 2(g) [see Supporting Information for more details]. As each device operated in a different range of \(V_{\rm TG}\), we compare them by plotting \(\beta\) as a function of \(\Delta V_{\rm TG}\), the top-gate voltage relative to the most
Figure 2: Switching current and perpendicular magnetic field shift for devices with varying \(L_{\rm SC}\). (a) Average oscillation amplitude \(\langle\Delta I/2\rangle\) of Device 2: a planar superconducting quantum interference device (SQUID) with a superconducting lead length of \(L_{\rm SC}=400\) nm, as a function of in-plane magnetic field \(B_{\parallel}\) for different top-gate voltages \(V_{\rm TG}\) (colors). Minima in the oscillation amplitude, \(B_{\parallel}^{\Phi}\), are marked with the blue arrows. (b) Shift in perpendicular magnetic field, \(B_{0}\), of Device 2 (circles) and the Reference Device (triangles), as a function of \(B_{\parallel}\). Deviation of Device 2 from the Reference Device is highlighted in orange for small \(B_{\parallel}\) and green for large \(B_{\parallel}\). (c, d) and (e, f) are the same as (a, b) for Devices 3 and 4, respectively. All devices are identical in design other than the length of the superconducting contacts, which is \(L_{\rm SC}=350\) nm for Device 3 and \(L_{\rm SC}=180\) nm for Device 4. (g) Gradient \(\beta\) of Type A phase shifts at small \(B_{\parallel}\), for Devices 1β4 (circles, squares, triangles and diamonds respectively), plotted against the change in top-gate voltage \(\Delta V_{\rm TG}\) with respect to the minimum value. (h) In-plane magnetic field where the supercurrent is minimum, \(B_{\parallel}^{\Phi}\), as a function of inverse superconducting lead length \(1/L_{\rm SC}\) (blue circles), with a linear fit \(B_{\parallel}^{\Phi}=(\,\Phi_{0}/d)/L_{\rm SC}\) (orange line) giving \(d=15\) nm.
negative value at which oscillations were observed. Despite some scattering for small \(\Delta V_{\rm TG}\), where data analysis is intricate due to the small switching current, we note that \(\beta\) follows a similar trend for all devices. In particular, \(\beta\) increases with \(\Delta V_{\rm TG}\) and does not depend on \(L_{\rm SC}\). Figure 2(h) shows \(B_{\parallel}^{\Phi}\) as a function of the inverse superconducting lead length \(1/L_{\rm SC}\). The data (blue circles) followed a linear trend, fitted by \(B_{\parallel}^{\Phi}=(\Phi_{0}/d)/L_{\rm SC}\) (orange line) describing one flux quantum threading an area \(L_{\rm SC}d\). The result of \(d=15\) nm agrees with the separation of Al and InAs layers, indicating a crucial role of orbital effects in inducing Type B phase shifts.
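The scaling in Fig. 2(h) can be checked directly from the values quoted above. The sketch below (Python) fits \(B_{\parallel}^{\Phi}=(\Phi_{0}/d)/L_{\rm SC}\) to the rounded device values given in the text; it lands at \(d\approx 14\) nm, close to the \(d=15\) nm obtained from the published fit:

```python
import numpy as np

# Rough reproduction of the Fig. 2(h) analysis, B_par^Phi = (Phi_0 / d) / L_SC,
# using the rounded values of L_SC and B_par^Phi quoted in the text for
# Devices 1-4. The published fit gives d = 15 nm.
PHI_0 = 2.067834e-15                            # [Wb]
l_sc = np.array([250, 400, 350, 180]) * 1e-9    # superconducting lead lengths [m]
b_phi = np.array([0.6, 0.4, 0.4, 0.8])          # supercurrent-minimum fields [T]

x = 1.0 / l_sc                                  # 1/L_SC [1/m]
slope = (x @ b_phi) / (x @ x)                   # least squares through the origin
d = PHI_0 / slope
print(f"d ~ {d * 1e9:.1f} nm")                  # ~14 nm
```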
We now complement CPR measurements with spectroscopic data obtained on Device 1. Figure 3 presents a series of differential conductance maps as a function of \(B_{\perp}\) and \(V_{\rm SD}\), for increasing values of \(B_{\parallel}\). All data were obtained at \(V_{\rm TG}=-1\) V (data at more values of \(V_{\rm TG}\) are reported in the Supporting Information). As the tunneling probe was constituted by a superconducting lead, the differential conductance \(G\) at \(B_{\parallel}=0\) indicates the density of states in the junction up to a bias shift of \(\pm\)e\(\Delta\). Further conductance peaks at zero and high bias are attributed to a residual supercurrent and multiple Andreev reflection through the tunneling probe, respectively. For \(B_{\parallel}\leq 0.2\) T, the conductance demonstrates a conventional spectrum containing multiple Andreev bound states, some of which have transmission approaching unity and an induced superconducting gap of approximately \(180~{}\mu\rm eV\). For \(B_{\parallel}\geq 0.2\) T, a finite density of states at the Fermi level was induced in the lead facing the tunneling probe, resulting in a direct mapping of the density of states in the junction.[59] For \(B_{\parallel}=0.4\) T, phase-dependent conductance features approached zero energy, resulting in a significant decrease of the superconducting gap [Fig. 3(c)]. For \(B_{\parallel}=B_{\parallel}^{\Phi}=0.6\) T [Fig. 3(d)], conductance features oscillated close to \(V_{\rm SD}=0\) with no clear separation between states at positive and negative bias. As \(B_{\parallel}\) was further increased, a gap reopened in the Andreev bound state spectrum, with discrete states around zero energy. Finally, the gap closed for \(B_{\parallel}\geq 1\) T. Conductance features close to \(V_{\rm SD}=0\) in Fig. 3(e) were reminiscent of zero-bias peaks reported for similar devices at high in-plane magnetic fields and understood in terms of topological states.[32, 33]
Figure 3: Tunneling spectroscopy of Andreev bound states as a function of in-plane magnetic field \(B_{\parallel}\). (a-f) Differential conductance \(G\) through the tunneling probe, as a function of source-drain bias voltage \(V_{\rm SD}\) and perpendicular magnetic field \(B_{\perp}\), for increasing values of \(B_{\parallel}\). Measurements were taken at a top-gate voltage of \(V_{\rm TG}=-1\) V, with tunnel-barrier voltages \((V_{\rm T,L},V_{\rm T,R})=(-1.495,-1.65)\) V.
However, zero-bias features of Fig. 3(d) were not robust to small changes in the top-gate voltage \(V_{\rm TG}\) or tunnel gate voltage \(V_{\rm T}\) (see Supporting Information).
Figure 4 compares spectroscopic maps obtained at \(B_{\parallel}=0.2\) T (a-d) and 0.4 T (e-h), for multiple values of \(V_{\rm TG}\). The value of \(B_{\perp}\) at which the ABS energy was closest to the gap was found for each value of \(V_{\rm TG}\), as indicated by the blue circles. This was determined as the \(B_{\perp}\) value where the gradient \(\partial G/\partial B_{\perp}\) was zero, at a fixed bias \(V_{\rm SD}\) and averaged over multiple periods. Blue dashed lines indicate the minimum energy position at \(V_{\rm TG}=-1.4\) V, which is defined as \(B_{\perp}=0\) in Fig. 4(d). For both \(B_{\parallel}=0.2\) T and 0.4 T, a clear deviation of the ABS spectrum took place as a function of \(V_{\rm TG}\). The shift in perpendicular field \(\Delta B_{0}\) measured from the ABS spectrum is summarized in Fig. 4(i) as a function of \(V_{\rm TG}\) for \(B_{\parallel}=0.2\) T (blue) and \(B_{\parallel}=0.4\) T (orange). The Type A shift \(\Delta B_{0}\) obtained from the CPR is plotted on the same axis [squares, dashed lines] and shows remarkable agreement.
After demonstrating the occurrence of two types of anomalous phase shifts taking place in hybrid SQUIDs in in-plane magnetic fields, we now discuss their origin. Type A phase shifts, which were approximately linear in \(B_{\parallel}\) and depended on \(V_{\rm TG}\) [Fig. 1(g)], are associated with spin-orbit-induced anomalous phase shifts,[43, 44, 45, 46, 47] as recently reported in similar devices.[41] As phase shifts were much more pronounced for in-plane fields aligned perpendicular to the current flow direction (\(B_{\parallel}\)) than parallel to it (\(B_{\rm t}\)) [Fig. 1(h)], and were stronger for higher electron density (more positive \(V_{\rm TG}\)[60]), we conclude that spin-orbit interaction in our samples is predominantly of Rashba type.
Type A phase shifts reported here, which are of similar magnitude to those in Ref. [41], are considerably larger than theoretical predictions.[44] Reference [41] proposed that the observed phase offsets could be explained by the contribution of several low-transmission modes. However, here we show that Type A shifts obtained from the CPR matched those from tunneling spectroscopy [Fig. 4], where conductance features at both high and low bias showed a phase shift. Since conductance features at low bias correspond to ABSs with high transmission, we conclude that highly transmissive modes participate in the overall phase shift despite their
Figure 4: Top-gate dependence of the energy minimum at a finite in-plane magnetic field \(B_{\parallel}\). (a-d) Differential conductance \(G\) as a function of bias \(V_{\rm SD}\) and perpendicular magnetic field \(B_{\perp}\), at an in-plane magnetic field of \(B_{\parallel}=0.2\) T. Spectroscopy was performed at a top-gate voltage of \(V_{\rm TG}=\{0,-0.8,-1,-1.4\}\) V, respectively. The blue dashed line indicates the energy minimum at \(V_{\rm TG}=-1.4\) V. Blue markers show the shift of the energy minimum as a function of \(V_{\rm TG}\) relative to \(V_{\rm TG}=-1.4\) V. (e-h) Bias-dependent spectroscopy as in (a-d) at an in-plane magnetic field of \(B_{\parallel}=0.4\) T. (i) Shift in perpendicular magnetic field \(\Delta B_{0}\) relative to \(V_{\rm TG}=-1.4\) V, at an in-plane magnetic field of \(B_{\parallel}=0.2\) T (blue) and \(B_{\parallel}=0.4\) T (orange), obtained from tunneling spectroscopy (circles, solid lines) and current-phase relation (CPR) measurements (squares, dashed lines). The phase shift \(\varphi_{0}/2\pi\equiv\Delta B_{0}/B_{\rm Period}\) is plotted on the right axis.
large Fermi velocity. While this result does not resolve the discrepancy between theoretical predictions and experiments,[41] it rules out diffusive modes with small Fermi velocities as the dominant cause of Type A phase shifts.
Type B phase shifts were concomitant with a reentrant supercurrent and a closure and reopening of the superconducting gap, independent of top-gate voltage \(V_{\rm TG}\). At \(B_{\parallel}=\pm B_{\parallel}^{\Phi}\), where the supercurrent was at a minimum and the proximitized superconducting gap was suppressed, the phase shift was \(\varphi_{0}\approx\pm\pi\). For \(|B_{\parallel}|>B_{\parallel}^{\Phi}\), a gap reopened in the ABS spectrum and the phase shift increased to above \(2\pi\). A phase shift occurring with a supercurrent minimum and gap closure indicates a \(0-\pi\) transition at \(B_{\parallel}=B_{\parallel}^{\Phi}\), where the minimum ABS energy moves from \(\varphi\approx 0\) to \(\varphi\approx\pi\) due to coupling of the magnetic and superconducting orders by Zeeman interaction.[46, 48, 49] All experimental signatures of Type B shifts were shown to depend on the length \(L_{\rm SC}\), consistent with a flux quantum threading an area \(L_{\rm SC}d\) underneath the superconducting leads. The experimentally obtained value of \(d=15\) nm agrees with the separation between the Al and InAs layers (13.4 nm), up to some flux penetration into each layer. We therefore conclude that orbital effects strongly contributed to inducing Type B phase shifts. Type B shifts were observed for in-plane fields \(B_{\parallel}<1\) T, much lower than the values \(B_{0-\pi}\gtrsim 9\) T expected for InAs/Al heterostructures.[34] We explain this by orbital effects, which were responsible for the induced gap reduction, forcing ABSs to move closer in energy. This enabled ABSs to cross even with small Zeeman splitting. Previous work reported similar phase shifts,[34] where a \(\pi\) jump in the junction phase was accompanied by a minimum in the switching current. However, phase shifts depended on the top-gate voltage, unlike the Type B shifts reported here. This shows that orbital effects alone are not sufficient to explain the results of Ref. [34].
## 4 Conclusions
In conclusion, measurements of the current phase relation and Andreev bound state spectrum in hybrid quantum interference devices showed phase shifts with two distinct characters, referred to as Types A and B. Type A phase shifts are attributed to coupling of the external magnetic field with an internal Rashba spin-orbit field, resulting in a \(\varphi_{0}\)-junction. Highly transmissive bound states were shown to make a significant contribution to the phase shift, which was much larger than expected for a single ballistic channel. The discrepancy might be due to the presence of many transverse modes, which future studies could investigate by varying the width and length of the Josephson junction. Type B shifts were consistent with a \(0-\pi\) transition, where orbital effects in the superconducting leads played a critical role. This suggests that the geometry of the superconducting leads, and their impact on orbital effects, is a key ingredient for realizing \(\pi\)-junctions for superconducting electronics[61, 62] or in interpreting signatures of topological superconductivity.[30]
## Methods
Devices were fabricated from a hybrid superconducting-semiconducting heterostructure grown by molecular beam epitaxy on a semi-insulating InP (001) substrate. The heterostructure consisted of a step-graded InAlAs buffer, onto which an In\({}_{0.75}\)Ga\({}_{0.25}\)As/InAs/In\({}_{0.75}\)Ga\({}_{0.25}\)As quantum well was grown with a termination of two GaAs monolayers. The step-graded metamorphic buffer compensated the lattice mismatch between the InP and InAs, while the GaAs capping layers provided a barrier for In diffusion into the superconducting layer. The 8 nm InAs layer hosted a two-dimensional electron gas (2DEG), buried 13.4 nm below the semiconductor surface, as measured by transmission electron microscopy.[51] A 15 nm layer of Al was deposited onto the semiconductor surface, _in situ_ without breaking vacuum in the growth chamber. Measurements of a gated Hall bar in this material showed a peak mobility of 18000 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) at an electron sheet density of \(8\cdot 10^{11}\) cm\({}^{-2}\). This gave an electron mean
free path of \(l_{\rm e}\gtrsim 260\) nm, implying that all Josephson junctions measured in this work were in the ballistic regime along the length \(L\) of the junction.
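The mean-free-path estimate follows from the quoted mobility and density. A minimal Python sketch, assuming a spin-degenerate 2DEG so that \(k_{\rm F}=\sqrt{2\pi n}\) and \(l_{\rm e}=\hbar k_{\rm F}\mu/e\), reproduces the figure:

```python
import math

# Elastic mean free path from the quoted peak mobility and sheet density,
# assuming a spin-degenerate 2DEG: k_F = sqrt(2*pi*n), l_e = hbar * k_F * mu / e.
HBAR = 1.054572e-34          # [J s]
E_CHARGE = 1.602177e-19      # [C]

n = 8e11 * 1e4               # 8e11 cm^-2  -> m^-2
mu = 18_000 * 1e-4           # 18,000 cm^2/(V s) -> m^2/(V s)

k_f = math.sqrt(2 * math.pi * n)
l_e = HBAR * k_f * mu / E_CHARGE
print(f"l_e ~ {l_e * 1e9:.0f} nm")   # ~266 nm, consistent with l_e >~ 260 nm
```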
The first step in patterning superconducting quantum interference devices (SQUIDs) was to isolate each device from its neighbors by etching large mesa structures. This was done by selectively removing the Al layer with Transene type D, followed by a 380 nm chemical etch into the III-V heterostructure using a \(220:55:3:3\) solution of H\({}_{2}\)O : C\({}_{6}\)H\({}_{8}\)O\({}_{7}:\) H\({}_{3}\)PO\({}_{4}:\) H\({}_{2}\)O\({}_{2}\). The second step was to pattern the Al device features, by wet etching in Transene type D at 50\({}^{\circ}\)C for 4 s. A dielectric layer of Al\({}_{2}\)O\({}_{3}\) (3 nm) and HfO\({}_{2}\) (15 nm) was deposited across the chip by atomic layer deposition, then gate electrodes were defined on top of the dielectric layer by evaporation and lift-off. Fine gate features were defined in a first step consisting of 5 nm Ti and 20 nm Au; a second deposition of Ti (10 nm) and Al (420 nm) connected the gates on top of the mesa structures to bonding pads, which were defined in the same step.
Measurements were performed in a dilution refrigerator with a base temperature at the mixing chamber below 10 mK. Magnetic fields were applied using a three-axis vector magnet, nominally oriented perpendicular to the device (\(B_{\perp}\)) and in the plane of the device (\(B_{\parallel}\), \(B_{\rm t}\)). Magnetic fields applied in the direction parallel to the Rashba spin-orbit field, or equivalently the direction perpendicular to the current flow, are denoted by \(B_{\parallel}\). The in-plane field was rotated by 90 degrees to give \(B_{\rm t}\), perpendicular to the spin-orbit field.
Measurements of the differential conductance were performed with standard lock-in amplifier techniques. An AC voltage \(V_{\rm AC}=3\)\(\mu\)V was applied to the contact of the superconducting probe with frequency 311 Hz, in addition to a DC source-drain voltage \(V_{\rm SD}\). The AC current \(I_{1}\) and DC current \(I_{\rm SD}\) flowing through the probe to ground was measured via a current-to-voltage (I-V) converter. The differential voltage across the tunnel barrier \(V_{1}\) was measured to give the differential conductance \(G\equiv I_{1}/V_{1}\). The transparency of the tunnel barrier was controlled with the gate voltages (\(V_{\rm T,L},\ V_{\rm T,R}\)), which are denoted by \(V_{\rm T}\equiv V_{\rm T,L}=V_{\rm T,R}\) (symmetric configuration). Measurements were performed in the tunneling regime, where \(G\ll G_{0}=2{\rm e}^{2}/h\). A constant bias offset of 43 \(\mu\)V was subtracted from
all datasets, due to a DC offset at the I-V converter. Since the tunnel probe was superconducting, the measured conductance was a convolution of the density of states (DoS) in the probe and the superconductor-normal-superconductor (SNS) junction: \(G=G_{\mathrm{Probe}}*G_{\mathrm{SNS}}\). This amounted to a shift in \(G_{\mathrm{SNS}}\) features by \(\pm\mathrm{e}\Delta^{*}\). For elevated in-plane magnetic fields, the superconducting gap in the tunnel probe was softened, leading to a finite DoS at low energy. This enabled measurements of the DoS in the SNS junction using an effectively normal probe, such that the measured conductance was directly proportional to the DoS in the SNS junction [58, 59]. In addition to conductance peaks at high source-drain bias corresponding to Andreev bound states (ABSs), we can attribute some features in the conductance spectrum to multiple Andreev reflections or to disorder in the tunnel barrier and sub-gap states in the DoS of the tunnel probe [63]. For tunneling spectroscopy measurements at an in-plane magnetic field, a first calibration measurement was performed at each field-value by sweeping the perpendicular field across a range \(>\pm 3\)\(\mathrm{mT}\). The position of zero perpendicular field was determined from spectroscopic features, including the size of the superconducting gap, the shape and peak conductance of high-bias features, and the sharpness of spectral lines. Then, each spectroscopic map was taken across \(>5\) oscillation periods such that spectral features were consistent over the full range.
Current-biased measurements were performed on the same device. Both contacts at the superconducting probe were floated, such that no current flowed through the probe. The tunnel barrier gate voltages, which also covered large areas of the superconducting loop, were set to \(V_{\mathrm{T}}=-1.5\) V to deplete the InAs surrounding the Al features, thereby preventing parallel conduction and forming a well-defined current path. A DC current was applied by symmetrically biasing the SQUID loop, such that the device potential was not raised with respect to the ground. Hence, the nominal voltage applied to gate electrodes was the same as the potential difference between gates and the device. A ramped current signal was applied from a waveform generator at a frequency of 133 Hz. The voltage drop \(V_{2}\) across the loop was measured with an oscilloscope. The switching current, the current at which the SQUID
transitioned from the superconducting to resistive state, was recorded when \(V_{2}\) exceeded a voltage threshold of less than 15 % of the maximum voltage in the resistive state. This measurement was repeated 32 times, and the resulting switching current values were averaged to account for stochastic fluctuations in the switching current.[57] Values of switching current reported in this work were averaged between values obtained for positive and negative bias currents \(I_{\mathrm{DC}}\).
## Associated Content
Supporting Information is available at [URL].
It includes: details on materials and device fabrication; additional details on Reference Device measurements; extraction of the current phase relation and phase shift from switching current measurements; current phase relation measurements in an in-plane magnetic field transverse to the junction axis, along \(B_{\mathrm{t}}\); discussion of the origin of zero bias peaks in tunneling spectroscopy; additional tunneling spectroscopy measurements as a function of transverse in-plane field \(B_{\mathrm{t}}\), at different top-gate voltages \(V_{\mathrm{TG}}\) and in an additional device with large superconducting lead length \(L_{\mathrm{SC}}\); additional measurements of the Type B phase shift in different devices; and a discussion of the kinetic inductance of the superconducting loop. Supporting Information contains additional references.[64, 65, 66, 67]
## 5 Acknowledgments
We are grateful to C. Bruder, W. Riess and H. Riel for helpful discussions. We thank the Cleanroom Operations Team of the Binnig and Rohrer Nanotechnology Center (BRNC) for their help and support. F. N. acknowledges support from the European Research Council (grant number 804273) and the Swiss National Science Foundation (grant number 200021_201082).
## Data Availability
The data that support the findings of this study are available upon reasonable request from the corresponding author.
|
2307.16341 | Infinitesimally Moebius bendable hypersurfaces | Li, Ma and Wang have provided in [\emph{Deformations of hypersurfaces
preserving the M\"obius metric and a reduction theorem}, Adv. Math. 256 (2014),
156--205] a partial classification of the so-called Moebius deformable
hypersurfaces, that is, the umbilic-free Euclidean hypersurfaces $f\colon
M^n\to \mathbb{R}^{n+1}$ that admit non-trivial deformations preserving the
Moebius metric. For $n\geq 5$, the classification was completed by the authors
in \cite{JT2}. In this article we obtain an infinitesimal version of that
classification. Namely, we introduce the notion of an infinitesimal Moebius
variation of an umbilic-free immersion $f\colon M^n\to \mathbb{R}^m$ into
Euclidean space as a one-parameter family of immersions $f_t\colon M^n\to
\mathbb{R}^m$, with $t\in (-\epsilon, \epsilon)$ and $f_0=f$, such that the
Moebius metrics determined by $f_t$ coincide up to the first order. Then we
characterize isometric immersions $f\colon M^n\to \mathbb{R}^m$ of arbitrary
codimension that admit a non-trivial infinitesimal Moebius variation among
those that admit a non-trivial conformal infinitesimal variation, and use such
characterization to classify the umbilic-free Euclidean hypersurfaces of
dimension $n\geq 5$ that admit non-trivial infinitesimal Moebius variations. | M. I. Jimenez, R. Tojeiro | 2023-07-30T23:33:12Z | http://arxiv.org/abs/2307.16341v1 | # Infinitesimally Moebius bendable hypersurfaces
###### Abstract
Li, Ma and Wang have provided in [13] a partial classification of the so-called Moebius deformable hypersurfaces, that is, the umbilic-free Euclidean hypersurfaces \(f\colon M^{n}\to\mathbb{R}^{n+1}\) that admit non-trivial deformations preserving the Moebius metric. For \(n\geq 5\), the classification was completed by the authors in [12]. In this article we obtain an infinitesimal version of that classification. Namely, we introduce the notion of an infinitesimal Moebius variation of an umbilic-free immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) into Euclidean space as a one-parameter family of immersions \(f_{t}\colon M^{n}\to\mathbb{R}^{m}\), with \(t\in(-\epsilon,\epsilon)\) and \(f_{0}=f\), such that the Moebius metrics determined by \(f_{t}\) coincide up to the first order. Then we characterize isometric immersions \(f\colon M^{n}\to\mathbb{R}^{m}\) of arbitrary codimension that admit a non-trivial infinitesimal Moebius variation among those that admit a non-trivial conformal infinitesimal variation, and use such characterization to classify the umbilic-free Euclidean hypersurfaces of dimension \(n\geq 5\) that admit non-trivial infinitesimal Moebius variations.
M. I. Jimenez and R. Tojeiro\({}^{*}\)

Footnote 1: Corresponding author

Footnote 2: This research was initiated while the first author was supported by CAPES-PNPD, Grant 88887.469213/2019-00 and was finished under the support of Fapesp Grant 2022/05321-9. The second author was partially supported by Fapesp grant 2022/16097-2 and CNPq grant 307016/2021-8.
Data availability statement: Not applicable.
_2020 Mathematics Subject Classification:_ 53B25, 53C40.
_Key words and phrases: Moebius metric, Moebius deformable hypersurface, infinitesimally Moebius bendable hypersurface, infinitesimal Moebius variation, conformal infinitesimal variation, Moebius bending, isothermic surface._
## 1 Introduction
Given an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) of a Riemannian manifold \((M^{n},g)\) into Euclidean space with normal bundle-valued second fundamental form \(\alpha\in\Gamma(\operatorname{Hom}(TM,TM;N_{f}M))\), let \(\phi\in C^{\infty}(M)\) be defined by
\[\phi^{2}=\frac{n}{n-1}(\|\alpha\|^{2}-n\|\mathcal{H}\|^{2}), \tag{1}\]
where \(\mathcal{H}\) is the mean curvature vector field of \(f\) and \(\|\alpha\|^{2}\in C^{\infty}(M)\) is given at any point \(x\in M^{n}\) by
\[\|\alpha(x)\|^{2}=\sum_{i,j=1}^{n}\|\alpha(x)(X_{i},X_{j})\|^{2},\]
in terms of an orthonormal basis \(\{X_{i}\}_{1\leq i\leq n}\) of \(T_{x}M\). Notice that \(\phi\) vanishes precisely at the umbilical points of \(f\). The metric
\[g^{*}=\phi^{2}g,\]
defined on the open subset of non-umbilical points of \(f\), is a Moebius invariant metric called the _Moebius metric_ determined by \(f\). Namely, if \(\tilde{f}=\tau\circ f\) for some Moebius transformation of \(\mathbb{R}^{m}\), then the Moebius metrics of \(f\) and \(\tilde{f}\) coincide.
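As for the assertion that \(\phi\) vanishes precisely at the umbilical points of \(f\), writing \(\alpha(X,Y)=\langle X,Y\rangle\mathcal{H}+\alpha_{0}(X,Y)\), where \(\alpha_{0}\) denotes the traceless part of \(\alpha\), one can check that

\[\|\alpha\|^{2}=n\|\mathcal{H}\|^{2}+\|\alpha_{0}\|^{2},\ \ \mbox{hence}\ \ \phi^{2}=\frac{n}{n-1}\|\alpha_{0}\|^{2},\]

which vanishes exactly where \(\alpha_{0}=0\), that is, at the umbilical points of \(f\).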
It is a fundamental fact, proved by Wang in [16], that a hypersurface \(f\colon M^{n}\to\mathbb{R}^{n+1}\) is uniquely determined, up to Moebius transformations of the ambient space, by its Moebius metric and its _Moebius shape operator_\(S=\phi^{-1}(A-HI)\), where \(A\) is the shape operator of \(f\) with respect to a unit normal vector field \(N\) and \(H\) is the corresponding mean curvature function. A similar result holds for submanifolds of arbitrary codimension (see [16] and Section 9.8 of [9]).
Li, Ma and Wang have provided in [13] a partial classification of the hypersurfaces \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 4\), that are not determined, up to Moebius
transformations of \(\mathbb{R}^{n+1}\), only by their Moebius metrics, called _Moebius deformable hypersurfaces_. For \(n\geq 5\), the classification of Moebius deformable hypersurfaces was completed by the authors in [12].
Moebius deformable hypersurfaces belong to the more general class of _conformally deformable_ hypersurfaces, that is, the hypersurfaces \(f\colon M^{n}\to\mathbb{R}^{n+1}\) for which \(M^{n}\) admits an immersion \(\tilde{f}\colon M^{n}\to\mathbb{R}^{n+1}\) such that \(f\) and \(\tilde{f}\) induce conformal metrics on \(M^{n}\) and do not differ by a Moebius transformation. The study of conformally deformable hypersurfaces goes back to Cartan [2] (see also [10] and Chapter 17 of [9]).
Our main goal in this article is to classify the _infinitesimally_ Moebius bendable hypersurfaces, that is, the umbilic-free hypersurfaces \(f\colon M^{n}\to\mathbb{R}^{n+1}\) into Euclidean space that admit a one-parameter family of immersions \(f_{t}\colon M^{n}\to\mathbb{R}^{n+1}\), with \(t\in(-\epsilon,\epsilon)\) and \(f_{0}=f\), whose Moebius metrics coincide with that of \(f\)_up to the first order_, in a sense that is made precise below.
Let \(f\colon M^{n}\to\mathbb{R}^{m}\) be an isometric immersion free of umbilical points. We call a smooth map \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) a _Moebius variation_ of \(f\) if \(f_{t}=F(t,\cdot)\), with \(f_{0}=f\), is an immersion that determines the same Moebius metric for any \(t\in(-\epsilon,\epsilon)\). In other words, if \(g_{t}\) is the metric induced by \(f_{t}\), then \(g_{t}^{*}=g_{0}^{*}\) for all \(t\in I\).
Trivial Moebius variations can be produced by composing \(f\) with the elements of a smooth one-parameter family of Moebius transformations of the Euclidean ambient space. Thus, the results in [12] and [13] give a classification of the umbilic-free hypersurfaces \(f\colon M^{n}\to\mathbb{R}^{n+1}\) of dimension \(n\geq 5\) that admit non-trivial Moebius variations.
We are interested in the umbilic-free isometric immersions \(f\colon M^{n}\to\mathbb{R}^{m}\) that satisfy the weaker condition of admitting non-trivial _infinitesimal_ Moebius variations. By an _infinitesimal Moebius variation_ of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) without umbilical points we mean a smooth map \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) such that the maps \(f_{t}=F(t,\cdot)\), with \(f_{0}=f\), are immersions whose corresponding Moebius metrics coincide _up to the first order_. This means that \(\frac{\partial}{\partial t}|_{t=0}g_{t}^{*}=0\), that is,
\[\frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2}\langle f_{t*}X,f_{t*}Y\rangle )=0\]
for all \(X,Y\in\mathfrak{X}(M)\), where \(\phi_{t}^{2}\) is given by (1) for the immersion \(f_{t}\), \(t\in(-\epsilon,\epsilon)\).
Given a smooth variation \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\), one defines its _variational vector field_ by \(\mathcal{T}=F_{*}\partial/\partial t|_{t=0}\)
When the immersions \(f_{t}=F(t,\cdot)\) are the compositions of \(f\) with the elements of a smooth one-parameter family of Moebius transformations of \(\mathbb{R}^{m}\), the variational vector field \(\mathscr{T}\) is the restriction to \(M^{n}\) of a conformal Killing vector field of \(\mathbb{R}^{m}\). Accordingly, an infinitesimal Moebius variation \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) without umbilical points is said to be _trivial_ if the variational vector field \(\mathscr{T}\) associated with \(F\) is the restriction to \(M^{n}\) of a conformal Killing vector field of \(\mathbb{R}^{m}\). We say that \(f\) is _infinitesimally Moebius bendable_ if it admits an infinitesimal Moebius variation that is non-trivial restricted to any open subset of \(M^{n}\). It is _locally infinitesimally Moebius bendable_ if each point \(x\in M^{n}\) has an open neighborhood \(U\) such that \(f|_{U}\) is infinitesimally Moebius bendable.
In order to state our classification of the umbilic-free infinitesimally Moebius bendable Euclidean hypersurfaces of dimension \(n\geq 5\), we need some further definitions.
First, by a _conformally surface-like hypersurface_\(f\colon M^{n}\to\mathbb{R}^{n+1}\) we mean a hypersurface that differs by a Moebius transformation of \(\mathbb{R}^{n+1}\) from either a cylinder or a rotation hypersurface over a surface in \(\mathbb{R}^{3}\), or from a cylinder over a three-dimensional hypersurface of \(\mathbb{R}^{4}\) that is a cone over a surface in \(\mathbb{S}^{3}\). We say, accordingly, that \(f\) is a conformally surface-like hypersurface _determined by a surface_\(h\colon L^{2}\to\mathbb{Q}^{3}_{\epsilon}\), with \(\epsilon=0,-1\) or \(1\), respectively.
Now we recall how a two-parameter family of hyperspheres in \(\mathbb{R}^{n+1}\) is determined by a surface \(s\colon L^{2}\to\mathbb{S}^{n+2}_{1,1}\) into the Lorentzian sphere
\[\mathbb{S}^{n+2}_{1,1}=\{x\in\mathbb{L}^{n+3}\colon\langle x,x\rangle=1\}\]
in the Lorentz space \(\mathbb{L}^{n+3}\).
Let \(f\colon M^{n}\to\mathbb{R}^{n+1}\) be an oriented hypersurface with respect to a unit normal vector field \(N\). Then the family of hyperspheres \(x\in M^{n}\mapsto S(h(x),r(x))\) with radius \(r(x)\) and center \(h(x)=f(x)+r(x)N(x)\) is enveloped by \(f\). If, in particular, \(1/r\) is the mean curvature of \(f\), it is called the _central sphere congruence_ of \(f\).
Let \(\mathbb{V}^{n+2}\) denote the light cone in \(\mathbb{L}^{n+3}\) and let \(\Psi=\Psi_{v,w,C}\colon\mathbb{R}^{n+1}\to\mathbb{L}^{n+3}\) be the isometric embedding onto
\[\mathbb{E}^{n+1}=\mathbb{E}^{n+1}_{w}=\{u\in\mathbb{V}^{n+2}:\langle u,w \rangle=1\}\subset\mathbb{L}^{n+3}\]
given by
\[\Psi(x)=v+Cx-\frac{1}{2}\|x\|^{2}w,\]
in terms of \(w\in\mathbb{V}^{n+2}\), \(v\in\mathbb{E}^{n+1}\) and a linear isometry \(C\colon\mathbb{R}^{n+1}\to\{v,w\}^{\perp}\). Then the congruence of hyperspheres \(x\in M^{n}\mapsto S(h(x),r(x))\) is determined by the map \(S\colon M^{n}\to\mathbb{S}^{n+2}_{1,1}\) defined by
\[S(x)=\frac{1}{r(x)}\Psi(h(x))+\frac{r(x)}{2}w,\]
for \(\Psi(S(h(x),r(x)))=\mathbb{E}^{n+1}\cap S(x)^{\perp}\) for all \(x\in M^{n}\). The map \(S\) has rank \(0<k<n\), that is, it corresponds to a \(k\)-parameter congruence of hyperspheres, if and only if \(\lambda=1/r\) is a principal curvature of \(f\) with constant multiplicity \(n-k\) (see Section 9.3 of [9] for details). In this case, \(S\) gives rise to a map \(s\colon L^{k}\to\mathbb{S}^{n+2}_{1,1}\) such that \(s\circ\pi=S\), where \(\pi\colon M^{n}\to L^{k}\) is the canonical projection onto the quotient space of leaves of \(\ker(A-\lambda I)\).
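Observe that \(S\) does take values in \(\mathbb{S}^{n+2}_{1,1}\): since \(\Psi(h(x))\in\mathbb{E}^{n+1}\subset\mathbb{V}^{n+2}\) and \(w\in\mathbb{V}^{n+2}\), one has \(\langle\Psi(h(x)),\Psi(h(x))\rangle=0=\langle w,w\rangle\) and \(\langle\Psi(h(x)),w\rangle=1\), so that

\[\langle S(x),S(x)\rangle=2\cdot\frac{1}{r(x)}\cdot\frac{r(x)}{2}\,\langle\Psi(h(x)),w\rangle=1\]

for all \(x\in M^{n}\).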
Finally, a surface \(h\colon L^{2}\to\mathbb{Q}^{3}_{\epsilon}\) is said to be a _generalized cone_ over a unit-speed curve \(\gamma\colon I\to\mathbb{Q}^{2}_{c}\), \(c\geq\epsilon\), in an umbilical surface \(\mathbb{Q}^{2}_{c}\subset\mathbb{Q}^{3}_{\epsilon}\), if \(L^{2}=I\times J\) is a product of intervals \(I,J\subset\mathbb{R}\) and \(h(s,t)=\exp_{\gamma(s)}tN(s)\) for all \((s,t)\in I\times J\), where \(\exp\) is the exponential map of \(\mathbb{Q}^{3}_{\epsilon}\) and \(N\) is a unit normal vector field to \(\mathbb{Q}^{2}_{c}\) along \(\gamma\). Notice that \(h\) has \(0\) as one of its principal curvatures, with the \(t\)-coordinate curves as the correspondent curvature lines. Generalized cones without totally geodesic points are precisely the isothermic surfaces that have \(0\) as a simple principal curvature. Recall that a surface \(h\colon L^{2}\to\mathbb{Q}^{3}_{\epsilon}\) is _isothermic_ if each non-umbilic point of \(L^{2}\) has an open neighborhood where one can define isothermic (that is, conformal) coordinates whose coordinate curves are lines of curvature of \(h\).
**Theorem 1**.: _Let \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 5\), be an umbilic-free infinitesimally Moebius bendable hypersurface. Then there exists an open and dense subset \(\mathcal{U}^{*}\) of \(M^{n}\) such that \(f\) is of one of the following types on each connected component \(U\) of \(\mathcal{U}^{*}\):_
1. _a conformally surface-like hypersurface determined by an isothermic surface_ \(h\colon L^{2}\to\mathbb{Q}^{3}_{\epsilon}\)_,_ \(\epsilon\in\{-1,0,1\}\)_._
2. _a hypersurface whose central sphere congruence is determined by a minimal space-like surface_ \(s\colon L^{2}\to\mathbb{S}^{n+2}_{1,1}\)_._
_In particular, \(f\) has a principal curvature with multiplicity \(n-1\) or \(n-2\) at any point of \(M^{n}\), and the first possibility occurs on a connected component \(U\) of \(\mathcal{U}^{*}\) if and only if \(f\) is given on \(U\) as in part \((i)\), with the surface \(h\colon L^{2}\to\mathbb{Q}^{3}_{\epsilon}\)
being a generalized cone over a unit-speed curve \(\gamma\colon J\to\mathbb{Q}_{c}^{2}\) in an umbilical surface \(\mathbb{Q}_{c}^{2}\subset\mathbb{Q}_{\epsilon}^{3}\), \(c\geq\epsilon\)._
_Conversely, any simply connected hypersurface as in \((ii)\) is infinitesimally Moebius bendable, and for any hypersurface as in \((i)\) there exists an open dense subset where \(f\) is locally infinitesimally Moebius bendable._
It follows from Theorem 1 and the main result in [12] that, within the class of hypersurfaces that are not conformally surface-like on any open subset and have a principal curvature of constant multiplicity \(n-2\), the families of those that are either Moebius deformable or infinitesimally Moebius bendable coincide. On the other hand, among conformally surface-like hypersurfaces, the class of infinitesimally Moebius bendable hypersurfaces is strictly larger than that of Moebius deformable hypersurfaces. Indeed, while a hypersurface in the former class is determined by an arbitrary isothermic surface, the elements of the latter are determined by particular isothermic surfaces, namely, Bonnet surfaces admitting isometric deformations preserving the mean curvature.
Our approach to prove Theorem 1 is rather different from those used in [12] and [13] to classify the Moebius deformable hypersurfaces. It is based on the theory developed in [3] and [5] of the more general notions of _conformal variations_ and _conformal infinitesimal variations_ of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\), which are natural generalizations of the corresponding classical concepts of isometric variations and isometric infinitesimal variations.
A smooth map \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) is a _conformal variation_ of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) if the maps \(f_{t}=F(t,\cdot)\), with \(f_{0}=f\), are conformal immersions for any \(t\in(-\epsilon,\epsilon)\), that is, if there is a positive \(\gamma\in C^{\infty}((-\epsilon,\epsilon)\times M^{n})\), with \(\gamma(0,x)=1\) for all \(x\in M^{n}\), such that
\[\gamma(t,x)\langle f_{t*}X,f_{t*}Y\rangle=\langle X,Y\rangle \tag{2}\]
for all \(X,Y\in\mathfrak{X}(M)\), where \(\langle\,,\,\rangle\) stands for the metrics of both \(\mathbb{R}^{m}\) and \(M^{n}\). Thus Moebius variations are particular cases of conformal variations for which \(\gamma(t,x)=\phi_{0}^{-2}(x)\phi_{t}^{2}(x)\) for all \((t,x)\in I\times M^{n}\).
_Conformal infinitesimal variations_ of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) are smooth variations for which (2) holds up to the first order, that is,
\[\frac{\partial}{\partial t}|_{t=0}(\gamma(t,x)\langle f_{t*}X,f_{t*}Y\rangle)=0 \tag{3}\]
for all \(X,Y\in\mathfrak{X}(M)\). Eq. (3) implies that the variational vector field \(\mathcal{T}=F_{*}\partial/\partial t|_{t=0}\) of \(F\) satisfies
\[\langle\tilde{\nabla}_{X}\mathcal{T},f_{*}Y\rangle+\langle f_{*}X,\tilde{\nabla}_{Y}\mathcal{T}\rangle=2\rho\langle X,Y\rangle \tag{4}\]
for all \(X,Y\in\mathfrak{X}(M)\), where \(\rho(x)=-(1/2)\partial\gamma/\partial t(0,x)\). For this reason, a smooth section \(\mathcal{T}\in\Gamma(f^{*}T\mathbb{R}^{m})\) that satisfies (4) is called a _conformal infinitesimal bending_ of \(f\) with conformal factor \(\rho\in C^{\infty}(M)\). In particular, the variational vector field \(\mathcal{T}=F_{*}\partial/\partial t|_{t=0}\) of an infinitesimal Moebius variation, which we call an _infinitesimal Moebius bending_, is also a conformal infinitesimal bending of \(f\) whose conformal factor is
\[\rho=-\frac{1}{2}\frac{\partial}{\partial t}|_{t=0}(\gamma(t,x))=-\frac{1}{2} \phi_{0}^{-2}(x)\frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2}(x)). \tag{5}\]
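Indeed, an infinitesimal Moebius variation is a conformal infinitesimal variation with respect to \(\gamma(t,x)=\phi_{0}^{-2}(x)\phi_{t}^{2}(x)\): one has \(\gamma(0,x)=1\) and

\[\frac{\partial}{\partial t}|_{t=0}(\gamma(t,x)\langle f_{t*}X,f_{t*}Y\rangle)=\phi_{0}^{-2}(x)\frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2}(x)\langle f_{t*}X,f_{t*}Y\rangle)=0\]

for all \(X,Y\in\mathfrak{X}(M)\), and \(\rho=-\frac{1}{2}\frac{\partial\gamma}{\partial t}(0,\cdot)\) then reduces to (5).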
By the above, the variational vector field of a conformal infinitesimal variation is a conformal infinitesimal bending. Conversely, any conformal infinitesimal bending \(\mathcal{T}\in\Gamma(f^{*}T\mathbb{R}^{m})\) is the variational vector field of a (non-unique) conformal infinitesimal variation \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) of \(f\). For instance, one may take
\[F(t,x)=f(x)+t\mathcal{T}(x)\]
for all \((t,x)\in(-\epsilon,\epsilon)\times M^{n}\). The reason why it is convenient to consider the conformal infinitesimal bending associated with a conformal infinitesimal variation of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) is that one can establish a fundamental theorem providing necessary and sufficient conditions for the existence of a conformal infinitesimal bending (and hence of a conformal infinitesimal variation); see [3].
_Infinitesimal variations_ of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) correspond to the conformal infinitesimal variations for which the function \(\gamma\) in (3) has the constant value \(\gamma=1\). The associated variational vector fields are called _infinitesimal bendings_ and correspond to the conformal infinitesimal bendings with conformal factor \(\rho=0\). The reason why isothermic surfaces \(f\colon L^{2}\to\mathbb{Q}_{c}^{3}\) appear in this context is that they are precisely the surfaces that are locally _infinitesimally Bonnet bendable_, that is, the surfaces that admit local infinitesimal variations \(F\colon(-\epsilon,\epsilon)\times L^{2}\to\mathbb{Q}_{c}^{3}\) such that the mean curvature functions \(H_{t}\) of \(f_{t}=F(t,\cdot)\), \(t\in(-\epsilon,\epsilon)\), coincide up to the first order, that is, \(\partial/\partial t|_{t=0}H_{t}=0\) (see, e.g., Proposition 9 of [11]).
The study of hypersurfaces \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 3\), that admit non-trivial variations preserving the _induced_ metric goes back to Sbrana [15] and Cartan [1] (see also [8] or Chapter 11 of [9]), whereas the hypersurfaces that admit non-trivial infinitesimal variations were investigated by Sbrana [14] (see also [6], Chapter 14 of [9] and [4]). We point out that the latter class turns out to be much larger than the former.
In the proof of Theorem 1, a main step is the following characterization, of independent interest, of the infinitesimally Moebius bendable isometric immersions \(f\colon M^{n}\to\mathbb{R}^{m}\) of arbitrary codimension among those that admit a non-trivial conformal infinitesimal bending \(\mathscr{T}\) with conformal factor \(\rho\in C^{\infty}(M)\). In the next statement, we denote by \(\mathscr{H}\) the mean curvature vector field of \(f\) and by \(\mathscr{L}\in\Gamma(N_{f}M)\) the normal vector field given by
\[\mathscr{L}=\frac{1}{n}\sum_{i=1}^{n}\beta(X_{i},X_{i})\in\Gamma(N_{f}M) \tag{6}\]
for any orthonormal frame \(\{X_{1},\ldots,X_{n}\}\) of \(M^{n}\), where \(\beta\) is the symmetric section of \(\operatorname{Hom}\left(TM,TM;N_{f}M\right)\) associated with \(\mathscr{T}\) (see (8) below).
**Theorem 2**.: _An isometric immersion \(f\colon M^{n}\to\mathbb{R}^{m}\) is infinitesimally Moebius bendable if and only if it admits a non-trivial conformal infinitesimal bending such that_
\[\Delta\rho+n\langle\mathscr{L},\mathscr{H}\rangle=0. \tag{7}\]
By means of Theorem 2, it is shown in the proof of Theorem 1 that any conformal infinitesimal bending of a hypersurface as in part \((ii)\) of the statement of that result is also an infinitesimal Moebius bending.
## 2 The Fundamental theorem of conformal infinitesimal bendings
In this section we recall from [3] the Fundamental theorem for conformal infinitesimal bendings of Euclidean hypersurfaces.
Let \(f\colon M^{n}\to\mathbb{R}^{m}\) be an isometric immersion and let \(\mathscr{T}\) be a conformal infinitesimal bending of \(f\) with conformal factor \(\rho\), that is, \(\mathscr{T}\) and \(\rho\) satisfy (4). Defining \(L\in\Gamma(\operatorname{Hom}(TM;f^{*}T\mathbb{R}^{m}))\) by
\[LX=\tilde{\nabla}_{X}\mathscr{T}-\rho f_{*}X=\mathscr{T}_{*}X-\rho f_{*}X\]
for any \(X\in\mathfrak{X}(M)\), then (4) can be written as
\[\langle LX,f_{*}Y\rangle+\langle f_{*}X,LY\rangle=0\]
for all \(X,Y\in\mathfrak{X}(M)\). Let \(B\in\Gamma(\operatorname{Hom}\left(TM,TM;f^{*}T\mathbb{R}^{m}\right))\) be given by
\[B(X,Y)=(\tilde{\nabla}_{X}L)Y=\tilde{\nabla}_{X}LY-L\nabla_{X}Y\]
for all \(X,Y\in\mathfrak{X}(M)\), and define \(\beta\in\Gamma(\mbox{\rm Hom}\,(TM,TM;N_{f}M))\) by
\[\beta(X,Y)=(B(X,Y))_{N_{f}M}=(\tilde{\nabla}_{X}\tilde{\nabla}_{Y}\mathcal{T}- \tilde{\nabla}_{\nabla_{X}Y}\mathcal{T})_{N_{f}M}-\rho\alpha(X,Y) \tag{8}\]
for all \(X,Y\in\mathfrak{X}(M)\). Flatness of the ambient space and the symmetry of \(\alpha\) imply that \(\beta\) is symmetric.
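In fact, since \(\nabla_{X}Y-\nabla_{Y}X=[X,Y]\) and the pullback of the (flat) connection of \(\mathbb{R}^{m}\) has vanishing curvature,

\[\beta(X,Y)-\beta(Y,X)=\big(\tilde{\nabla}_{X}\tilde{\nabla}_{Y}\mathcal{T}-\tilde{\nabla}_{Y}\tilde{\nabla}_{X}\mathcal{T}-\tilde{\nabla}_{[X,Y]}\mathcal{T}\big)_{N_{f}M}-\rho\big(\alpha(X,Y)-\alpha(Y,X)\big)=0\]

for all \(X,Y\in\mathfrak{X}(M)\).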
Given \(\eta\in\Gamma(N_{f}M)\), let \(B_{\eta}\in\Gamma(\mbox{\rm End}(TM))\) be given by \(\langle B_{\eta}X,Y\rangle=\langle\beta(X,Y),\eta\rangle\) for all \(X,Y\in\mathfrak{X}(M)\). Then it can be shown that
\[A_{\beta(Y,Z)}X+B_{\alpha(Y,Z)}X-A_{\beta(X,Z)}Y-B_{\alpha(X,Z)}Y+\langle Y,Z\rangle\nabla_{X}\nabla\rho-\langle X,Z\rangle\nabla_{Y}\nabla\rho+\mathrm{Hess}\,\rho(Y,Z)X-\mathrm{Hess}\,\rho(X,Z)Y=0 \tag{9}\]
for all \(X,Y,Z\in\mathfrak{X}(M)\); see [3], where a fundamental theorem for conformal infinitesimal bendings of Euclidean submanifolds with arbitrary codimension was obtained. Here we restrict ourselves to state that theorem for the particular case of hypersurfaces.
Given a hypersurface \(f\colon M^{n}\to\mathbb{R}^{n+1}\), let \(A\) be its shape operator with respect to a unit normal vector field \(N\), and let \(\mathcal{B}\in\Gamma(\mbox{\rm End}(TM))\) be given by
\[\langle\mathcal{B}X,Y\rangle=\langle\beta(X,Y),N\rangle\]
for all \(X,Y\in\mathfrak{X}(M)\). The Fundamental theorem for conformal infinitesimal bendings of \(f\) reads as follows.
**Theorem 3**.: ([3]) _The pair \((\mathcal{B},\rho)\) associated with a conformal infinitesimal bending of the hypersurface \(f\colon M^{n}\to\mathbb{R}^{n+1}\) satisfies the equations_
\[\mathcal{B}X\wedge AY-\mathcal{B}Y\wedge AX+X\wedge\mbox{Hess}\,\rho(Y)-Y \wedge\mbox{Hess}\,\rho(X)=0 \tag{10}\]
_and_
\[(\nabla_{X}\mathcal{B})Y-(\nabla_{Y}\mathcal{B})X+(X\wedge Y)A\nabla\rho=0 \tag{11}\]
_for all \(X,Y\in\mathfrak{X}(M)\). Conversely, if \(M^{n}\) is simply connected, then a symmetric tensor \(\mathcal{B}\in\Gamma(\mbox{End}(TM))\) and \(\rho\in C^{\infty}(M)\) satisfying (10) and (11) determine a unique conformal infinitesimal bending of \(f\)._
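With the convention \((U\wedge V)W=\langle V,W\rangle U-\langle U,W\rangle V\), equation (10) is just the hypersurface version of (9): since \(\beta(X,Y)=\langle\mathcal{B}X,Y\rangle N\) and \(\alpha(X,Y)=\langle AX,Y\rangle N\), one has \(A_{\beta(Y,Z)}X=\langle\mathcal{B}Y,Z\rangle AX\) and \(B_{\alpha(Y,Z)}X=\langle AY,Z\rangle\mathcal{B}X\), and applying (10) to a vector field \(Z\) gives

\[A_{\beta(Y,Z)}X+B_{\alpha(Y,Z)}X-A_{\beta(X,Z)}Y-B_{\alpha(X,Z)}Y+\langle Y,Z\rangle\nabla_{X}\nabla\rho-\langle X,Z\rangle\nabla_{Y}\nabla\rho+\mathrm{Hess}\,\rho(Y,Z)X-\mathrm{Hess}\,\rho(X,Z)Y=0,\]

which is precisely (9).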
**Remarks 4**.: 1) For an infinitesimal variation of a hypersurface \(f\colon M^{n}\to\mathbb{R}^{n+1}\), its associated tensor \(\mathcal{B}\) satisfies (10) with \(\rho=0\), and (11) reduces to the Codazzi equation for \(\mathcal{B}\).
2) By Proposition 12 in [5] (respectively, Theorem 13 in [6]), a conformal infinitesimal bending (respectively, infinitesimal bending) of a conformal infinitesimal variation (respectively, infinitesimal variation) of a hypersurface \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 3\), is trivial if and only if its associated tensor \(\mathcal{B}\) has the form \(\mathcal{B}=\varphi I\) for some \(\varphi\in C^{\infty}(M)\) (respectively, its associated tensor \(\mathcal{B}\) vanishes).
## 3 Proof of Theorem 2
This section is devoted to the proof of Theorem 2, for which we first establish several preliminary facts.
Let \(f\colon M^{n}\to\mathbb{R}^{m}\) be an isometric immersion and let \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) be a smooth variation of \(f\) by immersions \(f_{t}=F(t,\cdot)\) with \(f_{0}=f\). From now on, given a one-parameter family of vector fields \(X^{t}\in\mathfrak{X}(M)\), we define \(X^{\prime}\in\mathfrak{X}(M)\) by setting, for each \(x\in M^{n}\),
\[X^{\prime}(x)=\frac{\partial}{\partial t}|_{t=0}X^{t}(x).\]
For the proofs of the next two lemmas we refer to [11] (see Lemma 4 and Lemma 5 therein, respectively).
**Lemma 5**.: _For any fixed \(x\in M^{n}\), the velocity vector at \(t=0\) of the smooth curve \(t\mapsto f_{t*}X^{t}(x)\) is_
\[\frac{\partial}{\partial t}|_{t=0}f_{t*}X^{t}(x)=\tilde{\nabla}_{X(x)} \mathcal{T}+f_{*}X^{\prime}(x),\]
_where \(\mathcal{T}\) is the variational vector field of \(F\)._
**Lemma 6**.: _If \(\alpha^{t}\) denotes the second fundamental form of \(f_{t}\), then_
\[\langle\frac{\partial}{\partial t}|_{t=0}\alpha^{t}(X,Y),\eta\rangle=\langle \tilde{\nabla}_{X}\tilde{\nabla}_{Y}\mathcal{T}-\tilde{\nabla}_{\nabla_{X}Y} \mathcal{T},\eta\rangle\]
_for all \(X,Y\in\mathfrak{X}(M)\) and \(\eta\in\Gamma(N_{f}M)\)._
Taking into account (8), Lemma 6 yields the following for a conformal infinitesimal variation of \(f\).
**Corollary 7**.: _If \(F\) is a conformal infinitesimal variation of \(f\) and \(\rho\in C^{\infty}(M)\) is the conformal factor associated to its conformal infinitesimal bending \(\mathcal{T}\), then_
\[\langle\frac{\partial}{\partial t}|_{t=0}\alpha^{t}(X,Y),\eta\rangle=\langle \beta(X,Y)+\rho\alpha(X,Y),\eta\rangle \tag{12}\]
_for all \(X,Y\in\mathfrak{X}(M)\) and \(\eta\in\Gamma(N_{f}M)\)._
For a conformal infinitesimal variation \(F\) of \(f\) and a (local) orthonormal frame \(\{X_{i}\}_{1\leq i\leq n}\) with respect to the metric induced by \(f\), let \(X_{i}^{t}\in\mathfrak{X}(M)\), \(1\leq i\leq n\), \(t\in I\), be a smooth one-parameter family of tangent frames such that \(X_{i}^{0}=X_{i}\), \(1\leq i\leq n\), and \(\langle f_{t*}X_{i}^{t},f_{t*}X_{j}^{t}\rangle=\delta_{ij}\) for all \(1\leq i,j\leq n\) and \(t\in I\), that is, \(\{X_{i}^{t}\}_{1\leq i\leq n}\) is an orthonormal frame for the metric induced by \(f_{t}=F(t,\cdot)\).
**Lemma 8**.: _The vector fields \(X_{i}^{\prime}\), \(1\leq i\leq n\), satisfy_
\[\langle X_{i}^{\prime},X_{i}\rangle=-\rho \tag{13}\]
_and_
\[\langle X_{i}^{\prime},X_{j}\rangle+\langle X_{i},X_{j}^{\prime}\rangle=0 \tag{14}\]
_for all \(1\leq i,j\leq n\) with \(i\neq j\)._
_Proof:_ Taking the derivative with respect to \(t\) of \(\langle f_{t*}X_{i}^{t},f_{t*}X_{j}^{t}\rangle=\delta_{ij}\) at \(t=0\) and using Lemma (5), we obtain
\[0= \frac{\partial}{\partial t}|_{t=0}\langle f_{t*}X_{i}^{t},f_{t*}X _{j}^{t}\rangle\] \[= \langle\tilde{\nabla}_{X_{i}}\mathcal{T}+f_{*}X_{i}^{\prime},f_{ *}X_{j}\rangle+\langle f_{*}X_{i},\tilde{\nabla}_{X_{j}}\mathcal{T}+f_{*}X_{j }^{\prime}\rangle\] \[= \langle X_{i}^{\prime},X_{j}\rangle+\langle\tilde{\nabla}_{X_{i} }\mathcal{T},f_{*}X_{j}\rangle+\langle X_{i},X_{j}^{\prime}\rangle+\langle f_ {*}X_{i},\tilde{\nabla}_{X_{j}}\mathcal{T}\rangle.\]
Combining the preceding equation with (4) yields
\[\langle X_{i}^{\prime},X_{j}\rangle+\langle X_{i},X_{j}^{\prime}\rangle+2\rho \langle X_{i},X_{j}\rangle=0,\ 1\leq i,j\leq n.\ \ \vrule width 1px\]
**Lemma 9**.: _Let \(f\colon M^{n}\to\mathbb{R}^{m}\) be an isometric immersion and let \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) be a conformal infinitesimal variation of \(f\) with corresponding conformal infinitesimal bending \(\mathcal{T}\) and conformal factor \(\rho\in C^{\infty}(M)\). Let \(\phi_{t}\) be given by (1) for each immersion \(f_{t}=F(t,\cdot)\), \(t\in(-\epsilon,\epsilon)\). Then_
\[\frac{\partial}{\partial t}|_{t=0}\phi_{t}^{2}=2n\Delta\rho-2\phi^{2}\rho+2n^ {2}\langle\mathcal{L},\mathcal{H}\rangle, \tag{15}\]
_where \(\mathcal{H}\) is the mean curvature vector field of \(f\) and \(\mathcal{L}\in\Gamma(N_{f}M)\) is given by (6)._
_Proof:_ Let us first compute \(\partial/\partial t|_{t=0}\|\alpha^{t}\|^{2}\). Let \(\{X_{i}^{t}\}_{1\leq i\leq n}\) be a one-parameter family of frames such that, for each fixed \(t\in(-\epsilon,\epsilon)\), \(\{X_{i}^{t}\}_{1\leq i\leq n}\) is orthonormal with respect to the metric induced by \(f_{t}\). By (12) we have
\[\langle\frac{\partial}{\partial t}|_{t=0}\alpha^{t}(X_{i}^{t},X_{j}^{t}),\eta \rangle=\langle\beta(X_{i},X_{j})+\rho\alpha(X_{i},X_{j})+\alpha(X_{i}^{\prime },X_{j})+\alpha(X_{i},X_{j}^{\prime}),\eta\rangle, \tag{16}\]
hence
\[\frac{1}{2}\frac{\partial}{\partial t}|_{t=0}\|\alpha^{t}(X_{i}^{ t},X_{j}^{t})\|^{2}= \langle\frac{\partial}{\partial t}|_{t=0}\alpha^{t}(X_{i}^{t},X_{j }^{t}),\alpha(X_{i},X_{j})\rangle\] \[= \langle\beta(X_{i},X_{j})+\rho\alpha(X_{i},X_{j}),\alpha(X_{i},X _{j})\rangle\] \[+\langle\alpha(X_{i}^{\prime},X_{j})+\alpha(X_{i},X_{j}^{\prime }),\alpha(X_{i},X_{j})\rangle.\]
Thus
\[\frac{\partial}{\partial t}|_{t=0}\|\alpha^{t}\|^{2}= 2\rho\|\alpha\|^{2}+2\sum_{i,j=1}^{n}\left\langle\beta(X_{i},X_{ j}),\alpha(X_{i},X_{j})\right\rangle\] \[+2\sum_{i,j=1}^{n}\left\langle\alpha(X_{i}^{\prime},X_{j})+\alpha (X_{i},X_{j}^{\prime}),\alpha(X_{i},X_{j})\right\rangle\] \[= 2\rho\|\alpha\|^{2}+2\sum_{i,j=1}^{n}\left\langle\beta(X_{i},X_{ j}),\alpha(X_{i},X_{j})\right\rangle\] \[+4\sum_{i,j=1}^{n}\left\langle\alpha(X_{i}^{\prime},X_{j}),\alpha( X_{i},X_{j})\right\rangle.\]
It follows from (9) that
\[2\langle\beta(X_{i},X_{j}),\alpha(X_{i},X_{j})\rangle= \langle\beta(X_{i},X_{i}),\alpha(X_{j},X_{j})\rangle+\langle \beta(X_{j},X_{j}),\alpha(X_{i},X_{i})\rangle\] \[+\langle X_{j},X_{j}\rangle\mathrm{Hess}\,\rho(X_{i},X_{i})+ \langle X_{i},X_{i}\rangle\mathrm{Hess}\,\rho(X_{j},X_{j})\]
for all \(1\leq i,j\leq n\) with \(i\neq j\). Therefore
\[2\sum_{i,j=1}^{n}\left\langle\beta(X_{i},X_{j}),\alpha(X_{i},X_{j})\right\rangle =2(n-1)\Delta\rho+2\sum_{i,j=1}^{n}\left\langle\beta(X_{i},X_{i}),\alpha(X_{j},X_{j})\right\rangle,\]
where \(\Delta\) denotes the Laplacian. On the other hand, from the Gauss equation,
\[\langle\alpha(X_{i}^{\prime},X_{j}),\alpha(X_{i},X_{j})\rangle=\langle\alpha(X _{i}^{\prime},X_{i}),\alpha(X_{j},X_{j})\rangle+\langle R(X_{i}^{\prime},X_{j })X_{i},X_{j}\rangle,\]
where \(R\) denotes the Riemann curvature tensor of \(M^{n}\), we obtain
\[\sum_{i\neq j}\left\langle\alpha(X_{i}^{\prime},X_{j}),\alpha(X_{i},X_{j})\right\rangle =\sum_{i\neq j}\left\langle\alpha(X_{i}^{\prime},X_{i}),\alpha(X_{j},X_{j}) \right\rangle-\sum_{i=1}^{n}\text{Ric }(X_{i}^{\prime},X_{i}).\]
Thus
\[\frac{\partial}{\partial t}|_{t=0}\|\alpha^{t}\|^{2}= 2\rho\|\alpha\|^{2}+2(n-1)\Delta\rho+2\sum_{i,j=1}^{n}\left\langle \beta(X_{i},X_{i}),\alpha(X_{j},X_{j})\right\rangle\] \[+4\sum_{i,j=1}^{n}\left\langle\alpha(X_{i}^{\prime},X_{i}),\alpha (X_{j},X_{j})\right\rangle-4\sum_{i=1}^{n}\text{Ric }(X_{i}^{\prime},X_{i})\] \[= 2\rho\|\alpha\|^{2}+2(n-1)\Delta\rho+2n^{2}\langle\mathcal{L}, \mathcal{H}\rangle\] \[+4n\sum_{i=1}^{n}\left\langle\alpha(X_{i}^{\prime},X_{i}), \mathcal{H}\right\rangle-4\sum_{i=1}^{n}\text{Ric }(X_{i}^{\prime},X_{i}).\]
By (13), we may write \(X_{i}^{\prime}=-\rho X_{i}+\sum_{i\neq k}\left\langle X_{i}^{\prime},X_{k} \right\rangle X_{k}\); hence
\[\sum_{i=1}^{n}\left\langle\alpha(X_{i}^{\prime},X_{i}),\mathcal{H}\right\rangle= -\rho n\|\mathcal{H}\|^{2}+\sum_{i\neq k}\left\langle X_{i}^{ \prime},X_{k}\right\rangle\langle\alpha(X_{k},X_{i}),\mathcal{H}\rangle\] \[= -\rho n\|\mathcal{H}\|^{2},\]
where the last equality follows from (14). Similarly,
\[\sum_{i=1}^{n}\text{Ric }(X_{i}^{\prime},X_{i})=-\rho n(n-1)s,\]
where \(s=\frac{1}{n(n-1)}\sum_{i=1}^{n}\text{Ric }(X_{i},X_{i})\) is the scalar curvature of \(M^{n}\). Thus
\[\frac{\partial}{\partial t}|_{t=0}\|\alpha^{t}\|^{2}= 2\rho\|\alpha\|^{2}+2(n-1)\Delta\rho+2n^{2}\langle\mathcal{L}, \mathcal{H}\rangle\] \[-4n^{2}\rho\|\mathcal{H}\|^{2}+4\rho n(n-1)s.\]
Using that
\[s=\frac{n}{n-1}\|\mathcal{H}\|^{2}-\frac{1}{n(n-1)}\|\alpha\|^{2},\]
we obtain
\[\frac{\partial}{\partial t}|_{t=0}\|\alpha^{t}\|^{2}=2(n-1)\Delta\rho+2n^{2} \langle\mathcal{L},\mathcal{H}\rangle-2\rho\|\alpha\|^{2}. \tag{17}\]
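Explicitly, the last step uses

\[4\rho n(n-1)s=4n^{2}\rho\|\mathcal{H}\|^{2}-4\rho\|\alpha\|^{2},\]

so that the terms \(-4n^{2}\rho\|\mathcal{H}\|^{2}+4\rho n(n-1)s\) in the preceding expression add up to \(-4\rho\|\alpha\|^{2}\), which together with \(2\rho\|\alpha\|^{2}\) yields the term \(-2\rho\|\alpha\|^{2}\) in (17).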
We now compute \(\partial/\partial t|_{t=0}\|\mathcal{H}^{t}\|^{2}\). With \(\{X_{i}^{t}\}_{1\leq i\leq n}\) as above, we have
\[\frac{\partial}{\partial t}|_{t=0}\|\mathcal{H}^{t}\|^{2}= 2\langle\frac{\partial}{\partial t}|_{t=0}\mathcal{H}^{t}, \mathcal{H}\rangle\] \[= 2\langle\frac{1}{n}\sum_{i=1}^{n}\frac{\partial}{\partial t}|_{t =0}\alpha^{t}(X_{i}^{t},X_{i}^{t}),\mathcal{H}\rangle\] \[= 2\rho\|\mathcal{H}\|^{2}+2\langle\mathcal{L},\mathcal{H}\rangle +\frac{4}{n}\sum_{i=1}^{n}\langle\alpha(X_{i}^{\prime},X_{i}),\mathcal{H}\rangle,\]
where the last step follows from (16). Using (13) and (14) as before, we obtain
\[\frac{\partial}{\partial t}|_{t=0}\|\mathcal{H}^{t}\|^{2}=2\langle\mathcal{L },\mathcal{H}\rangle-2\rho\|\mathcal{H}\|^{2}. \tag{18}\]
It follows from (17) and (18) that
\[\frac{\partial}{\partial t}|_{t=0}\phi_{t}^{2}= \frac{n}{n-1}\left(2(n-1)\Delta\rho+2n^{2}\langle\mathcal{L}, \mathcal{H}\rangle-2\rho\|\alpha\|^{2}\right.\] \[\left.-2n\langle\mathcal{L},\mathcal{H}\rangle+2n\rho\|\mathcal{H }\|^{2}\right)\] \[= \frac{n}{n-1}(2(n-1)\Delta\rho+2n(n-1)\langle\mathcal{L}, \mathcal{H}\rangle\] \[\left.-2\rho\|\alpha\|^{2}+2\rho n\|\mathcal{H}\|^{2})\right.\] \[= 2n\Delta\rho+2n^{2}\langle\mathcal{L},\mathcal{H}\rangle-2\phi ^{2}\rho,\]
where we have used (1) in the last equality.
_Proof of Theorem 2:_ If \(\mathcal{T}\) is the variational vector field of an infinitesimal Moebius variation \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{m}\) of \(f\), then the corresponding conformal factor \(\rho\) is given by (5). Thus (15) yields
\[-2\phi^{2}\rho=2n\Delta\rho-2\phi^{2}\rho+2n^{2}\langle\mathcal{L},\mathcal{H }\rangle,\]
and hence (7) holds.
For the converse, assume that \(\mathcal{T}\) is a conformal infinitesimal bending of \(f\) whose conformal factor \(\rho\) satisfies (7). The variation
given by \({\cal F}(t,x)=f(x)+t{\cal T}(x)\) is a conformal infinitesimal variation with variational vector field \({\cal T}\). Let \(f_{t}={\cal F}(t,\cdot)\) and let \(\phi_{t}\) be given by (1) for each \(f_{t}\), \(t\in\mathbb{R}\). We claim that \({\cal F}\) is an infinitesimal Moebius variation of \(f\). Indeed, we have
\[\frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2}\langle f_{t*}X,f_{t*}Y\rangle)= \frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2})\langle X,Y\rangle+\phi^{2}( \langle\tilde{\nabla}_{X}{\cal T},f_{*}Y\rangle+\langle f_{*}X,\tilde{\nabla}_ {Y}{\cal T}\rangle)\]
for all \(X,Y\in{\mathfrak{X}}(M)\), hence
\[\frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2}\langle f_{t*}X,f_{t*}Y\rangle)=\left(\frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2})+2\phi^{2}\rho\right)\langle X,Y\rangle\]
by (4). On the other hand, from (15) and (7) we have
\[\frac{\partial}{\partial t}|_{t=0}(\phi_{t}^{2})+2\phi^{2}\rho=0,\]
which proves the claim and completes the proof.
Before concluding this section, we state for later use the following consequence of some of the preceding computations (see [6] for the corresponding fact for (isometric) infinitesimal variations).
**Proposition 10**.: _Let \(F\colon(-\epsilon,\epsilon)\times M^{n}\to\mathbb{R}^{n+1}\) be a conformal infinitesimal variation of an isometric immersion \(f\colon M^{n}\to\mathbb{R}^{n+1}\). Let \(N_{t}\) be a unit vector field normal to \(f_{t}=F(t,\cdot)\), \(t\in(-\epsilon,\epsilon)\), and denote by \(A_{t}\) the corresponding shape operator. Then the tensor \({\cal B}\in\Gamma(\mbox{End}(TM))\) associated with \(F\) satisfies_
\[{\cal B}=A^{\prime}+\rho A, \tag{19}\]
_where \(\rho\in C^{\infty}(M)\) is the conformal factor of \(F\) and \(A^{\prime}=\partial/\partial t|_{t=0}A_{t}\)._
_Proof:_ It follows from (4) and Lemma 5 that
\[\partial/\partial t|_{t=0}\langle\alpha^{t}(X,Y),N_{t}\rangle = \partial/\partial t|_{t=0}\langle f_{t*}A_{t}X,f_{t*}Y\rangle\] \[= \langle A^{\prime}X,Y\rangle+2\rho\langle AX,Y\rangle\]
for all \(X,Y\in{\mathfrak{X}}(M)\). On the other hand, from (12) we obtain
\[\partial/\partial t|_{t=0}\langle\alpha^{t}(X,Y),N_{t}\rangle=\langle{\cal B} X,Y\rangle+\rho\langle AX,Y\rangle.\]
Comparing the two preceding equations yields (19).
## 4 Proof of Theorem 1
In this section we prove Theorem 1. We start with some preliminary results, which make use of the following lemma in [5] (see Lemma 14 therein).
**Lemma 11**.: _Let \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 5\), be a hypersurface that admits a conformal infinitesimal bending \(\mathscr{T}\) that is non-trivial on any open subset. Then its associated tensor \(\mathscr{B}\), the Hessian \(H\) of its conformal factor \(\rho\), and the shape operator \(A\) of \(f\) share, on an open and dense subset of \(M^{n}\), a common eigenbundle \(\Delta\) of constant dimension \(\dim\Delta\geq n-2\)._
The next result states how Theorem (2) reads for hypersurfaces.
**Proposition 12**.: _Let \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 5\), be an umbilic-free infinitesimally Moebius bendable hypersurface. Then there exists an open and dense subset \(\mathscr{U}\) of \(M^{n}\) where \(\mathscr{B}\), \(H\) and \(A\) share a common eigenbundle \(\Delta\) of rank \(n-2\) and \(\text{tr}(A-\lambda I)\text{tr}(\mathscr{B}-bI)=0\), where \(b,\lambda\in C^{\infty}(\mathscr{U})\) are such that \(\mathscr{B}|_{\Delta}=bI\) and \(A|_{\Delta}=\lambda I\)._
_Proof:_ Since \(f\) is infinitesimally Moebius bendable, it admits, in particular, a conformal infinitesimal bending \(\mathscr{T}\) that is non-trivial on any open subset. By Lemma 11, there exists an open and dense subset \(\mathscr{U}\) of \(M^{n}\) where \(\mathscr{B}\), \(H\) and \(A\) share a common eigenbundle \(\Delta\) of constant dimension \(\dim\Delta\geq n-2\). In the proof of Proposition 15 in [5] (see Eq. (35) therein), it was shown that
\[bA+\lambda(\mathscr{B}-bI)+\operatorname{Hess}\rho=0 \tag{20}\]
on \(\mathscr{U}\). Let \(\mathscr{H}\) and \(\mathscr{L}\) be given by \(\text{tr}\,A=n\mathscr{H}\) and \(\text{tr}\,\mathscr{B}=n\mathscr{L}\). Taking traces in (20) yields
\[nb\mathscr{H}+n\lambda\mathscr{L}-n\lambda b+\Delta\rho=0.\]
We write the preceding equation as
\[n(\mathscr{H}-\lambda)(\mathscr{L}-b)=\Delta\rho+n\mathscr{L}\mathscr{H},\]
which is also equivalent to
\[\text{tr}\,(A-\lambda I)\text{tr}\,(\mathscr{B}-bI)=n(\Delta\rho+n\mathscr{L} \mathscr{H}). \tag{21}\]
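In fact, since \(\text{tr}\,(A-\lambda I)=n(\mathscr{H}-\lambda)\) and \(\text{tr}\,(\mathscr{B}-bI)=n(\mathscr{L}-b)\), one has

\[\text{tr}\,(A-\lambda I)\text{tr}\,(\mathscr{B}-bI)=n^{2}(\mathscr{H}-\lambda)(\mathscr{L}-b)=n(\Delta\rho+n\mathscr{L}\mathscr{H}),\]

where the last equality follows from \(nb\mathscr{H}+n\lambda\mathscr{L}-n\lambda b+\Delta\rho=0\).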
Taking into account that (7) reduces to \(\Delta\rho+n\mathscr{L}\mathscr{H}=0\), it follows from (21) and Theorem (2) that \(\text{tr}\,(A-\lambda I)\text{tr}\,(\mathscr{B}-bI)\ =\ 0\).
Finally, notice that the preceding condition cannot occur on any open subset where \(\dim\Delta=n-1\). Indeed, if \(\dim\Delta=n-1\), then the condition \(\operatorname{tr}\left(A-\lambda I\right)=0\) would imply that \(A=\lambda I\), whereas \(\operatorname{tr}\left(\mathcal{B}-bI\right)=0\) would yield \(\mathcal{B}=bI\), in contradiction with the assumptions that \(f\) is free of umbilic points and that the conformal infinitesimal bending \(\mathcal{T}\) is non-trivial, respectively.
**Lemma 13**.: _The distribution \(\Delta\) given by Proposition 12 is umbilical._
_Proof:_ If \(\Delta=\ker(A-\lambda I)\), then it is the eigenbundle corresponding to the principal curvature \(\lambda\), and hence umbilical. Thus we only need to consider the case in which \(\Delta\) coincides with \(\ker(\mathcal{B}-bI)\) and is a proper subspace of \(\ker(A-\lambda I)\). Equation (11) can be written as
\[(\nabla_{X}(\mathcal{B}-bI))Y-(\nabla_{Y}(\mathcal{B}-bI))X+(X\wedge Y)(A \nabla\rho-\nabla b)=0\]
for any \(X,Y\in\mathfrak{X}(M)\), where \(\nabla b\) and \(\nabla\rho\) are the gradients of \(b\) and of the conformal factor \(\rho\), respectively. Let \(T,S\in\Gamma(\Delta)\) be orthogonal and take \(X\in\Gamma(\Delta^{\perp})\). Evaluating the preceding equation in \(X\) and \(T\) and taking the inner product of both sides with \(S\) gives \(\langle\nabla_{T}(\mathcal{B}-bI)X,S\rangle=0.\) Since we are assuming that rank \((\mathcal{B}-bI)=2\), the above equation gives
\[(\nabla_{T}S)_{\Delta^{\perp}}=0\]
for all \(T,S\in\Gamma(\Delta)\) with \(\langle T,S\rangle=0\). Thus \(\Delta\) is an umbilical distribution.
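The last implication holds because \(\mathcal{B}-bI\) is symmetric with kernel \(\Delta\): for \(T,S\in\Gamma(\Delta)\) orthogonal and \(X\in\Gamma(\Delta^{\perp})\),

\[0=\langle(\nabla_{T}(\mathcal{B}-bI))X,S\rangle=T\langle(\mathcal{B}-bI)X,S\rangle-\langle(\mathcal{B}-bI)X,\nabla_{T}S\rangle-\langle\nabla_{T}X,(\mathcal{B}-bI)S\rangle=-\langle(\mathcal{B}-bI)X,\nabla_{T}S\rangle,\]

and, since \((\mathcal{B}-bI)X\) ranges over all of \(\Delta^{\perp}\) as \(X\) ranges over \(\Gamma(\Delta^{\perp})\), this gives \((\nabla_{T}S)_{\Delta^{\perp}}=0\).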
From now on, for \(\mathcal{U}\) and \(b,\lambda\in C^{\infty}(\mathcal{U})\) as in Proposition 12, we denote \(\bar{A}=A-\lambda I\) and \(\bar{\mathcal{B}}=\mathcal{B}-bI\).
**Proposition 14**.: _Let \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 5\), be an umbilic-free infinitesimally Moebius bendable hypersurface, let \(\mathcal{U}\) be the open and dense subset of \(M^{n}\) given by Proposition 12, and let \(\mathcal{U}_{1}\) be the subset of \(\mathcal{U}\) where \(\operatorname{tr}\bar{\mathcal{B}}=0\). Then \(\mathcal{U}_{1}=\mathcal{Y}_{1}\cup\mathcal{Y}_{2}\), where the following holds on \(\mathcal{Y}_{1}\) and \(\mathcal{Y}_{2}\), respectively:_
* \(\bar{A}|_{\Delta^{\perp}}\) _is a multiple of the identity endomorphism_ \(I\in\Gamma(\text{End}(\Delta^{\perp}))\)_;_
* _there exists at each point an orthonormal basis_ \(\{X,Y\}\) _of_ \(\Delta^{\perp}\) _given by principal directions of_ \(f\) _and_ \(\theta\in\mathbb{R}\) _such that_ \(\bar{\mathcal{B}}X=\theta Y\) _and_ \(\bar{\mathcal{B}}Y=\theta X\)_._
_Proof:_ It follows from (20) that, at each \(x\in\mathcal{U}\), Eq. (10) can be written as
\[\bar{\mathcal{B}}X\wedge\bar{A}Y=\bar{\mathcal{B}}Y\wedge\bar{A}X, \tag{22}\]
or equivalently,
\[\langle\bar{A}Y,X\rangle\langle\bar{\mathscr{B}}X,Y\rangle-\langle\bar{\mathscr{B}} X,X\rangle\langle\bar{A}Y,Y\rangle=\langle\bar{A}X,X\rangle\langle\bar{\mathscr{B}}Y,Y \rangle-\langle\bar{\mathscr{B}}Y,X\rangle\langle\bar{A}X,Y\rangle\]
for all \(X,Y\in T_{x}\mathcal{U}\). Applying the preceding equation to orthogonal unit eigenvectors \(X\) and \(Y\) of \(\bar{A}|_{\Delta^{\perp}}\), with \(\bar{A}X=\mu_{1}X\) and \(\bar{A}Y=\mu_{2}Y\), gives
\[-\mu_{2}\langle\bar{\mathscr{B}}X,X\rangle=\mu_{1}\langle\bar{\mathscr{B}}Y,Y\rangle.\]
Therefore, since \(\langle\bar{\mathscr{B}}X,X\rangle+\langle\bar{\mathscr{B}}Y,Y\rangle=\operatorname{tr}\bar{\mathscr{B}}=0\) on \(\mathcal{U}_{1}\), at each point of \(\mathcal{U}_{1}\) either \(\mu_{1}=\mu_{2}:=\mu\), which is nonzero because \(f\) is free of umbilical points, and hence \(\bar{A}|_{\Delta^{\perp}}=\mu I\), or \(\langle\bar{\mathscr{B}}X,X\rangle=0=\langle\bar{\mathscr{B}}Y,Y\rangle\). In the latter case, denoting \(\theta=\langle\bar{\mathscr{B}}X,Y\rangle\), we have \(\bar{\mathscr{B}}X=\theta Y\) and \(\bar{\mathscr{B}}Y=\theta X\).
Given a distribution \(\Delta\) on a Riemannian manifold \(M^{n}\), recall that the _splitting tensor_\(C\colon\Gamma(\Delta)\to\Gamma(\operatorname{End}(\Delta^{\perp}))\) of \(\Delta\) is defined by
\[C_{T}X=-\nabla_{X}^{h}T\]
for all \(T\in\Gamma(\Delta)\) and \(X\in\Gamma(\Delta^{\perp})\), where \(\nabla_{X}^{h}T=(\nabla_{X}T)_{\Delta^{\perp}}\).
**Proposition 15**.: _Let \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 5\), be an umbilic-free infinitesimally Moebius bendable hypersurface carrying a principal curvature of constant multiplicity \(n-2\) with corresponding eigenbundle \(\Delta\). Assume that at no point of \(M^{n}\) the splitting tensor \(C\colon\Gamma(\Delta)\to\Gamma(\operatorname{\mathit{End}}(\Delta^{\perp}))\) of \(\Delta\) satisfies \(C(\Gamma(\Delta))\subset\text{span}\{I\}\). Then the central sphere congruence of \(f\) is determined by a minimal space-like surface \(s\colon L^{2}\to\mathbb{S}^{n+2}_{1,1}\)._
_Proof:_ Denoting by \(\lambda\) the principal curvature of \(f\) with constant multiplicity \(n-2\) with respect to a unit normal vector field \(N\), the map
\[x\in M^{n}\mapsto f(x)+\frac{1}{\lambda(x)}N\]
determines a two-parameter congruence of hyperspheres that is enveloped by \(f\). As explained in the introduction, this congruence of hyperspheres is determined by a space-like surface \(s\colon L^{2}\to\mathbb{S}^{n+2}_{1,1}\).
Since \(f\) is infinitesimally Moebius bendable, it admits, in particular, a conformal infinitesimal bending that is non-trivial on any open subset. By Proposition 15 in [5], the hypersurface \(f\) is either elliptic, hyperbolic or parabolic with respect to \(J\in\Gamma(\operatorname{End}(\Delta^{\perp}))\) satisfying \(J^{2}=-I\), \(J^{2}=I\)
or \(J^{2}=0\), respectively, with \(J\neq I\) if \(J^{2}=I\) and \(J\neq 0\) if \(J^{2}=0\). Moreover, the tensor \(\mathcal{B}\) associated with \(\mathcal{T}\) satisfies
\[\bar{\mathcal{B}}=\mu\bar{A}J, \tag{23}\]
where \(0\neq\mu\in C^{\infty}(M)\) is constant along the leaves of \(\Delta\).
It was also shown in [5] that, in the hyperbolic and elliptic cases, the tensor \(J\) is projectable with respect to the quotient map \(\pi\colon M^{n}\to L^{2}\) onto the space of leaves of the eigenbundle \(\Delta\) of \(\lambda\), that is, there exists \(\bar{J}\in\mathrm{End}(TL)\) such that \(\bar{J}\circ\pi_{*}=\pi_{*}\circ J\). Moreover, the surface \(s\colon L^{2}\to\mathbb{S}^{n+2}_{1,1}\) is either a _special elliptic_ or _special hyperbolic_ surface with respect to \(\bar{J}\). This means that
\[\alpha^{s}(\bar{J}\bar{X},\bar{Y})=\alpha^{s}(\bar{X},\bar{J}\bar{Y}) \tag{24}\]
for all \(\bar{X},\bar{Y}\in\mathfrak{X}(L)\), and that there exists \(\mu\in C^{\infty}(L)\) such that \(\mu\bar{J}\) is a Codazzi tensor on \(L^{2}\).
In the sequel we will show that, under the assumptions of the proposition, the tensors \(J\) and \(\bar{J}\) act as a rotation of angle \(\pi/2\) on \(\Delta^{\perp}\) and on each tangent space of \(L^{2}\), respectively, that is, both \(J\) and \(\bar{J}\) are orthogonal tensors satisfying \(J^{2}=-I\) and \(\bar{J}^{2}=-I\). From the orthogonality of \(J\) and the symmetry of \(\bar{\mathcal{B}}\) it will follow that the tensor \(\bar{A}=A-\lambda I\) is traceless by (23). This implies that \(\lambda\) is the mean curvature function of \(f\), and hence the congruence of hyperspheres determined by \(s\) is its central sphere congruence. On the other hand, the orthogonality of \(\bar{J}\) and the fact that \(\bar{J}^{2}=-I\) imply the minimality of \(s\) by (24), and this will conclude the proof.
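Both implications can be checked directly. If \(J\) is orthogonal and \(J^{2}=-I\), then \(J^{t}=J^{-1}=-J\), and the symmetry of \(\bar{A}J\) coming from (23) gives \(\bar{A}J=(\bar{A}J)^{t}=J^{t}\bar{A}=-J\bar{A}\), hence \(J\bar{A}J^{-1}=-\bar{A}\) on \(\Delta^{\perp}\) and

\[\operatorname{tr}(\bar{A}|_{\Delta^{\perp}})=\operatorname{tr}(J\bar{A}J^{-1}|_{\Delta^{\perp}})=-\operatorname{tr}(\bar{A}|_{\Delta^{\perp}}),\]

so \(\operatorname{tr}\bar{A}=0\), for \(\bar{A}\) vanishes on \(\Delta\). Similarly, if \(\bar{J}\) is orthogonal and \(\bar{J}^{2}=-I\), then \(\{\bar{X},\bar{J}\bar{X}\}\) is an orthonormal frame for any unit \(\bar{X}\in\mathfrak{X}(L)\), and (24) yields

\[\alpha^{s}(\bar{X},\bar{X})+\alpha^{s}(\bar{J}\bar{X},\bar{J}\bar{X})=\alpha^{s}(\bar{X},\bar{X})+\alpha^{s}(\bar{X},\bar{J}^{2}\bar{X})=0,\]

so that \(s\) has vanishing mean curvature.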
First we rule out the parabolic case. So, assume that there exists \(J\in\Gamma(\mathrm{End}(\Delta^{\perp}))\) such that \(J^{2}=0\), \(J\neq 0\), \(\nabla^{h}_{T}J=0\) for all \(T\in\Gamma(\Delta)\), and such that \(C_{T}\in\mathrm{span}\{I,J\}\) for all \(T\in\Gamma(\Delta)\). By Proposition 16 of [5], \(f\) is conformally ruled, with the leaves of the distribution \(\Delta\oplus\mathrm{ker}(J)\) as the rulings of \(f\). Let \(X,Y\in\Gamma(\Delta^{\perp})\) be an orthonormal basis of \(\Delta^{\perp}\) such that \(JX=Y\) and \(JY=0\), and let \(\lambda_{1},\lambda^{\prime}\in C^{\infty}(M)\) be such that \(\bar{A}X=\lambda_{1}X+\lambda^{\prime}Y\) and \(\bar{A}Y=\lambda^{\prime}X\). From (23) we see that \(\bar{\mathcal{B}}X=\mu\lambda^{\prime}X\) and \(\bar{\mathcal{B}}Y=0\). Since \(\mathcal{B}\) is not a multiple of the identity endomorphism, we have \(\lambda^{\prime}\neq 0\), and hence \(\mathrm{tr}\,\bar{\mathcal{B}}\neq 0\). It follows from Proposition 12 that \(\lambda_{1}=\mathrm{tr}\,\bar{A}=0\). It follows from the Codazzi equation that
\[\nabla^{h}_{T}\bar{A}=\bar{A}C_{T} \tag{25}\]
for any \(T\in\Gamma(\Delta)\). For a fixed \(T\in\Gamma(\Delta)\), write \(C_{T}=dI+eJ\) for some smooth functions \(d\) and \(e\). On one hand, \(\nabla^{h}_{T}\bar{A}X=\nabla^{h}_{T}\lambda^{\prime}Y=T(\lambda^{\prime})Y,\) where
we have used that \(\nabla^{h}_{T}Y=0\), for \(Y\) is tangent to the rulings. On the other hand,
\[\bar{A}C_{T}X=\bar{A}(dX+eY)=d\lambda^{\prime}Y+e\lambda^{\prime}X.\]
Therefore \(e=0\) by (25) and, since \(T\in\Gamma(\Delta)\) was chosen arbitrarily, it follows that \(C_{T}\in{\rm span}\{I\}\) for any \(T\in\Gamma(\Delta)\), a contradiction with our assumption.
Now assume that \(f\) is hyperbolic, that is, that there exists \(J\in\Gamma({\rm End}(\Delta^{\perp}))\) such that \(J^{2}=I\), with \(J\neq I\), \(\nabla^{h}_{T}J=0\) and such that \(C_{T}\in{\rm span}\{I,J\}\) for all \(T\in\Gamma(\Delta)\). Let \(\{X,Y\}\) be a frame of \(\Delta^{\perp}\) of unit eigenvectors of \(J\), with \(JX=X\) and \(JY=-Y\). Since \(\nabla^{h}_{T}J=0\) for all \(T\in\Gamma(\Delta)\), it follows that \(\nabla^{h}_{T}X=0=\nabla^{h}_{T}Y\). The symmetry of \(\bar{\cal B}=\mu\bar{A}J\) yields \(\langle\bar{A}X,Y\rangle=0\). Write \(\bar{A}X=\alpha X+\beta Y\) and \(\bar{A}Y=\gamma X+\delta Y\) for some smooth functions \(\alpha\), \(\beta\), \(\gamma\), \(\delta\). Then
\[\langle\bar{A}X,X\rangle=\alpha+\beta\langle Y,X\rangle,\ \ \ \langle\bar{A}Y,Y \rangle=\gamma\langle Y,X\rangle+\delta, \tag{26}\]
and from \(\langle\bar{A}X,Y\rangle=0=\langle X,\bar{A}Y\rangle\) we obtain
\[\alpha\langle X,Y\rangle+\beta=0=\gamma+\delta\langle X,Y\rangle. \tag{27}\]
On the other hand, writing as before \(C_{T}=dI+eJ\) for some smooth functions \(d\) and \(e\), Eq. (25) gives
\[\langle\nabla^{h}_{T}\bar{A}X,X\rangle=\langle\bar{A}C_{T}X,X\rangle=(d+e) \langle\bar{A}X,X\rangle, \tag{28}\]
and similarly,
\[\langle\nabla^{h}_{T}\bar{A}Y,Y\rangle=\langle\bar{A}C_{T}Y,Y\rangle=(d-e) \langle\bar{A}Y,Y\rangle. \tag{29}\]
Suppose that \({\rm tr}\,\bar{A}=0\). Then \(\alpha=-\delta\), hence (27) implies that \(\beta=-\gamma\). Thus \(\langle\bar{A}X,X\rangle=-\langle\bar{A}Y,Y\rangle\) by (26), and hence
\[\langle\nabla^{h}_{T}\bar{A}X,X\rangle=T\langle\bar{A}X,X\rangle=-T\langle \bar{A}Y,Y\rangle=-\langle\nabla^{h}_{T}\bar{A}Y,Y\rangle.\]
Comparing with (28) and (29) gives \(d+e=d-e\), for \(\langle\bar{A}X,X\rangle\neq 0\) by the assumption that rank \(\bar{A}=2\). Hence \(e=0\).
If \({\rm tr}\,\bar{\cal B}=0\), then (23) gives \(\alpha=\delta\), and hence \(\gamma=\beta\) by (27). Therefore \(\langle\bar{A}X,X\rangle=\langle\bar{A}Y,Y\rangle\) by (26), and hence
\[\langle\nabla^{h}_{T}\bar{A}X,X\rangle=T\langle\bar{A}X,X\rangle=T\langle\bar {A}Y,Y\rangle=\langle\nabla^{h}_{T}\bar{A}Y,Y\rangle.\]
Then we obtain as before that \(e=0\) by comparing with (28) and (29), and we conclude as in the parabolic case that \(C_{T}\in{\rm span}\{I\}\) for any \(T\in\Gamma(\Delta)\), a contradiction.
Finally, suppose that \(f\) is elliptic, that is, that there exists \(J\in\Gamma(\mbox{End}(\Delta^{\perp}))\) such that \(J^{2}=-I\), \(\nabla^{h}_{T}J=0\), and such that \(C_{T}\in\mbox{span}\{I,J\}\) for all \(T\in\Gamma(\Delta)\). Let \(\{X,Y\}\) be a frame of \(\Delta^{\perp}\) such that \(JX=Y\) and \(JY=-X\). This is equivalent to asking the complex vector fields \(X-iY\) and \(X+iY\) to be pointwise eigenvectors of the \(\mathbb{C}\)-linear extension of \(J\), also denoted by \(J\), associated to the eigenvalues \(i\) and \(-i\), respectively. Thus \(z(X-iY)=(sX+tY)+i(tX-sY)\) and \(z(X+iY)=(sX-tY)+i(tX+sY)\) are also eigenvectors of \(J\) associated to \(i\) and \(-i\), respectively, for any \(z=s+it\in\mathbb{C}\), that is, \(\bar{X}=sX+tY\) and \(\bar{Y}=-tX+sY\) form a new frame of \(\Delta^{\perp}\) such that \(J\bar{X}=\bar{Y}\) and \(J\bar{Y}=-\bar{X}\). It is easily seen that \(s\) and \(t\) can be chosen so that \(\bar{X}\) and \(\bar{Y}\) are unit vector fields. In summary, we can always choose a frame \(\{X,Y\}\) of _unit_ vector fields such that \(JX=Y\) and \(JY=-X\).
Since \(\nabla^{h}_{T}J=0\) for all \(T\in\Gamma(\Delta)\), then \(J\nabla^{h}_{T}X=\nabla^{h}_{T}Y\) and \(J\nabla^{h}_{T}Y=-\nabla^{h}_{T}X\). Denoting \(\hat{X}=\nabla^{h}_{T}X\) and \(\hat{Y}=\nabla^{h}_{T}Y\), it follows that \(\hat{X}-i\hat{Y}=(s+it)(X-iY)\) for some \(s+it\in\mathbb{C}\), that is,
\[\hat{X}=sX+tY\ \ \ \mbox{and}\ \ \ \hat{Y}=-tX+sY.\]
Since \(X\) and \(Y\) have unit length, then \(\langle\hat{X},X\rangle=0=\langle\hat{Y},Y\rangle\). Thus
\[s+t\langle X,Y\rangle=0=s-t\langle X,Y\rangle,\]
and hence \(t\langle X,Y\rangle=0=s\).
Assume that \(J\) is not an orthogonal tensor, that is, that \(\langle X,Y\rangle\neq 0\). Then \(s=0=t\), that is, \(\nabla^{h}_{T}X=0=\nabla^{h}_{T}Y\) for all \(T\in\Gamma(\Delta)\).
Write \(\bar{A}X=\alpha X+\beta Y\) and \(\bar{A}Y=\gamma X+\delta Y\) for some smooth functions \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\). The symmetry of \(\bar{\mathcal{B}}\) gives
\[\langle\bar{A}X,X\rangle+\langle\bar{A}Y,Y\rangle=0. \tag{30}\]
Then
\[\langle\bar{A}X,X\rangle=\alpha+\beta\langle Y,X\rangle,\ \ \ \langle\bar{A}Y,Y \rangle=\gamma\langle Y,X\rangle+\delta, \tag{31}\]
and from (30) and \(\langle\bar{A}X,Y\rangle=\langle X,\bar{A}Y\rangle\) we obtain, respectively,
\[(\alpha+\delta)+(\beta+\gamma)\langle X,Y\rangle=0 \tag{32}\]
and
\[\alpha\langle X,Y\rangle+\beta=\gamma+\delta\langle X,Y\rangle. \tag{33}\]
On the other hand, writing as before \(C_{T}=dI+eJ\) for some smooth functions \(d\) and \(e\), Eq. (25) gives
\[\langle\nabla^{h}_{T}\bar{A}X,Y\rangle=\langle\bar{A}C_{T}X,Y\rangle=d\langle \bar{A}X,Y\rangle+e\langle\bar{A}Y,Y\rangle. \tag{34}\]
Now assume that \(\mathop{\rm tr}\bar{A}=0\). Then \(\alpha=-\delta\), hence \(\beta=-\gamma\) by (32). Thus \(\langle\bar{A}X,Y\rangle=0\) by (33), and hence \(\langle\nabla^{h}_{T}\bar{A}X,Y\rangle=T\langle\bar{A}X,Y\rangle=0\). It follows from (34) that \(e=0\), for \(\langle\bar{A}Y,Y\rangle\neq 0\) by the assumption that rank \(\bar{A}=2\).
If \(\mathop{\rm tr}\bar{\mathcal{B}}=0\), then (23) gives \(\gamma=\beta\), hence \(\alpha=\delta\) by (33). Therefore \(\langle\bar{A}X,X\rangle=0=\langle\bar{A}Y,Y\rangle\) by (32) and (31). Then \(\langle\nabla^{h}_{T}\bar{A}X,X\rangle=T\langle\bar{A}X,X\rangle=0\), and, on the other hand,
\[\langle\nabla^{h}_{T}\bar{A}X,X\rangle = \langle\bar{A}C_{T}X,X\rangle\] \[= \langle\bar{A}(dX+eY),X\rangle\] \[= d\langle\bar{A}X,X\rangle+e\langle\bar{A}Y,X\rangle\] \[= e\langle\bar{A}Y,X\rangle.\]
It follows that \(e=0\), for \(\langle\bar{A}Y,X\rangle\neq 0\) by the assumption that \(f\) is free of points with a principal curvature of multiplicity at least \(n-1\). We conclude as in the previous cases that \(C_{T}\in\mathop{\rm span}\{I\}\) for any \(T\in\Gamma(\Delta)\), a contradiction.
It follows that \(J\) must be an orthogonal tensor, that is, \(\langle X,Y\rangle=0\). It remains to show that the tensor \(\bar{J}\in\mathop{\rm End}(TL)\) given by \(\bar{J}\circ\pi_{*}=\pi_{*}\circ J\) is also orthogonal. For this, we use the fact that the metric \(\langle\cdot,\cdot\rangle^{\prime}\) on \(L^{2}\) induced by \(s\) is related to the metric of \(M^{n}\) by
\[\langle\bar{Z},\bar{W}\rangle^{\prime}=\langle\bar{A}Z,\bar{A}W\rangle \tag{35}\]
for all \(\bar{Z},\bar{W}\in\mathfrak{X}(L)\), where \(Z\), \(W\) are the horizontal lifts of \(\bar{Z}\) and \(\bar{W}\), respectively. Let \(\bar{X}\in\mathfrak{X}(L)\) and denote by \(X\in\Gamma(\Delta^{\perp})\) its horizontal lift. Using the symmetry of \(\bar{A}J\), we have
\[\langle\bar{X},\bar{J}\bar{X}\rangle^{\prime} =\langle\bar{A}X,\bar{A}JX\rangle\] \[=\langle\bar{A}J\bar{A}X,X\rangle\] \[=\langle J\bar{A}X,\bar{A}X\rangle\] \[=0,\]
where in the last step we have used that \(J\) acts as a rotation of angle \(\pi/2\) on \(\Delta^{\perp}\). Using again the symmetry of \(\bar{A}J\), the proof of the orthogonality of \(\bar{J}\) is
completed by noticing that
\[\langle\bar{J}\bar{X},\bar{J}\bar{X}\rangle^{\prime} =\langle\bar{A}JX,\bar{A}JX\rangle\] \[=\langle J\bar{A}JX,\bar{A}X\rangle\] \[=\langle JJ^{t}\bar{A}X,\bar{A}X\rangle\] \[=-\langle J^{2}\bar{A}X,\bar{A}X\rangle\] \[=\langle\bar{X},\bar{X}\rangle^{\prime}.\ \ \vrule width 1px\]
For the proof of Theorem 1 we will also need the following fact (see Theorem 1 in [7] or Corollary 9.33 in [9]).
**Lemma 16**.: _Let \(f\colon M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 3\), be a hypersurface and let \(\Delta\) be an umbilical subbundle of rank \(n-2\) of the eigenbundle of \(f\) correspondent to a principal curvature of \(f\). Then \(f\) is conformally surface-like (with respect to the decomposition \(TM=\Delta^{\perp}\oplus\Delta\)) if and only if the splitting tensor \(C\colon\Gamma(\Delta)\to\Gamma(\text{End}(\Delta^{\perp}))\) of \(\Delta\) satisfies \(C(\Gamma(\Delta))\subset\text{span}\{I\}\)._
_Proof of Theorem 1_: Let \(\mathcal{U}\) and \(\Delta\) be, respectively, the open and dense subset of \(M^{n}\) and the distribution of rank \(n-2\) given by Proposition 12. By that result, \(\mathcal{U}\) splits as \(\mathcal{U}=\mathcal{U}_{1}\cup\mathcal{U}_{2}\), with \(\operatorname{tr}\bar{\mathcal{B}}=0\) on \(\mathcal{U}_{1}\) and \(\operatorname{tr}\bar{A}=0\) on \(\mathcal{U}_{2}\).
We also consider the decompositions \(\mathcal{U}=\mathcal{V}_{1}\cup\mathcal{V}_{2}\) and \(\mathcal{U}=\mathcal{W}_{1}\cup\mathcal{W}_{2}\), where \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) are the subsets where the dimension of \(\ker\bar{A}\) is either \(n-2\) or \(n-1\), respectively, \(\mathcal{W}_{2}\) is the subset where \(C(\Gamma(\Delta))\subset\text{span}\{I\}\) and \(\mathcal{W}_{1}=\mathcal{U}\setminus\mathcal{W}_{2}\).
In the following we denote by \(S^{0}\) the interior of the subset \(S\). We will show that the direct statement holds on the open and dense subset
\[\mathcal{U}^{*}=\mathcal{V}_{2}^{0}\cup(\mathcal{V}_{1}\cap\mathcal{W}_{1}) \cup(\mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{U}_{2}^{0})\cup( \mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{2}^{0})\cup(\mathcal{V }_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{1}^{0}),\]
where \(\mathcal{Y}_{1}\) and \(\mathcal{Y}_{2}\) are the subsets of \(\mathcal{U}_{1}\) given by Proposition 14.
It follows from Proposition 15 that the central sphere congruence of \(f|_{\mathcal{V}_{1}\cap\mathcal{W}_{1}}\) is determined by a minimal space-like surface \(s\colon L^{2}\to\mathbb{S}_{1}^{n+2}\).
The proof of the direct statement will be completed once we prove that, for each connected component \(\mathcal{W}\) of the subsets \(\mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{2}^{0}\), \(\mathcal{V}_{2}^{0}\), \(\mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{U}_{2}^{0}\) and \(\mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{1}^{0}\), respectively, \(f|_{\mathcal{W}}\) is a conformally surface-like hypersurface determined by a surface \(h\colon L^{2}\to\mathbb{Q}_{\epsilon}\), \(\epsilon\in\{-1,0,1\}\) of one of the following types:
* \((i)\) an isothermic surface;
* \((ii)\) a generalized cone over a unit-speed curve \(\gamma\colon J\to\mathbb{Q}_{c}^{2}\) in an umbilical surface \(\mathbb{Q}_{c}^{2}\subset\mathbb{Q}_{\epsilon}^{3}\), \(c\geq\epsilon\);
* \((iii)\) a minimal surface;
* \((iv)\) an umbilical surface.
Notice that any surface as in \((ii)\), \((iii)\) and \((iv)\) is also isothermic (for \(h\) as in \((ii)\) see Corollary 12 in [11]). Notice also that \(f|_{\mathcal{W}}\) being a conformally surface-like hypersurface determined by a surface \(h\colon L^{2}\to\mathbb{Q}_{\epsilon}\), \(\epsilon\in\{-1,0,1\}\), is equivalent to \(\mathcal{W}\) being (isometric to) a Riemannian product \(L^{2}\times N^{n-2}\) and to \(f|_{\mathcal{W}}\) being given by \(f|_{\mathcal{W}}=\mathcal{I}\circ\Phi\circ(h\times i)\), where \(i\) is the inclusion map of an open subset \(N^{n-2}\) of either \(\mathbb{R}^{n-2}\) or \(\mathbb{Q}_{-\epsilon}^{n-2}\), according to whether \(\epsilon\) is zero or not, \(\mathcal{I}\) is a Moebius transformation of \(\mathbb{R}^{n+1}\), and \(\Phi\) is the standard isometry \(\Phi\colon\mathbb{R}^{3}\times\mathbb{R}^{n-2}\to\mathbb{R}^{n+1}\) if \(\epsilon=0\) and, if \(\epsilon=-1\) or \(1\), respectively, the conformal diffeomorphism
* \(\Phi\colon\mathbb{H}^{3}\times\mathbb{S}^{n-2}\subset\mathbb{L}^{4}\times\mathbb{R}^{n-1}\to\mathbb{R}^{n+1}\setminus\mathbb{R}^{2}\), \(\Phi(x,y)=\frac{1}{x_{0}}(x_{1},x_{2},y)\) for all \(x=x_{0}e_{0}+x_{1}e_{1}+x_{2}e_{2}+x_{3}e_{3}\in\mathbb{L}^{4}\) and \(y=(y_{1},\ldots,y_{n-1})\in\mathbb{S}^{n-2}\subset\mathbb{R}^{n-1}\), where \(\{e_{0},e_{1},e_{2},e_{3}\}\) is a pseudo-orthonormal basis of the Lorentzian space \(\mathbb{L}^{4}\) with \(\langle e_{0},e_{0}\rangle=0=\langle e_{3},e_{3}\rangle\) and \(\langle e_{0},e_{3}\rangle=-1/2\).
* \(\Phi\colon\mathbb{S}^{3}\times\mathbb{H}^{n-2}\subset\mathbb{R}^{4}\times\mathbb{L}^{n-1}\to\mathbb{R}^{n+1}\setminus\mathbb{R}^{n-3}\), \(\Phi(x,y)=\frac{1}{y_{0}}(x,y_{1},\ldots,y_{n-3})\) for all \(x=(x_{1},\ldots,x_{4})\in\mathbb{S}^{3}\subset\mathbb{R}^{4}\) and \(y=y_{0}e_{0}+\cdots+y_{n-2}e_{n-2}\in\mathbb{H}^{n-2}\subset\mathbb{L}^{n-1}\), where \(\{e_{0},\ldots,e_{n-2}\}\) is a pseudo-orthonormal basis of \(\mathbb{L}^{n-1}\) with \(\langle e_{0},e_{0}\rangle=0=\langle e_{n-2},e_{n-2}\rangle\) and \(\langle e_{0},e_{n-2}\rangle=-1/2\).
_Case \((i)\):_ Let \(\mathcal{W}\) be a connected component of \(\mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{2}^{0}\). Since, in particular, \(\mathcal{W}\subset\mathcal{W}_{2}\), then \(C(\Gamma(\Delta))\subset\mathrm{span}\{I\}\) on \(\mathcal{W}\). By Lemma 16, \(f|_{\mathcal{W}}\) is a conformally surface-like hypersurface determined by a surface \(h\colon L^{2}\to\mathbb{Q}_{\epsilon}\), \(\epsilon\in\{-1,0,1\}\). Thus, with notations as in the preceding paragraph, we may write \(\mathcal{W}=L^{2}\times N^{n-2}\) and \(f|_{\mathcal{W}}=\mathcal{I}\circ\Phi\circ(h\times i)\). In particular, the distributions \(\Delta\) and \(\Delta^{\perp}\) are given by the tangent spaces to \(N^{n-2}\) and \(L^{2}\), respectively.
Denote by \(g_{1}\) the product metric of \(\mathbb{Q}_{\epsilon}^{3}\times\mathbb{Q}_{-\epsilon}^{n-2}\) and let \(g_{2}\) be the metric on \(\mathbb{Q}_{\epsilon}^{3}\times\mathbb{Q}_{-\epsilon}^{n-2}\) induced from the metric of \(\mathbb{R}^{n+1}\) by the conformal diffeomorphism \(\mathcal{I}\circ\Phi\). Let \(\varphi\in C^{\infty}(\mathbb{Q}_{\epsilon}^{3}\times\mathbb{Q}_{-\epsilon}^{n -2})\) be the conformal factor of \(g_{2}\) with respect to \(g_{1}\), that is, \(g_{2}=\varphi^{2}g_{1}\). Then the shape operators of \(F_{1}=h\times i\colon L^{2}\times N^{n-2}\to\mathbb{Q}_{\epsilon}^{3}\times \mathbb{Q}_{-\epsilon}^{n-2}\) and of \(F_{2}=F_{1}\colon(L^{2}\times N^{n-2},F_{1}^{*}g_{2})\to(\mathbb{Q}_{\epsilon }^{3}\times\mathbb{Q}_{-\epsilon}^{n-2},g_{2})\) with respect
to unit normal vector fields \(N_{1}\) and \(N_{2}=N_{1}/\varphi\), respectively, are related by
\[A^{F_{2}}_{N_{2}}=\frac{1}{\varphi\circ F_{1}}A^{F_{1}}_{N_{1}}-\frac{g_{1}( \nabla^{1}\varphi,N_{1})}{(\varphi\circ F_{1})^{2}}I. \tag{36}\]
We recall that the Levi-Civita connections \(\bar{\nabla}\) and \(\nabla\) of the metrics \(\bar{g}\) and \(g=\langle\,,\,\rangle\) on \(M^{n}=L^{2}\times N^{n-2}\) induced by \(F_{1}\) and \(F_{2}\), respectively, satisfy
\[\nabla_{X}Y=\bar{\nabla}_{X}Y+\frac{1}{\bar{\varphi}}(X(\bar{\varphi})Y+Y( \bar{\varphi})X-\bar{g}(X,Y)\bar{\nabla}\bar{\varphi}), \tag{37}\]
where \(\bar{\varphi}=\varphi\circ F_{1}\). It follows from (37) that the mean curvature vector field \(\delta\in\Gamma(\Delta^{\perp})\) of \(\Delta\) (with respect to the metric \(g\)) is
\[\delta=-\bar{\varphi}^{-3}(\bar{\nabla}\bar{\varphi})_{\Delta^{\perp}}. \tag{38}\]
Now, we can write (11) as
\[(\nabla_{X}\bar{\mathbb{B}})Y-(\nabla_{Y}\bar{\mathbb{B}})X+(X\wedge Y)(A \nabla\rho-\nabla b)=0, \tag{39}\]
for all \(X,Y\in\mathfrak{X}(M)\). The \(\Delta\)-component of (39) evaluated in unit vector fields \(Z\in\Gamma(\Delta^{\perp})\) and \(T\in\Gamma(\Delta)\) gives \(\langle\bar{\mathbb{B}}Z,\nabla_{T}T\rangle=\langle Z,A\nabla\rho-\nabla b\rangle\), or equivalently,
\[\bar{\mathbb{B}}\delta=A\nabla\rho-\nabla b. \tag{40}\]
Since \(\mathcal{W}\subset\mathcal{Y}^{0}_{2}\), there exists locally a smooth function \(\theta\) and an orthonormal frame \(\{X,Y\}\) of \(\Delta^{\perp}\) given by principal directions of \(f\) such that \(\bar{\mathbb{B}}X=\theta Y\) and \(\bar{\mathbb{B}}Y=\theta X\). From (37) we have \(\langle\nabla_{X}T,X\rangle=\langle\nabla_{Y}T,Y\rangle=T(\log\circ\bar{ \varphi})\). Evaluating (39) in \(T\) and \(X\) (or \(Y\)) gives
\[T(\theta)=-T(\log\circ\bar{\varphi})\theta, \tag{41}\]
whereas (39) evaluated in \(X\) and \(Y\) yields
\[X(\theta)=2\theta\langle\nabla_{Y}Y,X\rangle-\langle Y,A\nabla\rho-\nabla b\rangle \tag{42}\]
and
\[Y(\theta)=2\theta\langle\nabla_{X}X,Y\rangle-\langle X,A\nabla\rho-\nabla b\rangle. \tag{43}\]
Set \(\bar{\theta}=\bar{\varphi}\theta\). It follows from (41) that \(T(\bar{\theta})=0\) for any \(T\in\Gamma(\Delta)\). Thus \(\bar{\theta}\) induces a function on \(L^{2}\), which we also denote by \(\bar{\theta}\). By (36), the vector fields
\(\bar{X}=\bar{\varphi}X\) and \(\bar{Y}=\bar{\varphi}Y\) form an orthonormal frame of principal directions of \(h\). Using (37), (38), (40) and (42) we obtain
\[\bar{X}(\bar{\theta}) =\bar{X}(\bar{\varphi})\theta+\bar{\varphi}^{2}X(\theta)\] \[=\bar{X}(\bar{\varphi})\theta+\bar{\varphi}^{2}(2\theta(\nabla_{Y }Y,X)-\langle Y,A\nabla\rho-\nabla b\rangle)\] \[=\bar{X}(\bar{\varphi})\theta+\bar{\varphi}^{2}(2\theta(\bar{ \varphi}^{-1}\bar{g}(\bar{\nabla}_{\bar{Y}}\bar{Y},\bar{X})-\bar{\varphi}^{-2} \bar{X}(\bar{\varphi}))-\langle\bar{B}Y,\delta\rangle)\] \[=\bar{X}(\bar{\varphi})\theta+2\bar{\theta}\bar{g}(\bar{\nabla}_ {\bar{Y}}\bar{Y},\bar{X})-2\theta\bar{X}(\bar{\varphi})-\bar{\varphi}^{2} \theta\langle X,\delta\rangle\] \[=\bar{X}(\bar{\varphi})\theta+2\bar{\theta}\bar{g}(\bar{\nabla}_ {\bar{Y}}\bar{Y},\bar{X})-2\theta\bar{X}(\bar{\varphi})+\theta\bar{\varphi}^{ -1}\langle X,\bar{\nabla}\bar{\varphi}\rangle\] \[=\bar{X}(\bar{\varphi})\theta+2\bar{\theta}\bar{g}(\bar{\nabla}_ {\bar{Y}}\bar{Y},\bar{X})-2\theta\bar{X}(\bar{\varphi})+\theta\bar{X}(\bar{ \varphi})\] \[=2\bar{\theta}\bar{g}(\bar{\nabla}_{\bar{Y}}\bar{Y},\bar{X}). \tag{44}\]
A similar computation using (43) instead of (42) gives
\[\bar{Y}(\bar{\theta})=2\bar{\theta}\bar{g}(\bar{\nabla}_{\bar{X}}\bar{X},\bar {Y}). \tag{45}\]
Let \(\mathcal{B}^{*}\in\Gamma(\operatorname{End}(TL))\) be defined by \(\mathcal{B}^{*}X=\bar{\theta}\bar{Y}\) and \(\mathcal{B}^{*}Y=\bar{\theta}\bar{X}\). Then (44) and (45) are equivalent to \(\mathcal{B}^{*}\) being a Codazzi tensor on \(L^{2}\). By (36), the shape operator \(A^{h}\) of \(h\) is a multiple of \(\bar{A}|_{\Delta^{\perp}}\), which has rank two, for \(\mathcal{W}\subset\mathcal{V}_{1}\), and \(\mathcal{B}^{*}\) is a multiple of \(\bar{\mathcal{B}}|_{\Delta^{\perp}}\). Therefore, the fact that \(\bar{A}\) and \(\bar{\mathcal{B}}\) satisfy (22) implies that \(\mathcal{B}^{*}\) and \(A^{h}\) also satisfy (22) with respect to \(\bar{g}\). Since, in addition, \(\operatorname{tr}\mathcal{B}^{*}=0\), it follows from Proposition 8 of [11], together with Theorem 4.7 in [4] (also stated in [11] as Theorem 2), that \(h\) is locally infinitesimally Bonnet bendable, hence isothermic by Proposition 9 of [11].
_Case \((ii)\):_ First we show that the interior \(\mathcal{V}_{2}^{0}\) of the subset \(\mathcal{V}_{2}\) where \(\dim\ker\bar{A}=n-1\) is contained in \(\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{2}^{0}\). Clearly, \(\mathcal{V}_{2}\subset\mathcal{U}_{1}\), the subset where \(\operatorname{tr}\bar{B}=0\), for if \(\operatorname{tr}\bar{A}(x)=0\), that is, if \(x\in\mathcal{V}_{2}\cap\mathcal{U}_{2}\), then we would have \(\bar{A}(x)=0\), in contradiction with the assumption that \(f\) is free of umbilic points. Also, since \(\dim\ker\bar{A}=n-1\) on \(\mathcal{V}_{2}\), then \(\bar{A}|_{\Delta^{\perp}}\) can not be a multiple of the identity endomorphism \(I\in\Gamma(\operatorname{End}(\Delta^{\perp}))\) at any point of \(\mathcal{V}_{2}\). Thus \(\mathcal{V}_{2}\subset\mathcal{Y}_{2}\).
We now show that, on any connected component \(\mathcal{W}\) of \(\mathcal{V}_{2}^{0}\), the splitting tensor \(C\colon\Gamma(\Delta)\to\Gamma(\operatorname{End}(\Delta^{\perp}))\) of \(\Delta\) satisfies \(C(\Gamma(\Delta))\subset\operatorname{span}\{I\}\), which will yield the inclusion \(\mathcal{V}_{2}^{0}\subset\mathcal{W}_{2}^{0}\). So let \(\mathcal{W}\) be such a connected component. Since we already know that \(\mathcal{W}\subset\mathcal{Y}_{2}^{0}\), there exist locally smooth functions \(\theta,\mu\) and unit vector fields \(Y\in\Gamma(\Delta^{\perp}\cap\ker\bar{A})\) and \(X\in\Gamma((\ker\bar{A})^{\perp})\) such that \(\bar{A}X=\mu X\), \(\bar{\mathcal{B}}X=\theta Y\) and \(\bar{\mathcal{B}}Y=\theta X\).
Applying (39) to \(T\) and \(Y\) gives
\[\begin{split} T(\theta)X+\theta\nabla_{T}X-\bar{\mathcal{B}}\nabla _{T}Y+\bar{\mathcal{B}}\nabla_{Y}T+\langle\lambda\nabla\rho-\nabla b,Y\rangle T \\ -\langle\lambda\nabla\rho-\nabla b,T\rangle Y=0.\end{split} \tag{46}\]
Taking the \(X\)-component of (46) we obtain
\[T(\theta)=\theta\langle\nabla_{Y}Y,T\rangle, \tag{47}\]
whereas the \(Y\)-component and the \(T\)-component give, respectively,
\[\langle\lambda\nabla\rho-\nabla b,T\rangle=0 \tag{48}\]
and
\[\langle\lambda\nabla\rho-\nabla b,Y\rangle=\theta\langle\nabla_{T}T,X\rangle.\]
Applying (39) to \(T\) and \(X\) and using (48) give
\[T(\theta)Y+\theta\nabla_{T}Y-\bar{\mathcal{B}}\nabla_{T}X+\bar{\mathcal{B}} \nabla_{X}T+\langle\mu\nabla\rho-\nabla b,X\rangle T=0. \tag{49}\]
Taking the \(S\)-component of (49) for \(S\in\Gamma(\Delta)\) with \(\langle S,T\rangle=0\) gives
\[\langle\nabla_{T}S,Y\rangle=0,\]
and taking its \(T\)-component yields
\[\theta\langle\nabla_{T}T,Y\rangle=\langle\mu\nabla\rho-\nabla b,X\rangle.\]
Using that \(\ker\bar{A}=\{X\}^{\perp}\) is an umbilical distribution, it follows that the same holds for \(\Delta\).
Taking the \(X\)-component of (49) yields
\[\langle\nabla_{X}Y,T\rangle=0, \tag{50}\]
whereas the \(Y\)-component gives
\[T(\theta)=\theta\langle\nabla_{X}X,T\rangle. \tag{51}\]
It follows from (47), (50) and (51), taking into account that one also has \(\langle\nabla_{Y}X,T\rangle\ =\ 0\), that the distribution \(\Delta^{\perp}\) is umbilical with mean curvature vector field \(\zeta=(\nabla\log\theta)|_{\Delta}\), which is equivalent to the splitting tensor \(C\colon\Gamma(\Delta)\to\Gamma(\operatorname{End}(\Delta^{\perp}))\) of \(\Delta\) satisfying \(C_{T}=\langle\zeta,T\rangle I\) for all \(T\in\Gamma(\Delta)\).
Now that we know that \(\mathcal{V}_{2}^{0}\subset\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{2}^{0}\), the argument used in case \((i)\) shows that \(f|_{\mathcal{W}}\) is a conformally surface-like hypersurface determined by an isothermic surface \(h\colon L^{2}\to\mathbb{Q}_{\epsilon}^{3}\), \(\epsilon\in\{-1,0,1\}\). But since \(\operatorname{rank}\ \ker\bar{A}=n-1\) on \(\mathcal{V}_{2}\), then \(h\) has index of relative nullity equal to one at any point. By
Corollary 12 in [11], \(h\) is a generalized cone over a unit-speed curve \(\gamma\colon J\to\mathbb{Q}_{c}^{2}\) in an umbilical surface \(\mathbb{Q}_{c}^{2}\subset\mathbb{Q}_{\epsilon}^{3}\), \(c\geq\epsilon\).
_Cases \((iii)\) and \((iv)\):_ Let \(\mathcal{W}\) be a connected component of \(\mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{U}_{2}^{0}\) (respectively, \(\mathcal{V}_{1}\cap\mathcal{W}_{2}^{0}\cap\mathcal{Y}_{1}^{0}\)). As in cases \((i)\) and \((ii)\), \(f|_{\mathcal{W}}\) is a conformally surface-like hypersurface determined by a surface \(h\colon L^{2}\to\mathbb{Q}_{\epsilon}\), \(\epsilon\in\{-1,0,1\}\), by Lemma 16. Since \(\operatorname{tr}\bar{A}=0\) on \(\mathcal{U}_{2}\) (respectively, \(\bar{A}|_{\Delta^{\perp}}\) is a multiple of the identity endomorphism of \(\Delta^{\perp}\) on \(\mathcal{Y}_{1}\)), it follows from (36) that also \(\operatorname{tr}A^{h}=0\) (respectively, \(A^{h}\) is a multiple of the identity endomorphism of \(TL\)), hence \(h\) is a minimal surface (respectively, \(h\) is umbilical).
We now prove the converse. Assume first that \(f\colon M^{n}\to\mathbb{R}^{n+1}\) is a simply connected hypersurface whose central sphere congruence is determined by a minimal space-like surface \(s\colon L^{2}\to\mathbb{S}_{1}^{n+2}\). Let \(\bar{J}\in\Gamma(\operatorname{End}(TL))\) represent a rotation of angle \(\pi/2\), and let \(\bar{X},\bar{Y}\) be an orthonormal frame satisfying \(\bar{J}\bar{X}=\bar{Y}\) and \(\bar{J}\bar{Y}=-\bar{X}\). Then \(\bar{J}\) is parallel with respect to the Levi-Civita connection \(\nabla^{\prime}\) on \(L^{2}\), hence it is, in particular, a Codazzi tensor on \(L^{2}\). Since \(s\) is minimal, \(\alpha^{\prime}(\bar{X},\bar{X})+\alpha^{\prime}(\bar{Y},\bar{Y})=0\), hence \(s\) is a special elliptic surface by Proposition 11 in [5].
By Theorem 1 in [5], \(f\) admits a non-trivial conformal infinitesimal bending \(\mathcal{T}\). We now show that \(\mathcal{T}\) is also an infinitesimal Moebius bending. Let \(X,Y\in\mathfrak{X}(M)\) be the lifts of \(\bar{X}\) and \(\bar{Y}\). From (35) we see that \(\bar{A}X\) and \(\bar{A}Y\) form an orthonormal frame of \(\Delta^{\perp}\). Let \(J\in\Gamma(\operatorname{End}(\Delta^{\perp}))\) be the lift of \(\bar{J}\). It was shown in the proof of the converse of Theorem 1 in [5] that \(\bar{A}J\) is symmetric. Thus
\[\langle J\bar{A}X,\bar{A}X\rangle=\langle\bar{A}J\bar{A}X,X\rangle=\langle\bar {A}X,\bar{A}JX\rangle=\langle\bar{X},\bar{J}\bar{X}\rangle^{\prime}=0.\]
Similarly, \(\langle J\bar{A}Y,\bar{A}Y\rangle=0\), \(\langle J\bar{A}X,\bar{A}Y\rangle=-1\) and \(\langle J\bar{A}Y,\bar{A}X\rangle=1\). Hence \(J\) is an orthogonal tensor, and the symmetry of \(\bar{A}J\) implies that \(\operatorname{tr}\bar{A}=0\). By Proposition 12, \(\mathcal{T}\) is an infinitesimal Moebius bending.
Now let \(f\colon M^{n}\to\mathbb{R}^{n+1}\) be a conformally surface-like hypersurface determined by an isothermic surface \(h\colon L^{2}\to\mathbb{Q}_{\epsilon}\), \(\epsilon\in\{-1,0,1\}\). Then \(h\) is locally infinitesimally Bonnet bendable (see, e.g., Proposition 9 and Remark 11 of [11]), that is, it admits locally a non-trivial infinitesimal variation \(h_{t}\colon L^{2}\to\mathbb{Q}_{\epsilon}\) such that the metrics \(\bar{g}_{t}\) induced by the \({h_{t}}^{\prime}s\) and their mean curvatures \(\mathcal{H}_{t}\) satisfy \(\partial/\partial t|_{t=0}\bar{g}_{t}=0=\partial/\partial t|_{t=0}\mathcal{H} _{t}.\) Thus also \(\partial/\partial t|_{t=0}K_{t}=0\), where \(K_{t}\) is the Gauss curvature of \(\bar{g}_{t}\).
Let \(f_{t}\) be the variation of \(f\) given by the conformally surface-like hypersurfaces determined by \(h_{t}\). The Moebius metric of \(f_{t}\) is (see Remark 3.7 in
[13])
\[\left(4\mathcal{H}_{t}^{2}-\frac{2n}{n-1}(K_{t}-\epsilon)\right)(\bar{g}_{t}+g_{- \epsilon}).\]
Here \(g_{-\epsilon}\) is the metric of \(\mathbb{Q}_{-\epsilon}^{n-2}\) and \(\bar{g}_{t}+g_{-\epsilon}\) denotes the product metric on \(L^{2}\times\mathbb{Q}_{-\epsilon}^{n-2}\). Therefore the immersions \(f_{t}\) determine an infinitesimal Moebius variation of \(f\). It remains to argue that the latter is non-trivial.
From Proposition 10 we know that the associated tensor \(\mathcal{B}\) satisfies (19). On the other hand, by (36) the shape operator \(A_{t}\) of \(f_{t}\) has the form
\[A_{t}=\delta_{1}(t)\bar{A}_{t}+\delta_{2}(t)I, \tag{52}\]
for some smooth functions \(\delta_{1}\) and \(\delta_{2}\), with \(\delta_{1}(0)\neq 0\). Here \(\bar{A}_{t}\) denotes the second fundamental form of \(h_{t}\) extended to \(TM\) by defining \(\bar{A}_{t}T=0\) for any \(T\) tangent to \(\mathbb{Q}_{-\epsilon}^{n-2}\). Since \(\partial/\partial t|_{t=0}\bar{A}_{t}\neq 0\), for the \(h_{t}\) determine a non-trivial infinitesimal Bonnet variation of \(h\), it follows from (19) and (52) that \(\mathcal{B}\) is not a multiple of the identity endomorphism. Hence the infinitesimal Moebius variation of \(f\) determined by \(f_{t}\) is non-trivial (see Remarks 4-2)).
|
2306.09148 | A Recursive Newton Method for Smoothing in Nonlinear State Space Models | In this paper, we use the optimization formulation of nonlinear Kalman
filtering and smoothing problems to develop second-order variants of iterated
Kalman smoother (IKS) methods. We show that Newton's method corresponds to a
recursion over affine smoothing problems on a modified state-space model
augmented by a pseudo measurement. The first and second derivatives required in
this approach can be efficiently computed with widely available automatic
differentiation tools. Furthermore, we show how to incorporate line-search and
trust-region strategies into the proposed second-order IKS algorithm in order
to regularize updates between iterations. Finally, we provide numerical
examples to demonstrate the method's efficiency in terms of runtime compared to
its batch counterpart. | Fatemeh Yaghoobi, Hany Abdulsamad, Simo Särkkä | 2023-06-15T14:09:07Z | http://arxiv.org/abs/2306.09148v1 | # A Recursive Newton Method for Smoothing in Nonlinear State Space Models
###### Abstract
In this paper, we use the optimization formulation of nonlinear Kalman filtering and smoothing problems to develop second-order variants of iterated Kalman smoother (IKS) methods. We show that Newton's method corresponds to a recursion over affine smoothing problems on a modified state-space model augmented by a pseudo measurement. The first and second derivatives required in this approach can be efficiently computed with widely available automatic differentiation tools. Furthermore, we show how to incorporate line-search and trust-region strategies into the proposed second-order IKS algorithm in order to regularize updates between iterations. Finally, we provide numerical examples to demonstrate the method's efficiency in terms of runtime compared to its batch counterpart.
Newton's method, state-space model, iterated Kalman filter and smoother, line search, trust region.
## I Introduction
The state estimation problem in nonlinear state-space models (SSMs) plays an important role in various areas of application, such as control theory, signal processing, and robotics [1, 2, 3]. In this paper, we are interested in solving state estimation problems in SSMs of the form
\[\mathbf{x}_{k}=\mathbf{f}(\mathbf{x}_{k-1})+\mathbf{q}_{k-1},\quad\mathbf{y}_ {k}=\mathbf{h}(\mathbf{x}_{k})+\mathbf{r}_{k}, \tag{1}\]
where \(\mathbf{x}_{k}\in\mathbb{R}^{d}\) is the state at time step \(k\), \(\mathbf{y}_{k}\in\mathbb{R}^{m}\) is the measurement at the same time step, \(\mathbf{f}(.)\) is the state transition function, and \(\mathbf{h}(.)\) is the observation function. Furthermore, \(\mathbf{q}_{k}\) and \(\mathbf{r}_{k}\) are the process and measurement noises, assumed to be Gaussian with zero mean and covariance matrices \(\mathbf{Q}\) and \(\mathbf{R}\), respectively. The prior distribution of the state at \(k=0\) is Gaussian with known mean \(\mathbf{m}_{0}\) and covariance \(\mathbf{P}_{0}\).
The smoothing problem (see, e.g., [1]) amounts to computing the estimate of the state \(\mathbf{x}_{k}\) given a batch of measurements \(\mathbf{y}_{1},\ldots,\mathbf{y}_{N}\), where \(k\in\{0,\ldots,N\}\). The Kalman filter [4] and Rauch-Tung-Striebel (RTS) smoother [5] for linear SSMs and their extensions for nonlinear systems (see, e.g., [6, 1, 2, 7, 8, 9, 10, 11]) provide powerful recursive solutions which are optimal in the minimum mean squared error (MMSE) sense.
On the other hand, the smoothing problem can be viewed in an optimization framework (see, e.g., [8, 12]), where the aim is to find the maximum a posteriori (MAP) trajectory estimate, that is, the trajectory \(\mathbf{x}_{0:N}^{*}\) which maximizes \(p(\mathbf{x}_{0:N}\mid\mathbf{y}_{1:N})\).
For the SSM of the form (1), the MAP estimate is the minimizer of the negative log-posterior
\[\mathbf{x}_{0:N}^{*}=\operatorname*{arg\,min}_{\mathbf{x}_{0:N}}\ L(\mathbf{x }_{0:N}), \tag{2}\]
where the negative log-posterior is given by
\[L(\mathbf{x}_{0:N})=\frac{1}{2}\|\mathbf{x}_{0}-\mathbf{m}_{0}\|_{\mathbf{P}_{0}^{-1}}^{2}+\frac{1}{2}\sum_{k=1}^{N}\|\mathbf{x}_{k}-\mathbf{f}(\mathbf{x}_{k-1})\|_{\mathbf{Q}^{-1}}^{2}\] \[\quad+\frac{1}{2}\sum_{k=1}^{N}\|\mathbf{y}_{k}-\mathbf{h}(\mathbf{x}_{k})\|_{\mathbf{R}^{-1}}^{2},\ \ \mathrm{with}\ \|\mathbf{x}\|_{\mathbf{A}}^{2}\coloneqq\mathbf{x}^{\top}\mathbf{A}\mathbf{x}. \tag{3}\]
Viewing the state estimation problem from an optimization standpoint enables us to employ several optimization techniques [13]. One widely-used example in the filtering and smoothing literature is the Gauss-Newton (GN) method [14, 15, 16], which has a close relationship with iterated extended Kalman filtering and smoothing methods [8, 17]. In particular, for nonlinear SSMs with additive Gaussian noise, Bell [8] proved that the GN-method is equivalent to the iterated extended Kalman smoother (IEKS), a recursive method with less computational complexity than batch GN-methods. Recently, Särkkä & Svensson [12] developed line-search and Levenberg-Marquardt extensions of the IEKS method.
Newton's method has received less attention as an optimization method to solve smoothing problems due to the effort associated with computing second-order derivatives. However, the availability of automatic differentiation tools has eliminated the need for manual computation, making Newton's method attractive for smoothing problems. Although the application of Newton's method to filtering and smoothing has been mentioned in literature [18, 19, 20], the full Newton version of the IKS is yet to be realized.
The contribution of this paper is to develop the Newton formulation of iterated Kalman smoothers while leveraging automatic differentiation tools to compute the derivatives and Hessians. We also present robust modifications of the proposed method that incorporate line-search and trust-region schemes into the recursive structure.
This paper is structured as follows: Section II presents Newton's method for the MAP problem in batch and recursive form. Section III presents line-search and trust-region strategies to enhance the robustness of iterative Newton updates. Section IV analyzes the efficiency of the proposed recursive methods in the sense of runtime on a numerical example.
## II Newton Iterated Kalman Smoother
Assuming a SSM of the form (1) and the objective specified in Equation (3), our aim, in this section, is to use Newton's optimization technique to minimize the objective function and develop the corresponding batch solution. Subsequently, we present a recursive alternative analogous to the IKS to improve computational efficiency.
### _Batch Newton Optimization_
The batch solution for smoothing follows the standard iterative optimization framework without specifically leveraging the underlying temporal structure of the problem. Accordingly, we can implement Newton's method as a generic second-order optimization of Equation (3) with respect to a decision variable \(\mathbf{x}_{0:N}\) with a dimension \(d_{N}=d\times N\).
At every iteration \(i\), Newton's method approximates a twice differentiable objective \(L(\mathbf{x}_{0:N})\) up to the second order in the neighborhood of a nominal trajectory \(\hat{\mathbf{x}}_{0:N}^{(i)}\)
\[L(\mathbf{x}_{0:N})\approx \,L(\hat{\mathbf{x}}_{0:N}^{(i)})+\nabla L^{\top}(\hat{\mathbf{x} }_{0:N}^{(i)})(\mathbf{x}_{0:N}-\hat{\mathbf{x}}_{0:N}^{(i)}) \tag{4}\] \[+\frac{1}{2}(\mathbf{x}_{0:N}-\hat{\mathbf{x}}_{0:N}^{(i)})^{\top }\nabla^{2}L(\hat{\mathbf{x}}_{0:N}^{(i)})(\mathbf{x}_{0:N}-\hat{\mathbf{x}}_ {0:N}^{(i)}),\]
where \(\nabla L(.)\) and \(\nabla^{2}L(.)\) denote the gradient and the Hessian of \(L(.)\), respectively. Using this quadratic approximation, we get the Newton update rule
\[\hat{\mathbf{x}}_{0:N}^{(i+1)}=\hat{\mathbf{x}}_{0:N}^{(i)}-\left(\nabla^{2}L (\hat{\mathbf{x}}_{0:N}^{(i)})+\lambda\,\mathbf{I}_{d_{N}}\right)^{-1}\nabla L (\hat{\mathbf{x}}_{0:N}^{(i)}). \tag{5}\]
Note that we have included a diagonal regularization term \(\lambda\,\mathbf{I}_{d_{N}}\), with \(\lambda\geq 0\), to ensure a positive-definite Hessian and a valid descent direction.
Despite the convenience of automatic differentiation frameworks that readily deliver \(\nabla L(.)\) and \(\nabla^{2}L(.)\), the computational effort associated with the Newton update in Equation (5) is still a major issue. The Hessian \(\nabla^{2}L(.)\) is of dimensions \(d_{N}\times d_{N}\), and its inversion leads to a worst-case computational complexity \(\mathcal{O}(N^{3}d^{3})\), which scales poorly both in the state dimension and the trajectory length.
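As an illustration of how such an update can be assembled with automatic differentiation, a minimal Python/JAX sketch of the objective in Equation (3) and the regularized batch step in Equation (5) is given below; the model functions `f` and `h`, the noise precisions, and all other names are placeholders rather than the implementation used in the paper, and the whole trajectory is flattened into a single decision variable.

```python
import jax
import jax.numpy as jnp

def make_objective(f, h, y, m0, P0_inv, Q_inv, R_inv, d):
    """Negative log-posterior L(x_{0:N}) of Eq. (3) as a function of the flat trajectory."""
    def L(x_flat):
        x = x_flat.reshape(-1, d)               # trajectory of shape (N + 1, d)
        rp = x[0] - m0                          # prior residual
        rd = x[1:] - jax.vmap(f)(x[:-1])        # transition residuals
        ro = y - jax.vmap(h)(x[1:])             # measurement residuals
        return 0.5 * (rp @ P0_inv @ rp
                      + jnp.einsum('ki,ij,kj->', rd, Q_inv, rd)
                      + jnp.einsum('ki,ij,kj->', ro, R_inv, ro))
    return L

def batch_newton_step(L, x_flat, lam=0.0):
    """One regularized Newton update, Eq. (5); the dense solve costs O(N^3 d^3)."""
    g = jax.grad(L)(x_flat)
    H = jax.hessian(L)(x_flat)
    return x_flat - jnp.linalg.solve(H + lam * jnp.eye(x_flat.shape[0]), g)
```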
In the following, we will rely on the quadratic approximation in Equation (4). However, by taking advantage of the temporal structure of the state-space model, we will construct a modified affine state-space model and derive a recursive algorithm akin to the iterated Kalman smoother, leading to a considerable reduction in computational complexity.
### _Recursive Newton Optimization_
Constructing the modified state-space model requires analyzing the first- and second-order approximations of the individual terms in Equation (3). We start by considering the approximation of the transition dynamics term. For convenience, we define
\[S(\mathbf{x}_{0:N})\coloneqq\sum_{k=1}^{N}S_{k}(\mathbf{x}_{k},\mathbf{x}_{k- 1})=\sum_{k=1}^{N}\lVert\mathbf{x}_{k}-\mathbf{f}(\mathbf{x}_{k-1})\rVert_{ \mathbf{Q}^{-1}}^{2},\]
and expand it around the current nominal trajectory \(\hat{\mathbf{x}}_{0:N}^{(i)}\). For a simplified notation, we drop the iteration index \(i\)
\[S(\mathbf{x}_{0:N})\approx \,\frac{1}{2}\delta\mathbf{x}_{0:N}^{\top}\,\nabla^{2}S(\hat{ \mathbf{x}}_{0:N})\,\delta\mathbf{x}_{0:N} \tag{6}\] \[+\nabla S^{\top}(\hat{\mathbf{x}}_{0:N})\,\delta\mathbf{x}_{0:N} +S(\hat{\mathbf{x}}_{0:N}),\]
where \(\delta\mathbf{x}_{0:N}=\mathbf{x}_{0:N}-\hat{\mathbf{x}}_{0:N}\) and
\[\nabla S^{\top}(\hat{\mathbf{x}}_{0:N})\,\delta\mathbf{x}_{0:N} =2\sum_{k=1}^{N}(\hat{\mathbf{x}}_{k}-\mathbf{f}(\hat{\mathbf{x}}_{k-1}))^{\top}\mathbf{Q}^{-1}\,\delta\mathbf{x}_{k} \tag{7}\] \[-2\sum_{k=1}^{N}(\hat{\mathbf{x}}_{k}-\mathbf{f}(\hat{\mathbf{x}}_{k-1}))^{\top}\mathbf{Q}^{-1}\mathbf{F}_{\mathbf{x}}(\hat{\mathbf{x}}_{k-1})\,\delta\mathbf{x}_{k-1},\] \[\frac{1}{2}\delta\mathbf{x}_{0:N}^{\top}\,\nabla^{2}S(\hat{\mathbf{x}}_{0:N})\,\delta\mathbf{x}_{0:N}=\sum_{k=1}^{N}\delta\mathbf{x}_{k}^{\top}\mathbf{Q}^{-1}\delta\mathbf{x}_{k}\] (8) \[-2\sum_{k=1}^{N}\delta\mathbf{x}_{k}^{\top}\mathbf{Q}^{-1}\mathbf{F}_{\mathbf{x}}(\hat{\mathbf{x}}_{k-1})\,\delta\mathbf{x}_{k-1}\] \[+\sum_{k=1}^{N}\delta\mathbf{x}_{k-1}^{\top}\mathbf{F}_{\mathbf{x}}^{\top}(\hat{\mathbf{x}}_{k-1})\,\mathbf{Q}^{-1}\mathbf{F}_{\mathbf{x}}(\hat{\mathbf{x}}_{k-1})\,\delta\mathbf{x}_{k-1}\] \[-\sum_{k=1}^{N}\delta\mathbf{x}_{k-1}^{\top}\mathbf{F}_{\mathbf{x}\mathbf{x}}^{\top}(\hat{\mathbf{x}}_{k-1})\cdot\mathbf{Q}^{-1}(\hat{\mathbf{x}}_{k}-\mathbf{f}(\hat{\mathbf{x}}_{k-1}))\,\delta\mathbf{x}_{k-1},\]
where \(\mathbf{F}_{\mathbf{x}}(.)\) is the Jacobian and \(\mathbf{F}_{\mathbf{x}\mathbf{x}}(.)\) is a third-rank Hessian tensor of the transition function \(\mathbf{f}(.)\). The notation \((M\cdot v)\) refers to a tensor dot product so that \((M\cdot v)_{ij}=\sum_{k}M_{ijk}v_{k}\).
Plugging Equations (7) and (8) into Equation (6) and applying simple algebraic manipulations, we arrive at the following decomposition of the quadratic expansion in Equation (6)
\[\begin{split} S(\mathbf{x}_{0:N})\approx&\sum_{k=1}^{N}\lVert\mathbf{x}_{k}-\mathbf{F}_{k-1}\,\mathbf{x}_{k-1}-\mathbf{b}_{k-1}\rVert_{\mathbf{Q}^{-1}}^{2}\\ &+\sum_{k=1}^{N}\lVert\hat{\mathbf{x}}_{k-1}-\mathbf{x}_{k-1}\rVert_{\mathbf{\Psi}_{k-1}}^{2},\end{split} \tag{9}\]
where
\[\mathbf{F}_{k-1}=\mathbf{F}_{\mathbf{x}}(\hat{\mathbf{x}}_{k-1}),\] \[\mathbf{b}_{k-1}=\mathbf{f}(\hat{\mathbf{x}}_{k-1})-\mathbf{F}_{ \mathbf{x}}(\hat{\mathbf{x}}_{k-1})\,\hat{\mathbf{x}}_{k-1},\] \[\mathbf{\Psi}_{k-1}=-\mathbf{F}_{\mathbf{x}\mathbf{x}}^{\top}(\hat{ \mathbf{x}}_{k-1})\cdot\mathbf{Q}^{-1}(\hat{\mathbf{x}}_{k}-\mathbf{f}(\hat{ \mathbf{x}}_{k-1})).\]
A similar second-order expansion can be carried out for the observation model term in Equation (3). Again, for convenience, we define the following
\[G(\mathbf{x}_{0:N})\coloneqq\sum_{k=1}^{N}G_{k}(\mathbf{x}_{k})=\sum_{k=1}^{N }\lVert\mathbf{y}_{k}-\mathbf{h}(\mathbf{x}_{k})\rVert_{\mathbf{R}^{-1}}^{2},\]
and expand it to the second order around \(\hat{\mathbf{x}}_{0:N}\)
\[\begin{split} G(\mathbf{x}_{0:N})\approx&\,\frac{1 }{2}\delta\mathbf{x}_{0:N}^{\top}\,\nabla^{2}G(\hat{\mathbf{x}}_{0:N})\, \delta\mathbf{x}_{0:N}\\ &+\nabla G^{\top}(\hat{\mathbf{x}}_{0:N})\,\delta\mathbf{x}_{0:N} +G(\hat{\mathbf{x}}_{0:N}),\end{split} \tag{10}\]
where the linear and quadratic terms are
\[\nabla G^{\top}(\hat{\mathbf{x}}_{0:N})\,\delta\mathbf{x}_{0:N}=-2 \sum_{k=1}^{N}(\mathbf{y}_{k}-\mathbf{h}(\hat{\mathbf{x}}_{k}))^{\top}\mathbf{R}^{ -1}\mathbf{H}_{\mathbf{x}}(\hat{\mathbf{x}}_{k})\,\delta\mathbf{x}_{k},\] \[\frac{1}{2}\delta\mathbf{x}_{0:N}^{\top}\,\nabla^{2}G(\hat{ \mathbf{x}}_{0:N})\,\delta\mathbf{x}_{0:N}\!=\!\sum_{k=1}^{N}\delta\mathbf{x}_ {k}^{\top}\mathbf{H}_{\mathbf{x}}^{\top}(\hat{\mathbf{x}}_{k})\,\mathbf{R}^{- 1}\mathbf{H}_{\mathbf{x}}(\hat{\mathbf{x}}_{k})\delta\mathbf{x}_{k}\] \[-\sum_{k=1}^{N}\delta\mathbf{x}_{k}^{\top}\mathbf{H}_{\mathbf{xx} }^{\top}(\hat{\mathbf{x}}_{k})\cdot\mathbf{R}^{-1}(\mathbf{y}_{k}-\mathbf{h}( \hat{\mathbf{x}}_{k}))\,\delta\mathbf{x}_{k}.\]
The matrix \(\mathbf{H}_{\mathbf{x}}(.)\) is the Jacobian and \(\mathbf{H}_{\mathbf{xx}}(.)\) is a third-rank Hessian tensor of the observation function \(\mathbf{h}(.)\). Similarly, by rearranging these terms, we can construct a specific decomposition of the quadratic expansion in Equation (10)
\[G(\mathbf{x}_{0:N})\!\approx\!\sum_{k=1}^{N}\!\|\mathbf{y}_{k}\!-\!\mathbf{H}_{k}\,\mathbf{x}_{k}\!-\!\mathbf{c}_{k}\|_{\mathbf{R}^{-1}}^{2}\!+\!\sum_{k=1}^{N}\!\|\hat{\mathbf{x}}_{k}\!-\!\mathbf{x}_{k}\|_{\mathbf{\Gamma}_{k}}^{2}, \tag{11}\]
where
\[\mathbf{H}_{k}=\mathbf{H}_{\mathbf{x}}(\hat{\mathbf{x}}_{k}),\] \[\mathbf{c}_{k}=\mathbf{h}(\hat{\mathbf{x}}_{k})-\mathbf{H}_{ \mathbf{x}}(\hat{\mathbf{x}}_{k})\,\hat{\mathbf{x}}_{k},\] \[\mathbf{\Gamma}_{k}=-\mathbf{H}_{\mathbf{xx}}^{\top}(\hat{\mathbf{ x}}_{k})\cdot\mathbf{R}^{-1}(\mathbf{y}_{k}-\mathbf{h}(\hat{\mathbf{x}}_{k})).\]
We can now take the second-order terms of the transition and observation functions in Equations (9) and (11) and plug them back into the objective in Equation (3) which leads to the overall (regularized) second-order approximation
\[\tilde{L}(\mathbf{x}_{0:N})=\,\frac{1}{2}\|\mathbf{x}_{0}-\mathbf{ m}_{0}\|_{\mathbf{P}_{0}^{-1}}^{2}+\frac{1}{2}\|\mathbf{x}_{0}-\hat{\mathbf{x} }_{0}\|_{\mathbf{\Phi}_{0}^{-1}}^{2}\] \[\qquad+\frac{1}{2}\sum_{k=1}^{N}\!\|\hat{\mathbf{x}}_{k}-\mathbf{ x}_{k}\|_{\mathbf{\Phi}_{k}^{-1}}^{2}+\frac{1}{2}\sum_{k=1}^{N}\!\|\mathbf{y}_{k}- \mathbf{H}_{k}\,\mathbf{x}_{k}-\mathbf{c}_{k}\|_{\mathbf{R}^{-1}}^{2}\] \[\qquad+\frac{1}{2}\sum_{k=1}^{N}\!\|\mathbf{x}_{k}-\mathbf{F}_{k -1}\,\mathbf{x}_{k-1}-\mathbf{b}_{k-1}\|_{\mathbf{Q}^{-1}}^{2}, \tag{12}\]
where
\[\mathbf{\Phi}_{0}=(\mathbf{\Psi}_{0}+\lambda\,\mathbf{I}_{d})^{-1},\] \[\mathbf{\Phi}_{k}=(\mathbf{\Psi}_{k}+\mathbf{\Gamma}_{k}+ \lambda\,\mathbf{I}_{d})^{-1},\] \[\mathbf{\Phi}_{N}=(\mathbf{\Gamma}_{N}+\lambda\,\mathbf{I}_{d})^ {-1}.\]
The result in Equation (12) indicates that the second-order approximation of \(L(.)\) can be viewed as a first-order approximation of the functions \(\mathbf{f}\) and \(\mathbf{h}\), augmented by an affine pseudo observation model, in which the expansion point \(\hat{\mathbf{x}}_{k}\) acts as a pseudo measurement of the state \(\mathbf{x}_{k}\). This interpretation of (12) corresponds to the _modified_ state-space model of the form
\[\mathbf{x}_{k}\approx\mathbf{F}_{k-1}\,\mathbf{x}_{k-1}+\mathbf{b }_{k-1}+\mathbf{q}_{k}, \mathbf{q}_{k}\sim\mathcal{N}(0,\mathbf{Q}),\] \[\mathbf{y}_{k}\approx\mathbf{H}_{k}\,\mathbf{x}_{k}+\mathbf{c}_{ k}+\mathbf{r}_{k}, \mathbf{r}_{k}\sim\mathcal{N}(0,\mathbf{R}),\] \[\hat{\mathbf{x}}_{k}\approx\mathbf{x}_{k}+\mathbf{e}_{k}, \mathbf{e}_{k}\sim\mathcal{N}(0,\mathbf{\Phi}_{k}),\]
with a _modified_ prior distribution \(\mathbf{x}_{0}\sim\mathcal{N}(\tau_{0},\mathbf{\Omega}_{0})\)
\[\mathbf{\Omega}_{0}=(\mathbf{P}_{0}^{-1}+\mathbf{\Phi}_{0}^{-1})^ {-1},\] \[\tau_{0}=(\mathbf{P}_{0}^{-1}+\mathbf{\Phi}_{0}^{-1})^{-1}\,( \mathbf{P}_{0}^{-1}\mathbf{m}_{0}+\mathbf{\Phi}_{0}^{-1}\hat{\mathbf{x}}_{0}).\]
Note that we have again included a diagonal term \(\lambda\,\mathbf{I}_{d}\) equivalent to that in Section II-A. In this modified state-space model, \(\lambda\,\mathbf{I}_{d}\) can be interpreted as regularization of the pseudo observation model to guarantee a positive-definite covariance and well-defined Gaussian noise. The significance of this regularization will become clear in the upcoming section.
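For concreteness, the quantities entering this modified model can be evaluated at a nominal trajectory with automatic differentiation. The following hypothetical sketch (ours, not the authors' code) computes \(\mathbf{F}_{k}\), \(\mathbf{b}_{k}\), \(\mathbf{H}_{k}\), \(\mathbf{c}_{k}\) and the regularized pseudo-measurement covariances, writing the tensor dot products that define \(\mathbf{\Psi}\) and \(\mathbf{\Gamma}\) as contractions of the weighted residuals with the Hessians of \(\mathbf{f}\) and \(\mathbf{h}\).

```python
import jax
import jax.numpy as jnp

def linearize(f, h, x_nom, y, Q_inv, R_inv, lam=0.0):
    """Modified affine model at a nominal trajectory x_nom of shape (N + 1, d)."""
    d = x_nom.shape[-1]
    F = jax.vmap(jax.jacfwd(f))(x_nom[:-1])                     # F_0, ..., F_{N-1}
    b = jax.vmap(f)(x_nom[:-1]) - jnp.einsum('kij,kj->ki', F, x_nom[:-1])
    H = jax.vmap(jax.jacfwd(h))(x_nom[1:])                      # H_1, ..., H_N
    c = jax.vmap(h)(x_nom[1:]) - jnp.einsum('kij,kj->ki', H, x_nom[1:])

    # Psi_{k-1} = -sum_i [Q^{-1}(x_k - f(x_{k-1}))]_i Hess f_i(x_{k-1}),
    # Gamma_k   = -sum_i [R^{-1}(y_k - h(x_k))]_i  Hess h_i(x_k).
    Hf = jax.vmap(jax.hessian(f))(x_nom[:-1])                   # shape (N, d, d, d)
    Hh = jax.vmap(jax.hessian(h))(x_nom[1:])                    # shape (N, m, d, d)
    rq = jnp.einsum('ij,kj->ki', Q_inv, x_nom[1:] - jax.vmap(f)(x_nom[:-1]))
    rr = jnp.einsum('ij,kj->ki', R_inv, y - jax.vmap(h)(x_nom[1:]))
    Psi = -jnp.einsum('ki,kijl->kjl', rq, Hf)                   # Psi_0, ..., Psi_{N-1}
    Gamma = -jnp.einsum('ki,kijl->kjl', rr, Hh)                 # Gamma_1, ..., Gamma_N

    # Regularized pseudo-measurement covariances of Eq. (12).
    reg = lam * jnp.eye(d)
    Phi_0 = jnp.linalg.inv(Psi[0] + reg)
    Phi_N = jnp.linalg.inv(Gamma[-1] + reg)
    Phi_mid = jnp.linalg.inv(Psi[1:] + Gamma[:-1] + reg)        # Phi_1, ..., Phi_{N-1}
    return F, b, H, c, Phi_0, Phi_mid, Phi_N
```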
Given this modified affine state-space model, we can iteratively minimize the approximate objective in Equation (12) by implementing a recursive RTS smoother [5] that incorporates the pseudo measurements and dramatically lowers the computational complexity to \(\mathcal{O}(Nd^{3})\). Algorithm 1 summarizes a single iteration of a Newton iterated Kalman smoother (Newton-IKS). For more details on smoothing algorithms for affine state space models, we refer to [1].
```
1:input: Nominal trajectory \(\hat{\mathbf{x}}_{0:N}^{(i)}\), measurements \(\mathbf{y}_{1:N}\), Jacobians at nominal: \(\mathbf{F}_{0:N-1}\), \(\mathbf{H}_{1:N}\), offsets at nominal: \(\mathbf{b}_{0:N-1},\mathbf{c}_{1:N}\), covariances at nominal: \(\mathbf{Q},\mathbf{R},\mathbf{\Phi}_{1:N}\), prior at nominal: \(\tau_{0}\), \(\mathbf{\Omega}_{0}\), and optional regularization \(\lambda\)
2:output: Smoothed trajectory \(\hat{\mathbf{x}}_{0:N}\)
3:procedureNewton-IKS(\(\hat{\mathbf{x}}_{0:N}^{(i)},\lambda\)):
4: Set \(\mathbf{x}_{0}^{f}\leftarrow\tau_{0}(\lambda)\), \(\mathbf{P}_{0}^{f}\leftarrow\mathbf{\Omega}_{0}(\lambda)\)\(\triangleright\) Initialize
5:for\(k\gets 1\)to\(N\)do
6:\(\mathbf{x}_{k}^{p}\leftarrow\mathbf{F}_{k-1}\,\mathbf{x}_{k-1}^{f}+\mathbf{b}_{k-1}\)\(\triangleright\) Prediction
7:\(\mathbf{P}_{k}^{p}\leftarrow\mathbf{F}_{k-1}\mathbf{P}_{k-1}^{f}\mathbf{F}_{k-1}^{\top}+ \mathbf{Q}\)
8:\(\mu_{k}\leftarrow\mathbf{H}_{k}\,\mathbf{x}_{k}^{p}+\mathbf{c}_{k}\)
9:\(\mathbf{\Sigma}_{k}\leftarrow\mathbf{H}_{k}\,\mathbf{P}_{k}^{p}\,\mathbf{H}_{k}^{ \top}+\mathbf{R}\)
10:\(\mathbf{K}_{k}\leftarrow\mathbf{P}_{k}^{p}\,\mathbf{H}_{k}^{\top}\,\mathbf{\Sigma}_{k}^{-1}\)
11:\(\mathbf{x}_{k}^{y}\leftarrow\mathbf{x}_{k}^{p}+\mathbf{K}_{k}(\mathbf{y}_{k}-\mu_{k})\)\(\triangleright\) Measure. Update
12:\(\mathbf{P}_{k}^{y}\leftarrow\mathbf{P}_{k}^{p}-\mathbf{K}_{k}\mathbf{\Sigma}_{k}\mathbf{K}_{k}^{\top}\)
13:\(\mathbf{\Delta}_{k}\leftarrow\mathbf{P}_{k}^{y}+\mathbf{\Phi}_{k}(\lambda)\)
14:\(\mathbf{U}_{k}\leftarrow\mathbf{P}_{k}^{y}\mathbf{\Delta}_{k}^{-1}\)
15:\(\mathbf{x}_{k}^{f}\leftarrow\mathbf{x}_{k}^{y}+\mathbf{U}_{k}(\hat{\mathbf{x}}_{k} ^{(i)}-\mathbf{x}_{k}^{y})\)\(\triangleright\) Pseudo Update
16:\(\mathbf{P}_{k}^{f}\leftarrow\mathbf{P}_{k}^{y}-\mathbf{U}_{k}\mathbf{\Delta}_{k} \mathbf{U}_{k}^{\top}\)
17:endfor
18: Set \(\hat{\mathbf{x}}_{N}\leftarrow\mathbf{x}_{N}^{f}\) and \(\mathbf{P}_{N}\leftarrow\mathbf{P}_{N}^{f}\)
19:for\(k\gets N-1\)to\(0\)do\(\triangleright\) Backward pass
20:\(\mathbf{G}_{k}\leftarrow\mathbf{P}_{k}^{f}\,\mathbf{F}_{k}^{\top}\,(\mathbf{P}_{k+1}^{p})^{-1}\)
21:\(\hat{\mathbf{x}}_{k}\leftarrow\mathbf{x}_{k}^{f}+\mathbf{G}_{k}(\hat{\mathbf{x}}_{k+1}-\mathbf{x}_{k+1}^{p})\)
22:\(\mathbf{P}_{k}\leftarrow\mathbf{P}_{k}^{f}+\mathbf{G}_{k}(\mathbf{P}_{k+1}-\mathbf{P}_{k+1}^{p})\mathbf{G}_{k}^{\top}\)
23:endfor
24:return\(\hat{\mathbf{x}}_{0:N}\)
25:endprocedure
```
## III Line-Search and Trust-Region Extensions

In this section, we incorporate line-search and trust-region strategies into the recursive Newton method in order to regularize the update of an iterate \(\hat{\mathbf{x}}^{(i)}_{0:N}\) along a direction \(\mathbf{p}^{(i)}\) and guarantee a consistent reduction of the objective [13].
### _Recursive Newton Method with Line Search_
The procedure of line search assumes the existence of a direction \(\mathbf{p}^{(i)}\) at a current iterate \(\mathbf{x}^{(i)}_{0:N}\) and proposes an updated iterate \(\mathbf{x}^{(i+1)}_{0:N}\). The distance taken along the direction \(\mathbf{p}^{(i)}\) is scaled by a step size \(\alpha>0\) in a way that guarantees a reduction of the objective function
\[\hat{\mathbf{x}}^{(i+1)}_{0:N}=\hat{\mathbf{x}}^{(i)}_{0:N}+\alpha\,\mathbf{p }^{(i)}. \tag{13}\]
In our case, the Newton-IKS from Section II-B indirectly supplies the search direction of the smoothed trajectory via \(\mathbf{p}^{(i)}=\hat{\mathbf{x}}_{0:N}-\hat{\mathbf{x}}^{(i)}_{0:N}\), where \(\tilde{\mathbf{x}}_{0:N}\) is the output of Algorithm 1 given the current iterate \(\hat{\mathbf{x}}^{(i)}_{0:N}\) as a nominal trajectory.
However, the direction that the Newton-IKS delivers may not be a valid search direction as the Hessian of the objective function may not be positive-definite. To overcome this challenge, we propose a simple approach that increases the diagonal regularization factor \(\lambda\) until the _expected_ cost reduction is positive \(\tilde{L}(\hat{\mathbf{x}}^{(i)}_{0:N})-\tilde{L}(\hat{\mathbf{x}}_{0:N})>0\) where \(\tilde{L}(.)\) is the (regularized) second-order approximation in Equation (12), which corresponds to a descent direction.
Given a descent direction \(\mathbf{p}^{(i)}\), various approaches are available for choosing \(\alpha\) exactly or approximately [13]. We choose to apply a backtracking line-search to find a step size \(\alpha\) such that \(L(\hat{\mathbf{x}}^{(i)}_{0:N}+\alpha\,\mathbf{p}^{(i)})<L(\hat{\mathbf{x}}^{ (i)}_{0:N})\), where \(L(.)\) is the _original_ nonlinear objective in Equation (3). Algorithm 2 provides an overview of a Newton-IKS algorithm with an approximate line-search strategy.
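A minimal Python sketch of this backtracking loop (with placeholder names, not the authors' code) is shown below; here `L` denotes the original nonlinear objective of Equation (3) evaluated on a trajectory.

```python
def backtracking_step(L, x, p, beta=0.5, max_iters=20):
    """Shrink the step size until the original objective decreases (cf. Algorithm 2)."""
    alpha, L0 = 1.0, L(x)
    for _ in range(max_iters):
        if L(x + alpha * p) < L0:
            return x + alpha * p        # accept the damped Newton step
        alpha *= beta                   # backtrack
    return x                            # no sufficient decrease found: keep the iterate
```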
### _Recursive Newton Method with a Trust Region_
While line-search techniques optimize the step size along a pre-defined search direction, trust-region methods intervene and directly modify the search direction based on an approximate model of the nonlinear objective in a region around the current iterate. The size of this region implies the relative trust of the local approximation and simultaneously influences both the update direction and the step size.
In the case of the Newton-IKS, we implement a trust-region technique akin to a Levenberg-Marquardt algorithm [21]. This approach directly controls the regularization in Equation (12) to modify the search direction based on the quality of the local approximation. The quality is measured by the ratio of the _actual_ cost difference to the _expected_ cost difference given a nominal trajectory \(\hat{\mathbf{x}}^{(i)}_{0:N}\) and a candidate solution \(\hat{\mathbf{x}}_{0:N}\)
\[\rho=\frac{\Delta L}{\Delta\tilde{L}}=\frac{L(\hat{\mathbf{x}}^{(i)}_{0:N})-L (\hat{\mathbf{x}}_{0:N})}{\tilde{L}(\hat{\mathbf{x}}^{(i)}_{0:N})-\tilde{L}( \hat{\mathbf{x}}_{0:N})}.\]
An update is accepted when \(\rho>0\), implying that the current approximation is close to the true underlying objective around the current iterate, and the trust region is enlarged accordingly by reducing \(\lambda\). When \(\rho\leq 0\), the update is rejected, and the region is tightened by increasing \(\lambda\). Algorithm 3 provides an overview of the Newton-IKS with a trust-region strategy.
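For illustration only, the acceptance rule and regularization control of Algorithm 3 can be condensed into a small helper such as the one below; the variable names are ours.

```python
def trust_region_control(rho, expected_decrease, lam, nu):
    """Levenberg-Marquardt style regularization update (cf. Algorithm 3, lines 9-13)."""
    if rho > 0.0 and expected_decrease > 0.0:
        # Good agreement between the local model and the objective:
        # accept the candidate and enlarge the trust region.
        return True, lam * max(1.0 / 3.0, 1.0 - (2.0 * rho - 1.0) ** 3), 2.0
    # Poor agreement: reject the candidate and tighten the region.
    return False, nu * lam, 2.0 * nu
```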
**Algorithm 2** Newton-IKS with Line Search
```
1:input: Initial trajectory \(\hat{\mathbf{x}}^{(0)}_{0:N}\), measurements \(\mathbf{y}_{1:N}\), Models, Hessians, and Jacobians: \(\mathbf{f},\mathbf{h},\mathbf{F}_{\mathbf{x}},\mathbf{H}_{\mathbf{x}},\mathbf{ F}_{\mathbf{xx}},\mathbf{H}_{\mathbf{xx}}\), constants: \(\mathbf{m}_{0},\mathbf{P}_{0},\mathbf{Q},\mathbf{R}\), backtracking mult. \(\beta\in(0,1)\), backtracking iterations \(M\), and overall iterations \(N_{i}\)
2:output: The MAP trajectory \(\hat{\mathbf{x}}^{*}_{0:N}\)
3:for\(0\leq i<N_{i}\)do
4:\(\hat{\mathbf{x}}_{0:N}\leftarrow\textsc{Newton-IKS}(\hat{\mathbf{x}}^{(i)}_{0: N},\lambda=0)\)
5:if\(\tilde{L}(\hat{\mathbf{x}}^{(i)}_{0:N})-\tilde{L}(\hat{\mathbf{x}}_{0:N})>0\)then
6:\(\mathbf{p}^{(i)}\leftarrow\hat{\mathbf{x}}_{0:N}-\hat{\mathbf{x}}^{(i)}_{0:N}\)\(\triangleright\) Descent direction
7:else
8: Set \(\lambda\gets 10^{-6}\)
9:\(\hat{\mathbf{x}}_{0:N}\leftarrow\textsc{Newton-IKS}(\hat{\mathbf{x}}^{(i)}_{0: N},\lambda)\)\(\triangleright\) Regularize
10:while\(\tilde{L}(\hat{\mathbf{x}}^{(i)}_{0:N})-\tilde{L}(\hat{\mathbf{x}}_{0:N})\leq 0\)and\(\lambda\leq 10^{16}\)do
11:\(\lambda\gets 10\,\lambda\)
12:\(\hat{\mathbf{x}}_{0:N}\leftarrow\textsc{Newton-IKS}(\hat{\mathbf{x}}^{(i)}_{0: N},\lambda)\)
13:endwhile
14:\(\mathbf{p}^{(i)}\leftarrow\hat{\mathbf{x}}_{0:N}-\hat{\mathbf{x}}^{(i)}_{0:N}\)
15:endif
16: Set \(\alpha\gets 1\), \(m\gets 0\)
17:while\(L(\hat{\mathbf{x}}^{(i)}_{0:N}+\alpha\,\mathbf{p}^{(i)})\geq L(\hat{ \mathbf{x}}^{(i)}_{0:N})\)and\(m\leq M\)do
18:\(\alpha\leftarrow\beta\,\alpha\), \(m\gets m+1\)\(\triangleright\) Backtracking
19:endwhile
20:if\(L(\hat{\mathbf{x}}^{(i)}_{0:N}+\alpha\,\mathbf{p}^{(i)})<L(\hat{\mathbf{x}}^{(i)} _{0:N})\)then
21:\(\hat{\mathbf{x}}^{(i+1)}_{0:N}\leftarrow\hat{\mathbf{x}}^{(i)}_{0:N}+\alpha\,\mathbf{p}^{(i)}\)\(\triangleright\) Accept step
22:else
23:\(\hat{\mathbf{x}}^{(i+1)}_{0:N}\leftarrow\hat{\mathbf{x}}^{(i)}_{0:N}\)\(\triangleright\) Reject step
24:endif
25:endfor
```
**Algorithm 3** Newton-IKS with a Trust Region
```
1:input: Initial trajectory \(\hat{\mathbf{x}}^{(0)}_{0:N}\), measurements \(\mathbf{y}_{1:N}\), Models, Hessians, and Jacobians: \(\mathbf{f},\mathbf{h},\mathbf{F}_{\mathbf{x}},\mathbf{H}_{\mathbf{x}},\mathbf{F}_{ \mathbf{xx}},\mathbf{H}_{\mathbf{xx}}\), constants: \(\mathbf{m}_{0},\,\mathbf{P}_{0},\,\mathbf{Q},\,\mathbf{R}\), initial regularization \(\lambda_{0}\), regularization mult. \(\nu>1\), and overall iterations \(N_{i}\)
2:output: The MAP trajectory \(\hat{\mathbf{x}}^{*}_{0:N}\)
3: Set \(\lambda\leftarrow\lambda_{0}\), \(\nu\gets 2\)
4:for\(0\leq i<N_{i}\)do
5:\(\hat{\mathbf{x}}_{0:N}\leftarrow\textsc{Newton-IKS}(\hat{\mathbf{x}}^{(i)}_{0: N},\lambda)\)
6:\(\Delta L\gets L(\hat{\mathbf{x}}^{(i)}_{0:N})-L(\hat{\mathbf{x}}_{0:N})\)\(\triangleright\) Actual cost diff.
7:\(\Delta\tilde{L}\leftarrow\tilde{L}(\hat{\mathbf{x}}^{(i)}_{0:N})-\tilde{L}(\hat{ \mathbf{x}}_{0:N})\)\(\triangleright\) Expected cost diff.
8:\(\rho\leftarrow\Delta L/\Delta\tilde{L}\)
9:if\(\rho>0\)and\(\Delta\tilde{L}>0\)then
10:\(\lambda\leftarrow\lambda\max\{\frac{1}{3},1-(2\,\rho-1)^{3}\}\), \(\nu\gets 2\)
11:\(\hat{\mathbf{x}}^{(i+1)}_{0:N}\leftarrow\hat{\mathbf{x}}_{0:N}\)\(\triangleright\) Accept step
12:else
13:\(\lambda\leftarrow\nu\,\lambda\), \(\nu\gets 2\,\nu\)
14:\(\hat{\mathbf{x}}^{(i+1)}_{0:N}\leftarrow\hat{\mathbf{x}}^{(i)}_{0:N}\)\(\triangleright\) Reject step
15:endif
16:endfor
```
## IV Experimental Results
In this section, we assess the performance of the proposed approaches using a simulated coordinated turn model example with bearings-only measurements [12, 15, 19]. The system has a 5-dimensional state vector \(\mathbf{x}=[p_{x},\,p_{y},\,\dot{p}_{x},\,\dot{p}_{y},\,\omega]^{\top}\), which describes the \(x-y\) position, the \(x-y\) velocity, and the turn rate of the target. The bearing is measured by two sensors located at known positions. Figure 1 depicts an example true trajectory, an estimated trajectory using a trust-region Newton-IKS, and the locations of the two sensors.
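For reference, one possible Python/JAX sketch of the model functions of this experiment is given below; the sampling period `dt` and the sensor positions `s1`, `s2` are placeholder values rather than the ones used in the paper, and the transition map is the standard coordinated turn discretization (not guarded against \(\omega\to 0\)).

```python
import jax.numpy as jnp

dt = 0.1                                          # placeholder sampling period
s1 = jnp.array([-1.5, 0.5])                       # placeholder sensor positions
s2 = jnp.array([1.0, 1.0])

def f(x):
    """Coordinated turn transition for x = [p_x, p_y, pdot_x, pdot_y, omega]."""
    px, py, vx, vy, w = x
    sw, cw = jnp.sin(w * dt), jnp.cos(w * dt)
    return jnp.array([px + vx * sw / w - vy * (1.0 - cw) / w,
                      py + vx * (1.0 - cw) / w + vy * sw / w,
                      vx * cw - vy * sw,
                      vx * sw + vy * cw,
                      w])

def h(x):
    """Bearings-only measurements from the two sensors to the target position."""
    return jnp.array([jnp.arctan2(x[1] - s1[1], x[0] - s1[0]),
                      jnp.arctan2(x[1] - s2[1], x[0] - s2[0])])
```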
In addition to our recursive algorithms, we implement the equivalent batch optimization techniques as presented in [13] and use the same hyperparameters in the line-search and trust-region variants. We focus on comparing the computational complexity of the recursive and batch techniques. We rely on JAX [22] for automatic differentiation.
In this study, we investigate trajectories of different lengths, ranging from \(N=100\) to \(N=1500\), and report the average runtime (over \(20\) runs) of running \(30\) overall iterations of the iterated batch and recursive Newton methods. The average runtime as a function of trajectory length is illustrated in Figure 2. As expected, the computational performance of the recursive Newton algorithms is superior to their batch counterpart in terms of runtime. An open-source implementation is available at [https://github.com/hanyas/second-order-smoothers](https://github.com/hanyas/second-order-smoothers).
## V Conclusion
We presented a computationally efficient realization of Newton's method for smoothing in nonlinear state-space models with additive noise. We leveraged automatic differentiation tools to compute the required first- and second-order derivatives with minimal effort and formulated a corresponding affine state-space model with augmented pseudo measurements. We showed that this modified SSM form enables the implementation of a recursive, computationally favorable Kalman smoothing algorithm equivalent to a Newton step. Furthermore, we proposed line-search and trust-region extensions of the proposed method to ensure convergence to a local optimum. Finally, we empirically validated the efficiency of our recursive Newton method against standard batch solutions.
|
2306.11707 | Hexagonal circular 3-webs with polar curves of degree three | The paper reports the progress with the classical problem, posed by Blaschke
and Bol in 1938. We present new examples and new classifications of natural
classes of hexagonal circular 3-webs. The main result is the classification of
hexagonal circular 3-webs with polar curves of degree 3. | Sergey I. Agafonov | 2023-06-20T17:36:40Z | http://arxiv.org/abs/2306.11707v1 | # Hexagonal circular 3-webs with polar curves of degree three
###### Abstract
The paper reports the progress with the classical problem, posed by Blaschke and Bol in 1938. We present new examples and new classifications of natural classes of hexagonal circular 3-webs. The main result is the classification of hexagonal circular 3-webs with polar curves of degree 3.
MSC: 53A60.
**Keywords:** circular hexagonal 3-webs
## 1 Introduction
The problem of describing hexagonal 3-webs formed by circles in the plane appeared in the first monograph on web theory, published by Blaschke and Bol in 1938 (see [BB-38], p. 31). The authors presented an example with 3 elliptic pencils of circles, each pair of pencils sharing a common vertex, and observed that one can construct hexagonal circular 3-webs from hexagonal linear 3-webs, completely described by Graf and Sauer [GS-24] as being formed by the tangents to a fixed curve of third class. The construction involves a central projection from a plane to a unit sphere followed by stereographic projection to a plane. The corresponding circular 3-webs were described earlier by Volk [V-29] and Strubecker [S-32].
Stereographic projection puts the problem into a natural framework of Lie sphere geometry: instead of planar circular webs we study circular webs on the unit sphere, thus treating circles and straight lines on equal footing. Lie sphere geometry assigns points outside the unit sphere to circles on this sphere: the assigned point is the polar point of the plane that cuts the circle on the sphere. This sphere is also called the _Darboux quadric_. Thus any circular 3-web on the unit sphere determines locally 3 curve arcs outside the Darboux quadric, one arc per web foliation. Globally these arcs may glue together into one curve. In what follows we call this set of polar points a _polar curve_ of the web. For example, the polar curve of the hexagonal circular 3-web obtained from a linear 3-web is a planar cubic, possibly reducible. The polar curve of the cited example from [BB-38] splits into 3 non-coplanar lines.
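To make this correspondence concrete, the small Python sketch below computes the polar point of (the spherical image of) a planar circle under one common convention, namely stereographic projection from the north pole of the unit sphere; the formula is a routine computation under this convention and is not taken from the paper.

```python
def polar_point(a, b, r):
    """Polar point, with respect to the unit (Darboux) sphere, of the plane that cuts out
    the inverse stereographic image of the planar circle with center (a, b) and radius r.
    Writing the circle as x^2 + y^2 - 2ax - 2by + c = 0 with c = a^2 + b^2 - r^2, its image
    lies in the plane a*u1 + b*u2 + (c - 1)/2 * u3 = (c + 1)/2, whose pole is returned below.
    The pole is at infinity when c = -1, i.e. when the image is a great circle."""
    c = a * a + b * b - r * r
    return (2 * a / (c + 1), 2 * b / (c + 1), (c - 1) / (c + 1))
```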
In the same year as the book [BB-38] appeared, Wunderlich published a new remarkable example of a hexagonal circular 3-web. Its polar curve splits into 3 conics lying in 3 different planes. Since through a point \(p\) on the unit sphere pass the circles whose polar points are the intersections of the web polar curve with the plane tangent to the unit sphere at \(p\), the Wunderlich web is actually a 6-web, containing 8 hexagonal 3-subwebs.
Wunderlich also gave a construction of hexagonal 3-webs whose polar curve splits into 3 non-coplanar lines, two of them being dual with respect to the Darboux quadric and the third joining them. These webs were later rediscovered by other authors.
Further, he presented the following way to construct hexagonal 3-webs: for any one-parameter group acting in the plane, choose 2 transversally intersecting curve arcs that are also transversal to the group orbits; acting on the arcs by the group, one gets 2 foliations; the third is composed of the group orbits. These 3 foliations compose a (local) hexagonal 3-web. Choosing a one-parameter group of translations, dilatations, or rotations, and taking two intersecting circles (a straight line counts as a circle), we get circular hexagonal 3-webs.
Blaschke was well aware of the difficulty of the posed problem and, in his last book on the web geometry [B-55], discussed the simpler problem of classifying hexagonal circular 3-webs whose polar curve splits into 3 non-coplanar lines. Note that to a line corresponds a pencil of circles that is _hyperbolic_ if the line spears the Darboux quadric, _elliptic_ if the line completely misses the Darboux quadric, or _parabolic_ if the line touches the Darboux quadric.
By the year 1977, the list of 6 types (one from [BB-38] and five indicated in [W-38]) of circular hexagonal 3-webs whose polar curve splits into 3 non-coplanar lines was completed by Erdogan [E-74] and Lazareva [L-77]. The first attempt to prove that the list is actually complete was published in 1989 by Erdogan [E-89]. Based on a direct computational approach, it did not provide the crucial computation: in fact, a modern computer system for symbolic computations shows that there must be a mistake in the proof presented in [E-89] (see Concluding remarks for further detail).
Erdogan's claim was proved only in 2005 by Shelekhov [S-05]. His insight was to look into the singular set of the webs: defined globally, the webs under study inevitably have singularities. Shelekhov considered the simplest possible singularities, where two of the three circular foliations are tangent. It turns out that hexagonality imposes a strong restriction: locally, such a singular set is either a circle arc of the third foliation or the common circle arc of the first two. The restriction was rigid enough to obtain all the types on the list.
Five new types of hexagonal circular 3-webs were presented by Nilov [N-14] in 2014. Polar curves for four of them split into a line and a conic. The fifth example may be viewed as a 5-web whose polar curve is a union of a line and two conics. Taking the line and two arcs on different conics as the polar curve, one gets a hexagonal 3-subweb.
One cannot help observing that the polar curves of all the known examples are algebraic. Motivated also by the dual reformulation of the Graf and Sauer Theorem, we consider the following natural class of 3-webs: hexagonal circular 3-webs with polar curve of degree three. The main result of the paper is the complete classification of such webs.
The case of a planar polar curve follows immediately from the Graf and Sauer Theorem: the 3 points on the polar curve corresponding to the 3 circles through a point \(p\) on the sphere are the ones where the plane tangent to the sphere at \(p\) meets the polar curve. This plane cuts the plane of the polar curve along a line. On the polar curve plane we get the configuration dual to the Graf and Sauer Theorem.
The case of a non-planar set of 3 lines was finally settled by Shelekhov [S-05].
We prove that there is no hexagonal circular 3-web whose polar curve is a rational normal
curve and obtain a classification of 3-webs whose planar polar curve splits into a line and a smooth conic. Up to Mobius transformation, there are 15 types, most of them depending on one parameter. Four of the five types in Nilov's paper [N-14] are webs from this list, namely, of the types 6, 10, 11 and 15, presented in Section 5. (In fact, Nilov has found only one Mobius orbit from a one-parametric family of orbits of our type 6.)
Another natural class that we study in this paper is the set of hexagonal circular 3-webs symmetric under the action of a one-parameter subgroup of the Mobius group. We also give a complete classification of such webs.
To select candidates for hexagonal webs we further exploit the above-mentioned observation of Shelekhov on the simplest singularities of hexagonal 3-webs. The proof of the observed property in [S-05], based on considering the normal form of the web function, is not complete: this normal form often does not exist at singular points (see Concluding remarks for more detail). We make precise the ideas about the type of singularities and then prove the key singularity property.
For completeness, we also present the classification of hexagonal webs with 3 non-coplanar polar lines. The proof mainly follows the line taken by Shelekhov in [S-05].
## 2 Hexagonal 3-webs, Blaschke curvature, singularities.
A planar 3-web \({\cal W}_{3}\) in a planar domain is a superposition of 3 foliations \({\cal F}_{i},\) which may be given by integral curves of three ODEs
\[\sigma_{1}=0,\ \ \ \ \sigma_{2}=0,\ \ \ \ \sigma_{3}=0,\]
where \(\sigma_{i}\) are differential one-forms. At non-singular points, where the kernels of these forms are pairwise transverse, we normalize the forms so that \(\sigma_{1}+\sigma_{2}+\sigma_{3}=0.\) The _connection form_ of the web \({\cal W}_{3}\) is a one-form \(\gamma\) determined by the conditions
\[d\sigma_{i}+\gamma\wedge\sigma_{i}=0,\ \ \ \ i=1,2,3.\]
The connection form depends on the normalization of the forms \(\sigma_{i}\); the _Blaschke curvature_ \(d\gamma\) does not.
**Definition 1**: _A 3-web is hexagonal if for any non-singular point there are a neighbourhood and a local diffeomorphism sending the web leaves of this neighbourhood to 3 families of parallel line segments._
Topologically, hexagonality means the following incidence property that has given its name to the notion: for any point \(m,\) each sufficiently small curvilinear triangle with the vertex \(m\) and sides formed by the web leaves may be completed to a curvilinear hexagon, whose sides are web leaves and whose "large" diagonals are the web leaves meeting at \(m\) (see the gallery of pictures illustrating hexagonal webs in the next section). Computationally, hexagonality amounts to vanishing of the Blaschke curvature [BB-38].
Up to a suitable affine transformation, the forms \(\sigma_{i}\) may be normalized as follows:
\[\sigma_{1}=(Q-R)(dy-Pdx),\ \ \ \sigma_{2}=(R-P)(dy-Qdx),\ \ \ \sigma_{3}=(P-Q)(dy-Rdx),\]
where \(P(x,y),Q(x,y),R(x,y)\) are the slopes of the tangent lines to the web leaves at \((x,y).\) Vanishing of the curvature reads
\[(R-Q)[P_{xx}+(Q+R)P_{xy}+QRP_{yy}]+(P-R)[Q_{xx}+(P+R)Q_{xy}+PRQ_{yy}]+\]
\[(Q-P)[R_{xx}+(P+Q)R_{xy}+PQR_{yy}]+\]
\[\frac{(Q-R)(2P-Q-R)[P_{x}^{2}+(Q+R)P_{x}P_{y}+QRP_{y}^{2}]}{(P-Q)(P-R)}+\frac{( R-P)(2Q-P-R)[Q_{x}^{2}+(P+R)Q_{x}Q_{y}+PRQ_{y}^{2}]}{(Q-R)(Q-P)}+\]
\[\frac{(P-Q)(2R-P-Q)[R_{x}^{2}+(P+Q)R_{x}R_{y}+PQR_{y}^{2}]}{(R-Q)(R-P)}+\]
\[\frac{(2R-P-Q)P_{x}Q_{x}}{P-Q}+\frac{(2P-Q-R)Q_{x}R_{x}}{Q-R}+\frac{(2Q-R-P)R_ {x}P_{x}}{R-P}+\]
\[\frac{(R^{2}-PQ)[P_{x}Q_{y}+P_{y}Q_{x}]}{(P-Q)}+\frac{(P^{2}-QR)[Q_{x}R_{y}+Q _{y}R_{x}]}{Q-R}+\frac{(Q^{2}-PR)[R_{x}P_{y}+R_{y}P_{x}]}{R-P}+\]
\[\frac{(2PQR-(P+Q)R^{2})P_{y}Q_{y}}{Q-P}+\frac{(2PQR-(Q+R)P^{2})Q_{y}R_{y}}{R-Q }+\frac{(2PQR-(P+R)Q^{2})R_{y}P_{y}}{P-R}=0.\]
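The vanishing of this expression can also be checked symbolically straight from the definition, without expanding the formula above; the following is a minimal sketch of ours (not the authors' code), which recovers the connection form from \(d\sigma_{i}+\gamma\wedge\sigma_{i}=0\) and evaluates \(d\gamma\):

```python
# Minimal sketch (ours): Blaschke curvature of a planar 3-web given by slope functions P, Q, R.
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

def blaschke_curvature(P, Q, R):
    # Normalized one-forms sigma_i = a_i dx + b_i dy with sigma_1 + sigma_2 + sigma_3 = 0.
    a = [-P*(Q - R), -Q*(R - P), -R*(P - Q)]
    b = [Q - R, R - P, P - Q]
    # Connection form gamma = u dx + v dy from d(sigma_i) + gamma ^ sigma_i = 0, i = 1, 2
    # (the third equation follows from the normalization).
    eqs = [sp.diff(b[i], x) - sp.diff(a[i], y) + u*b[i] - v*a[i] for i in range(2)]
    gamma = sp.solve(eqs, [u, v], dict=True)[0]
    # Blaschke curvature d(gamma) = (v_x - u_y) dx ^ dy.
    return sp.simplify(sp.diff(gamma[v], x) - sp.diff(gamma[u], y))

# Three pencils of lines through (0,0), (1,0), (0,1): a classical hexagonal (linear) web.
print(blaschke_curvature(y/x, y/(x - 1), (y - 1)/x))   # prints 0
```

For the three pencils of lines in the last call, a classical hexagonal web, the printed curvature is \(0\), in agreement with the criterion above.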
If only one slope, say \(R\), is given as an explicit function of \(x,y\) and \(P,Q\) are roots of a quadratic equation \(P^{2}+AP+B=0\), then one finds the first derivatives of \(P,Q\) by differentiating the Vieta relations
\[P+Q=-A,\quad PQ=B, \tag{1}\]
as functions of \(P,Q\) and the first derivatives of \(A,B\). Differentiating these expressions, one gets also the second derivatives. Finally, excluding \(P\) and \(Q\) with the help of (1), one can rewrite the hexagonality condition in terms of \(A,B,R\) and their derivatives. The result is presented in the Appendix.
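The first step of this elimination is routine implicit differentiation; a small sympy sketch of ours (illustrative only, not the code behind the Appendix formula):

```python
# Sketch (ours): first derivatives of the slopes P, Q defined implicitly by the Vieta relations
# P + Q = -A(x, y), P*Q = B(x, y).
import sympy as sp

x, y, P, Q, Px, Qx = sp.symbols('x y P Q Px Qx')
A = sp.Function('A')(x, y)
B = sp.Function('B')(x, y)

# Differentiate the Vieta relations with respect to x, treating P, Q as functions of (x, y).
eqs = [Px + Qx + sp.diff(A, x), Px*Q + P*Qx - sp.diff(B, x)]
sol = sp.solve(eqs, [Px, Qx], dict=True)[0]
print(sp.simplify(sol[Px]))   # (P*A_x + B_x)/(Q - P); y-derivatives and second derivatives analogously
```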
The webs considered later will inevitably have singularities: the kernels of the forms \(\sigma_{i}\) can fail to be pairwise transverse or the forms can vanish at some points. We call a singular 3-web hexagonal if its Blaschke curvature vanishes identically at regular points. The simplest type of singularities of hexagonal 3-webs has the following remarkable property, first observed by Shelekhov [S-05].
**Lemma 1**: _Suppose that a hexagonal 3-web, defined by three smooth (possibly singular) direction fields \(\xi_{1},\xi_{2},\xi_{3}\), has a singular point \(p_{0}\) such that_
1. _all_ \(\xi_{i}\) _are well defined at_ \(p_{0}\)_,_
2. \(\xi_{1}\) _and_ \(\xi_{2}\) _are transverse at_ \(p_{0}\)_,_
3. \(\xi_{1}=\xi_{3}\) _at_ \(p_{0}\)_,_
_then either the leaves of \({\cal F}_{1}\) and \({\cal F}_{3}\) through \(p_{0}\) coincide or \(\xi_{1}=\xi_{3}\) along the leaf of \({\cal F}_{2}\) through \(p_{0}\)._
_Proof:_ The property is a consequence of separation of variables for hexagonal webs. The second condition implies that we can rectify \(\xi_{1}\) and \(\xi_{2}\), i.e. choose some local coordinates \(u,v\) so that \(\xi_{1}=\partial_{v}\) and \(\xi_{2}=\partial_{u}\). Then \(\xi_{3}=-f(u,v)\partial_{u}+\partial_{v}\) with \(f(u_{0},v_{0})=0\), where \(p_{0}=(u_{0},v_{0}).\) Now the hexagonality amounts to \(\partial_{v}\left(\frac{\partial_{u}f}{f}\right)=0\) hence \(f(u,v)=a(u)b(v)\). If \(a(u_{0})=0\) then the integral curves of \(\xi_{1}\) and \(\xi_{3}\) passing through \(p_{0}\) coincide. If \(b(v_{0})=0\) then the leaves of \({\cal F}_{1}\) and \({\cal F}_{3}\) are tangent along the line \(v=v_{0}\), which is the leaf of \({\cal F}_{2}\). \(\Box\)
## 3 Projective model of Mobius geometry
Following Blaschke [B-29], we call the subgroup \(PSO(3,1)\) of projective transformations of \(\mathbb{RP}^{3}\), leaving invariant the quadric
\[X^{2}+Y^{2}+Z^{2}-U^{2}=0,\]
the Mobius group. For reference, we present here the infinitesimal generators of the Mobius group in homogeneous coordinates \([X:Y:Z:U]\) in \(\mathbb{P}^{3}\), affine coordinates \(x=\frac{X}{U},\ y=\frac{Y}{U},\ z=\frac{Z}{U}\) in \(\mathbb{R}^{3}\), and Cartesian coordinates \((\bar{x},\bar{y})\) in \(\mathbb{R}^{2}\) related to points \((x,y,z)\) on the unit sphere via stereographic projection:
\[x=\frac{2\bar{x}}{1+\bar{x}^{2}+\bar{y}^{2}},\quad y=\frac{2\bar{y}}{1+\bar{x} ^{2}+\bar{y}^{2}},\quad z=\frac{1-\bar{x}^{2}-\bar{y}^{2}}{1+\bar{x}^{2}+\bar{ y}^{2}}.\]
There are 3 rotations around the affine axes:
\[R_{z}=Y\partial_{X}-X\partial_{Y}=y\partial_{x}-x\partial_{y}=\bar{y}\partial _{\bar{x}}-\bar{x}\partial_{\bar{y}},\]
\[R_{x}=Z\partial_{Y}-Y\partial_{Z}=z\partial_{y}-y\partial_{z}=\bar{x}\bar{y} \partial_{\bar{x}}+\tfrac{1}{2}(1-\bar{x}^{2}+\bar{y}^{2})\partial_{\bar{y}},\]
\[R_{y}=X\partial_{Z}-Z\partial_{X}=x\partial_{z}-z\partial_{x}=-\tfrac{1}{2}(1+ \bar{x}^{2}-\bar{y}^{2})\partial_{\bar{x}}-\bar{x}\bar{y}\partial_{\bar{y}},\]
and 3 boosts (or "hyperbolic rotations"):
\[B_{x}=U\partial_{X}+X\partial_{U}=\partial_{x}-x(x\partial_{x}+y\partial_{y}+ z\partial_{z})=\tfrac{1}{2}(1-\bar{x}^{2}+\bar{y}^{2})\partial_{\bar{x}}-\bar{x} \bar{y}\partial_{\bar{y}},\]
\[B_{y}=U\partial_{Y}+Y\partial_{U}=\partial_{y}-y(x\partial_{x}+y\partial_{y}+ z\partial_{z})=-\bar{x}\bar{y}\partial_{\bar{x}}+\tfrac{1}{2}(1+\bar{x}^{2}- \bar{y}^{2})\partial_{\bar{y}},\]
\[B_{z}=U\partial_{Z}+Z\partial_{U}=\partial_{z}-z(x\partial_{x}+y\partial_{y}+ z\partial_{z})=-\bar{x}\partial_{\bar{x}}-\bar{y}\partial_{\bar{y}}.\]
The identity component of \(PSO(3,1)\) is well known to be isomorphic to the group \(PSL_{2}(\mathbb{C})\), the isomorphism being given by the action \(A(V)=AVA^{*}\) of \(A\in SL_{2}(\mathbb{C})\) on the vector space of matrices
\[V=\left(\begin{array}{cc}X+U&Y+iZ\\ Y-iZ&U-X\end{array}\right)\]
with real \(X,Y,Z,U\). This action preserves the determinant of \(V\), which is \(U^{2}-X^{2}-Y^{2}-Z^{2}\). Under this isomorphism, the matrices representing the boosts are obtained from those representing the rotations via multiplication by \(i\):
\[B_{x}=iR_{x},\ \ B_{y}=iR_{y},\quad B_{z}=iR_{z}.\]
Two points \(p_{1}=[X_{1}:Y_{1}:Z_{1}:U_{1}]\) and \(p_{2}=[X_{2}:Y_{2}:Z_{2}:U_{2}]\) in \(\mathbb{RP}^{3}\) determine a line with the Plucker coordinates
\[a:=X_{1}U_{2}-X_{2}U_{1},\quad\ b:=Y_{1}U_{2}-Y_{2}U_{1},\quad\ c:=Z_{1}U_{2}- Z_{2}U_{1}\]
\[f:=Y_{1}Z_{2}-Y_{2}Z_{1},\quad\ g:=Z_{1}X_{2}-Z_{2}X_{1},\quad\ h:=X_{1}Y_{2}- X_{2}Y_{1}.\]
By direct computation one proves the following fact.
**Lemma 2**: _All points of a line with Plucker coordinates \([a:b:c:f:g:h]\) are stable with respect to subgroup with the infinitesimal generator \(aR_{x}+bR_{y}+cR_{z}+fB_{x}+gB_{y}+hB_{z}\)._
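The computation behind Lemma 2 is short enough to verify symbolically; in the sketch below (ours, not from the paper) the generators are written as \(4\times 4\) matrices acting on columns \((X,Y,Z,U)\), and the vector field of the generator is seen to vanish at both spanning points, hence on the whole line:

```python
# Symbolic check of Lemma 2 (a sketch, not from the paper).
import sympy as sp

def gen(entries):
    M = sp.zeros(4, 4)
    for (i, j), val in entries.items():
        M[i, j] = val
    return M

# Generators as matrices acting on columns (X, Y, Z, U).
Rx = gen({(1, 2): 1, (2, 1): -1})   # Z d/dY - Y d/dZ
Ry = gen({(2, 0): 1, (0, 2): -1})   # X d/dZ - Z d/dX
Rz = gen({(0, 1): 1, (1, 0): -1})   # Y d/dX - X d/dY
Bx = gen({(0, 3): 1, (3, 0): 1})    # U d/dX + X d/dU
By = gen({(1, 3): 1, (3, 1): 1})
Bz = gen({(2, 3): 1, (3, 2): 1})

X1, Y1, Z1, U1, X2, Y2, Z2, U2 = sp.symbols('X1 Y1 Z1 U1 X2 Y2 Z2 U2')
p1 = sp.Matrix([X1, Y1, Z1, U1])
p2 = sp.Matrix([X2, Y2, Z2, U2])

# Plucker coordinates of the line through p1 and p2.
a, b, c = X1*U2 - X2*U1, Y1*U2 - Y2*U1, Z1*U2 - Z2*U1
f, g, h = Y1*Z2 - Y2*Z1, Z1*X2 - Z2*X1, X1*Y2 - X2*Y1

M = a*Rx + b*Ry + c*Rz + f*Bx + g*By + h*Bz
print((M*p1).applyfunc(sp.expand), (M*p2).applyfunc(sp.expand))   # both are zero vectors
```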
Observe that the line dual to \([a:b:c:f:g:h]\) is the one with coordinates \([-f:-g:-h:a:b:c]\), which corresponds to multiplication by \(i\) of the corresponding matrix representation of the generator.
A line in \(\mathbb{RP}^{3}\) can be hyperbolic, elliptic or parabolic with respect to the Darboux quadric. Considering simple representatives of these classes one sees that
1) for a hyperbolic line, the corresponding operator "rotates" the dual line (consider \(R_{z}\)), thus not having extra stable points in \(\mathbb{RP}^{3}\),
2) for an elliptic line, the corresponding operator hyperbolically "rotates" the dual line (consider \(B_{z}\)) and leaves invariant two extra points \([0:0:\pm 1:1]\) on the dual line,
3) for a parabolic line, the corresponding operator moves all points on the dual line (consider \(\partial_{y}=R_{x}+B_{y}\)). Thus the converse to the Lemma is not true in general.
In the following Proposition we summarize further properties of the above correspondence.
**Proposition 3**: _Let \(\xi=aR_{x}+bR_{y}+cR_{z}+fB_{x}+gB_{y}+hB_{z}\) be an infinitesimal operator of the Mobius group._
1. _The corresponding action of the one-parameter group is not loxodromic and therefore Mobius equivalent (i.e. conjugated) either to rotation, or dilation, or translation of the plane_ \((\bar{x},\bar{y})\) _if and only if_ \[af+bg+ch=0.\] (2)
2. _This action is Mobius equivalent to rotation if and only if (_2_) and_ \(a^{2}+b^{2}+c^{2}>f^{2}+g^{2}+h^{2}\) _are true, the line with Plucker coordinates_ \([a:b:c:f:g:h]\) _being the set of polar points of the circular orbits._
3. _This action is Mobius equivalent to dilatation if and only if (_2_) and_ \(a^{2}+b^{2}+c^{2}<f^{2}+g^{2}+h^{2}\) _are true, the line with Plucker coordinates_ \([a:b:c:f:g:h]\) _being the set of polar points of the orbits._
4. _The action is Mobius equivalent to translation if and only if (_2_) and_ \[a^{2}+b^{2}+c^{2}=f^{2}+g^{2}+h^{2}\] (3) _are true, the line with Plucker coordinates_ \([a:b:c:f:g:h]\) _being the set of polar points of the orbits._
_Proof:_ Consider the characteristic polynomial for \(\xi\)
\[\Psi(\lambda)=\lambda^{2}+\frac{1}{4}(a^{2}+b^{2}+c^{2}-f^{2}-g^{2}-h^{2})+ \frac{i}{2}(af+bg+ch)\]
and the matrices for non-loxodromic Mobius representatives \(R_{z}\), \(B_{z}\) and \(\partial_{y}=R_{x}+B_{y}\). \(\Box\)
## 4 Polar curve splits into 3 non-coplanar lines
Recall that _limit circles_ of a hyperbolic pencil correspond, under the stereographic projection, to the intersection points of the hyperbolic line with the Darboux quadric. _Vertexes_ of an elliptic pencil are the two points common to all the circles of the pencil; the vertexes correspond to the intersection of the Darboux quadric with the line dual to the polar elliptic line. Considering the parabolic pencil as a limit case of the elliptic one, one calls the point corresponding to the point of tangency of the Darboux quadric with the parabolic line also the _vertex_. The vertex of a parabolic pencil is the common point of all circles of this parabolic pencil.
The reader may visualize the pencil fixed by a line \(L\) by thinking of its circles as cut on the Darboux quadric by the pencil of planes containing the dual line \(L^{*}\).
Let us list hexagonal 3-webs with non-planar polar lines. There are 9 types of such webs up to Mobius transformation.
Take 3 polar lines intersecting inside the Darboux quadric so that each of the lines contains the point dual to the plane of the other two polar lines, i.e. each pencil has a circle orthogonal to all the circles of the other two pencils. A representative of this web orbit, having one limit circle at infinity, is shown in Figure 1 on the left. This type was described by Lazareva [L-77].
Replacing two pencils by their orthogonal ones, we get the web in the center of Figure 1. In the projective model, we replace two polar lines by their dual ones. This web was also described by Lazareva [L-77].
Wunderlich [W-38] mentioned the following construction, used later also by Balabanova and Erdogan, to produce hexagonal 3-webs: take two dual polar lines and supplement them by a third line intersecting that dual pair. There are four webs in the list obtained in this way (see also [B-73]): with two hyperbolic and one elliptic pencils on the right of Figure 1; with one hyperbolic and two elliptic pencils on the left of Figure 2; with one hyperbolic, one elliptic, and one parabolic pencil in the center of Figure 2; and with one hyperbolic and two parabolic pencils on the right of Figure 2.
Figure 1: Hexagonal circular 3-webs. 3 hyperbolic pencils (left), 1 hyperbolic and 2 elliptic pencils (center), 1 elliptic and 2 hyperbolic pencils (right).
Another web with two elliptic and one hyperbolic pencils is depicted on the left of Figure 3. Its projective model has two elliptic polar lines, lying in a plane tangent to the Darboux quadric at a point, and a hyperbolic line passing through the points (different from the above tangency point), where duals to elliptic lines intersect the Darboux quadric. Its elliptic pencils share one common vertex at infinity, the other two vertices are also the limit circles of the hyperbolic pencil. This web was described by Erdogan [E-74].
In the center of Figure 3, there is a web with three elliptic pencils, the vertices of each pencil being two of three fixed points. It is historically the first hexagonal circular 3-web described in the literature [BB-38]. The chosen representative has a vertex at infinity.
Erdogan [E-74] found a web with one hyperbolic, one elliptic and one parabolic pencil, arranged so that the vertex \(P\) of the parabolic pencil coincides with one vertex of the elliptic pencil, while the other vertex \(E\) of the elliptic pencil coincides with one of the limit circles of the hyperbolic pencil, the common circle of the elliptic and the parabolic pencils being orthogonal to the circle passing through the second limit circle of the hyperbolic pencil and the points \(P\), \(E\). On the right of Figure 3 is a Mobius representative of this type of web with \(P\) at infinity. In the projective model, we have 3 pairwise distinct points on the Darboux quadric: \(E,P\) and \(H\). The hyperbolic polar line \(L_{h}\) spears the quadric at \(E\) and \(H\), the parabolic polar \(L_{p}\) touches the quadric at \(P\) and intersects \(L_{h}\), and the elliptic polar \(L_{e}\) is dual to the line through \(P\) and \(E\) (and intersects \(L_{p}\)).
Figure 2: Hexagonal circular 3-webs. 1 hyperbolic and 2 elliptic pencils (left), 1 parabolic, 1 elliptic, and 1 hyperbolic pencil (center), 1 hyperbolic and 2 parabolic pencils (right).
Figure 3: Hexagonal circular 3-webs. 1 hyperbolic and 2 elliptic pencils (left), 3 elliptic pencils (center), 1 hyperbolic, 1 elliptic and 1 parabolic pencil (right).
Finally, we present a family of hexagonal 3-webs formed by 3 pencils, whose Mobius orbits are parameterized by one parameter. Two pencils are parabolic with distinct vertexes, and the third is elliptic, the dual of its polar line meeting the Darboux quadric at these vertexes. A representative of the family is shown in Figure 4, the vertexes being the origin and the infinite point. In this normalization one can fix the direction of one parabolic line; the direction of the other is arbitrary. Any web in this normalization is symmetric by the dilatation \(x\partial_{x}+y\partial_{y}\).
It is surprising not only that the "largest" family was explicitly described only in 1977 by Lazareva [L-77]; even more amazing is that the family falls within the general construction (see Introduction), which seems to have appeared first in the paper of Wunderlich [W-38] in 1938!
To prove that there are no other classes, we will make essential use of Lemma 1. The singularity of the described type occurs when two of the three lines, tangent to the Darboux quadric at a point and each meeting its own polar line, coincide, and the point is not a vertex or limit circle of any pencil.
**Proposition 4**: _Consider the family of lines such that_
_1) they meet two fixed lines \(L_{1}\) and \(L_{2}\) and_
_2) they are tangent to the Darboux quadric._
_If the tangent points are on a circle then either \(L_{1}\) intersects \(L_{2}\), or both \(L_{1}\) and \(L_{2}\) are tangent to the Darboux quadric, or one meets the dual of the other at some point on the Darboux quadric._
_Moreover, in the case of skew \(L_{1},L_{2}\) tangent to the Darboux quadric at two points \(p_{1},p_{2}\), the curve of touching points splits into 2 circles, their planes containing the line \(p_{1}p_{2}\) and bisecting the angles between two planes \(P_{1},P_{2}\), where \(P_{i}\) is the plane through \(L_{i}\) and \(p_{1}p_{2}\)._
_Proof:_ Let \([a:b:c:f:g:h]\) be Plucker coordinates of a line \(L\) touching the Darboux quadric. Then, by Proposition 3, they satisfy equations (2) and (3). Therefore \(a^{2}+b^{2}+c^{2}\neq 0\) and we can normalize these coordinates to \(a^{2}+b^{2}+c^{2}=f^{2}+g^{2}+h^{2}=1\). One easily calculates the point \(p=(x,y,z)=(cg-bh,ah-cf,bf-ag)\) where the line \(L\) touches the quadric. Due to normalization, one can rewrite this as
\[a=hy-gz,\quad b=fz-hx,\quad c=gx-fy. \tag{4}\]
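A short verification of these two formulas (ours, for the reader's convenience): write \(\mathbf{r}=(a,b,c)\) and \(\mathbf{s}=(f,g,h)\); with the chosen normalization, conditions (2) and (3) read \(\mathbf{r}\cdot\mathbf{s}=0\), \(|\mathbf{r}|=|\mathbf{s}|=1\), and the tangency point given above is \(p=\mathbf{s}\times\mathbf{r}\). Hence
\[|p|^{2}=|\mathbf{r}|^{2}|\mathbf{s}|^{2}-(\mathbf{r}\cdot\mathbf{s})^{2}=1,\qquad p\times\mathbf{s}=-\mathbf{s}\times(\mathbf{s}\times\mathbf{r})=|\mathbf{s}|^{2}\mathbf{r}-(\mathbf{s}\cdot\mathbf{r})\,\mathbf{s}=\mathbf{r},\]
so \(p\) indeed lies on the unit sphere, and (4) is the second identity written coordinate-wise.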
To simplify calculations, we can bring the Plucker coordinates of \(L_{1}\) to simple form by Mobius transformation. We suppose that \(L_{1}\) and \(L_{2}\) are skew since the case of \(L_{1},L_{2}\) intersecting is obvious.
Figure 4: Hexagonal circular 3-web. 1 elliptic and 2 parabolic pencils.
If \(L_{1}\) is hyperbolic we can choose a representative as \(L_{1}=[0:0:1:0:0:0]\). Write the Plucker coordinates of \(L_{2}\) as \([u:v:w:k:l:m]\). Then one has \(m\neq 0\) since the lines \(L_{1},L_{2}\) do not intersect. (We used the fact that the Plucker coordinates of intersecting lines are orthogonal with respect to the bilinear symmetric form defining the Plucker quadric.) Let us set \(m=1\). Moreover, the coordinates of \(L_{2}\) satisfy the Plucker equation \(ku+lv+w=0\), thus giving \(w\). Since \(L\) intersects \(L_{1}\) and \(L_{2}\) we have \(h=0\) and \(ka+lb+c+uf+gv+wh=0\) respectively. The above two equations cut a curve from the three-dimensional variety of lines touching the quadric.
The touching points \(p\) trace a curve on the Darboux quadric. This curve can be computed as follows. Equations (2) and (4) imply \(fx+gy+hz=0\); with \(h=0\) we have \(g=-fx/y\). Now the normalization \(f^{2}+g^{2}+h^{2}=1\) gives \((x^{2}+y^{2})f^{2}=y^{2}\), which means \(f\) is not identically zero. Therefore the intersection condition \(ka+lb+c+uf+gv+wh=0\) is equivalent to \(kxz+lyz+uy-vx-x^{2}-y^{2}=0\). This equation cuts the curve of tangent points on the Darboux quadric (i.e. the unit sphere centered at the origin). If this curve is in a plane \(z=Ax+By+C\) then by direct calculation one gets \(C^{2}=1\). One can choose \(C=1\) and then, by further calculations, we get \(A=-k\), \(B=-l\), \(l=u\), \(k=-v\). Thus \(L_{2}=[u:v:0:-v:u:1]\) lies in the plane tangent to the Darboux quadric at \((0,0,-1)\), which is the point where the dual to \(L_{2}\) spears the Darboux quadric. Note that the plane of the curve of tangent points is \(z=vx-uy+1\).
If \(L_{1}\) is elliptic we choose \(L_{1}=[0:0:0:0:0:1]\). Then \(w\neq 0\) and we can normalize \(L_{2}=[u:v:1:k:l:m]\), and the rest of the reasoning goes in a similar way.
If \(L_{1}\) is parabolic we choose \(L_{1}=[0:-1:0:1:0:0]\). For skew \(L_{1},L_{2}\) we have \(-l+u\neq 0\). Normalizing \(l-u=1\) gives \(l=u+1\). Further we rewrite (4) as
\[f=bz-cy,\quad g=cx-az,\quad h=ay-bx\]
and, proceeding as before, obtain by calculation that \(L_{2}\) satisfies (2), which means it is tangent to the Darboux quadric.
To check the last claim, we normalize the configuration so that \(L_{1}\) and \(L_{2}\) are tangent to the Darboux quadric at points \((0,0,\pm 1)\), look for a plane containing a circle of tangent points in the form \(y=Ax\), find two solutions for \(A\), and verify the geometry by calculation. \(\Box\)
**Definition 2**: _Motivated by Lemma 1, we will refer to the circle of tangent points described in Proposition 4, whose polar point is not an intersection point of \(L_{1},L_{2}\), as a singular circle._
**Remark 1.** It is easy to see that the curve of touching points is of order four. In the hypothesis of the Proposition, for the case of skew \(L_{1}\), \(L_{2}\) not tangent to the Darboux quadric, this curve also splits. One component is a circle and the real trace of the other is the point where the dual to the elliptic line meets the hyperbolic one.
**Corollary 5**: _For hexagonal circular 3-webs formed by three pencils with non-coplanar polar lines, any two polar lines \(L_{1},L_{2}\) are either dual or obey the geometric restriction described in Proposition 4. Moreover, the polar point of a singular circle, defined by two polar lines, belongs to the third one._
_Proof:_ Let \(L_{1},L_{2}\) be skew but not dual and such that the curve of points, where lines \(L\) meeting both \(L_{1},L_{2}\) touch the Darboux quadric, is not planar. Then by Lemma 1 any such line meets also the third polar line \(L_{3}\). Since \(L_{1},L_{2}\) are skew, \(L_{3}\) can intersect neither of \(L_{1},L_{2}\). Thus \(L_{1},L_{2},L_{3}\) belong to one ruling of some quadric \(Q\) and the touching lines \(L\) form the other ruling. But then \(L_{1},L_{2},L_{3}\) are also tangent to the Darboux quadric. This contradicts our initial assumption. \(\square\)
Let \((p,q),(r,s)\in\mathbb{R}^{2}\) be vertices of an elliptic pencil, then the pencil circles form the family
\[I(x,y):=\frac{[(p-x)(r-x)+(q-y)(s-y)]^{2}}{[(p-x)^{2}+(q-y)^{2}][(r-x)^{2}+(s-y)^ {2}]}=const.\]
The circles are the integral curves of the ODE
\[\omega_{e}:=d(I)=f(x,y)dx+g(x,y)dy=0. \tag{5}\]
The circles of the hyperbolic pencil with limit circles at \((p,q),(r,s)\) are orthogonal to the circles of the above elliptic one. They are the integral curves of the ODE
\[\omega_{h}:=g(x,y)dx-f(x,y)dy=0. \tag{6}\]
Now we work out the cases with parabolic pencils. Let \((p,q)\in\mathbb{R}^{2}\) be the vertex of a parabolic pencil and \([r:1-r]\in\mathbb{P}\) a direction orthogonal to the line tangent to all circles of the pencil. Then the pencil circles form the family
\[\tilde{I}(x,y):=\frac{(x-p)^{2}+(y-q)^{2}}{r(p-x)+(r-1)(y-q)}=const.\]
The circles are the integral curves of the ODE
\[\omega_{p}:=d(\tilde{I})=0. \tag{7}\]
For the exceptional direction \((1,-1)\), the pencil circles family is
\[\bar{I}(x,y):=\frac{(x-p)^{2}+(y-q)^{2}}{(x-p)-(y-q)}=const\]
and the corresponding ODE
\[\omega_{\bar{p}}:=d(\bar{I})=0. \tag{8}\]
Observe that the differential forms \(\sigma_{i}\), describing a 3-web of circles formed by 3 pencils, are algebraic. Thus Lemma 1 remains valid also over the complex numbers, in passing from \(\mathbb{RP}^{3}\) to \(\mathbb{CP}^{3}\). By circles here we understand sections of the complex Darboux quadric by complex planes. The complexification simplifies the proof of the following theorem.
**Theorem 6**: _[_S-05_]_ _Any hexagonal circular 3-web formed by three pencils with non-coplanar polar lines is Mobius equivalent to one from the Blaschke-Wunderlich-Balabanova-Erdogan-Lazareva list._
_Proof:_ We consider all types of non-coplanar polar line triples \(L_{1},L_{2},L_{3}\).
\(\bullet\)_Three hyperbolic pencils._
By Corollary 5, all lines intersect at one point \(p\). This point cannot be outside the Darboux quadric. In fact, applying a suitable Mobius transformation, we send this point to an infinite one, say \(p_{x}=[1:0:0:0]\) and the plane of \(L_{1},L_{2}\) to the plane \(Z=0\). Then, by Corollary 5, the polar line \(L_{3}\) joins \(p_{x}\) and \(p_{z}=[0:0:1:0]\), the polar point of the plane \(Z=0\). Thus \(L_{3}\) cannot be hyperbolic as supposed. The point \(p\) cannot be on the Darboux quadric either: we
can send it to \(p=(1,0,0)\) and the plane of \(L_{1},L_{2}\) to the plane \(Z=0\). Now Corollary 5 implies that \(L_{3}\) joins \(p\) and \(p_{z}\). Thus \(L_{3}\) is parabolic and not hyperbolic.
Therefore \(p\) is inside the Darboux quadric and one can send it to the origin \((0,0,0)\). Corollary 5 implies that any of the lines \(L_{1},L_{2},L_{3}\) contains the point dual to the plane of the other two lines. Thus \(L_{1},L_{2},L_{3}\) are orthogonal and we have the web shown on the left of Figure 1. Note that there is only one such web up to Mobius transformation.
\(\bullet\)_One elliptic and two hyperbolic pencils_.
We can suppose that \(L_{3}\) is an elliptic polar line and that \(L_{3}\) is the infinite line in the plane \(Z=0\). Due to Corollary 5 the polar lines \(L_{1},L_{2}\), being hyperbolic, must intersect. Then the plane of \(L_{1},L_{2}\) is dual to some point on \(L_{3}\) and therefore contains the line \(X=Y=0\) dual to \(L_{3}\), and can be assumed to be the plane \(Y=0\).
Suppose that none of \(L_{1},L_{2}\) is dual to \(L_{3}\). If both \(L_{1},L_{2}\) intersect \(L_{3}\) then the intersection point is \(p_{x}=[1:0:0:0]\). Applying Corollary 5 to the pair \(L_{1},L_{3}\), we conclude that \(L_{2}\), joining \(p_{x}\) and the polar point of the plane of \(L_{1},L_{3}\), does not meet the Darboux quadric and is not hyperbolic as supposed. Thus we can suppose that \(L_{1},L_{3}\) are skew and that \(L_{1}\), by Corollary 5, contains \((0,0,-1)\). Since \(L_{1}\) is not dual to \(L_{3}\), we infer by Corollary 5 that \(L_{2}\) contains the dual point of the singular circle for \(L_{1},L_{3}\). This point lies in the tangent plane to the Darboux quadric at \((0,0,1)\), therefore \(L_{2}\), being non-parabolic, cannot meet \(L_{3}\) and cannot also contain \((0,0,1)\). Then it passes through \((0,0,-1)\), which contradicts the geometric restriction imposed by Corollary 5.
So we can assume that \(L_{2}\) is dual to \(L_{3}\), i.e. it is the line \(X=Y=0\). Corollary 5 prevents \(L_{1}\) from being skew with \(L_{3}\). Thus it meets \(L_{3}\) and therefore intersects \(L_{2}\) in a point inside the Darboux quadric. We obtain the hexagonal web equivalent to the one on the right of Figure 1.
\(\bullet\)_One hyperbolic and two elliptic pencils_.
By Corollary 5, elliptic lines, say \(L_{2},L_{3}\), intersect. We can assume that the hyperbolic line \(L_{1}\) is the coordinate axis \(X=Y=0\).
First consider the case when two lines, \(L_{1},L_{2}\), are dual. Then \(L_{2}\) is the infinite line in the plane \(Z=0\) and we can assume the intersection point of \(L_{2}\) and \(L_{3}\) to be \(p_{y}=[0:1:0:0]\). Then \(L_{3}\) cannot be skew with \(L_{1}\) due to Corollary 5: the polar of the singular circle determined by \(L_{1},L_{3}\) is finite and cannot lie on the infinite line \(L_{2}\). Thus \(L_{3}\), being elliptic, intersects \(L_{1}\) outside the Darboux quadric and we get the web equivalent to the one shown on the left of Figure 2.
Now suppose that no pair \(L_{1},L_{i}\) is skew. Then \(L_{2}\) and \(L_{3}\) intersect \(L_{1}\). Since the triple \(L_{1},L_{2},L_{3}\) is not coplanar, all three lines intersect at one point outside the Darboux quadric, which can be taken as \(p_{z}=[0:0:1:0]\). By Corollary 5, the line \(L_{3}\) contains the dual point of the singular circle determined by \(L_{1},L_{2}\) and the line \(L_{2}\) contains the dual point of the singular circle determined by \(L_{1},L_{3}\). This fixes \(L_{1},L_{2},L_{3}\) up to rotation around \(z\)-axis and gives the web shown in the center of Figure 1.
Finally, consider the case with skew but not dual \(L_{1},L_{2}\). Applying Mobius transformation, we send the intersection point of \(L_{2}\) and \(L_{3}\) to \(p_{x}=[1:0:0:0]\) (preserving the position of \(L_{1}\)). Then \(L_{3}\) joins \(p_{x}\) and the polar point of the singular circle determined by \(L_{1},L_{2}\). We obtain the web, equivalent to one on the left of Figure 3.
\(\bullet\)_Three elliptic pencils_.
We treat this case using the complex version of Corollary 5. First we conclude that, being non-coplanar, all three polar lines intersect at one point, which we can send to \(p_{y}=[0:1:0:0]\). None of the 3 real planes, each containing a pair of polar lines, can miss the real Darboux quadric completely. In fact, if the real plane of \(L_{1},L_{2}\) does not intersect the real Darboux quadric then the polar point of the complex singular circle of \(L_{1},L_{2}\) is inside the real Darboux quadric and \(L_{3}\), being elliptic, cannot contain this point. None of the 3 real planes, each containing a pair of polar lines, can cut the real Darboux quadric either. If the real plane of \(L_{1},L_{2}\) cuts the real Darboux quadric, the polar point of the real singular circle of \(L_{1},L_{2}\) is outside the real Darboux quadric and the real plane of \(L_{1},L_{3}\) does not meet the real Darboux quadric. Thus all 3 planes are tangent to the Darboux quadric. A representative of such a web is shown in the center of Figure 3.
\(\bullet\)_One parabolic and two hyperbolic pencils_.
By Corollary 5 all 3 polar lines must intersect in one point. The intersection point cannot be inside the Darboux quadric since one line is parabolic. It also cannot be outside: we can send it to \(p_{x}=[1:0:0:0]\), and the parabolic line, joining \(p_{x}\) and the point dual to the plane of the hyperbolic lines, will miss the Darboux quadric. Therefore this point is on the Darboux quadric. One can move the plane of the hyperbolic lines to \(Z=0\). Then the parabolic line contains \(p_{z}=[0:0:1:0]\) and none of the hyperbolic lines can pass through the point dual to the singular circle of the other two lines. Thus there is no hexagonal web with non-coplanar parabolic and two hyperbolic polar lines.
\(\bullet\)_One parabolic, one hyperbolic and one elliptic pencil_.
By Corollary 5 the parabolic line meets the other two. If the hyperbolic line intersects the elliptic one at some point \(p\) outside the Darboux quadric, we can move the hyperbolic line to \(X=Z=0\) and \(p\) to \(p_{y}=[0:1:0:0]\). Then the point \(p_{s}\), dual to the singular circle of the hyperbolic and elliptic lines, is infinite and the third polar line, joining \(p_{y}\) and \(p_{s}\), cannot be parabolic.
Therefore hyperbolic and elliptic lines are skew and Corollary 5 fixes the configuration up to Mobius transformation: if these lines are dual we get the type shown in the center of Figure 2, otherwise the type on the right of Figure 3.
\(\bullet\)_Two parabolic and one hyperbolic pencils_.
By Corollary 5 the hyperbolic line meets both parabolic lines.
If the parabolic lines are skew then the corresponding two singular circles have their polar points on the line dual to the one joining the points of tangency of parabolic lines with the Darboux quadric. This dual line is elliptic, therefore this configuration is not possible.
If the parabolic lines intersect outside the Darboux quadric then we can bring the plane containing them to \(Z=0\). Now the hyperbolic line contains \(p_{z}=[0:0:1:0]\) by Corollary 5 and, intersecting both parabolic lines, must meet them at their common point. Then it misses the Darboux quadric and is not hyperbolic.
Therefore the parabolic lines are tangent to the Darboux quadric at the same point. Since the polar lines are not coplanar the hyperbolic line contains this point. We can bring the hyperbolic line to \(X=Y=0\). Then Corollary 5 implies that the parabolic lines are dual and we obtain the web type shown on the right of Figure 2.
\(\bullet\)_Two parabolic and one elliptic pencils_.
By Corollary 5 the elliptic line meets both parabolic lines.
If the parabolic lines are skew then the corresponding two singular circles have their dual points on the line dual to the one joining the points of tangency of the parabolic lines with the Darboux quadric. The third polar line must contain these points; it is elliptic and we get the web type presented in Figure 4.
If the parabolic lines intersect outside the Darboux quadric then we can bring the plane containing them to \(Z=0\). By Corollary 5 the elliptic line contains \(p_{z}=[0:0:1:0]\) and, intersecting both parabolic lines, must meet them at their common point. Thus we get the type shown in Figure 4.
If the parabolic lines are tangent to the Darboux quadric at the same point then the third
line, being elliptic, cannot pass through this point. Therefore it is coplanar with the parabolic lines.
\(\bullet\)_Three parabolic pencils_.
Suppose that two polar lines \(L_{1}\) and \(L_{2}\) intersect. If the intersection point is outside the Darboux quadric then we bring the plane of \(L_{1},L_{2}\) to \(Z=0\). Since the third polar line \(L_{3}\) does not lie in this plane, it contains \(p_{z}=[0:0:1:0]\) by Corollary 5. Therefore it is skew with at least one of \(L_{1},L_{2}\). Let it be \(L_{1}\). Then by Corollary 5 the line \(L_{2}\) contains both dual points of the two singular circles of \(L_{1},L_{3}\), which is obviously not possible.
If \(L_{1},L_{2}\) touch the Darboux quadric at the same point then the line \(L_{3}\) is skew with at least one of \(L_{1},L_{2}\). Again this is precluded by Corollary 5.
Therefore \(L_{1},L_{2},L_{3}\) are pairwise skew. Consider two singular circles \(C_{1},C_{2}\) of \(L_{1},L_{2}\), their polar points \(p_{1},p_{2}\) and the family of lines \(L\) touching the Darboux quadric and meeting both \(L_{1},L_{2}\). The third line \(L_{3}\), being parabolic, cannot contain both points \(p_{1},p_{2}\). If \(p_{1}\notin L_{3}\) then by Lemma 1 the family of lines \(L\) touching the Darboux quadric at points of \(C_{1}\) must meet also \(L_{3}\). Therefore these lines \(L\) constitute one ruling of a quadric touching the Darboux quadric along \(C_{1}\), so that \(L_{1},L_{2},L_{3}\) belong to the second ruling. If \(p_{2}\in L_{3}\) then, sending the plane of \(C_{1}\) to \(Y=0\) and the points, where \(L_{1},L_{2}\) touch the Darboux quadric, to \((0,0,\pm 1)\), we see that \(p_{2}=[1:0:0:0]\) since \(C_{2}\) is orthogonal to \(C_{1}\) and passes through \((0,0,\pm 1)\). Then \(L_{3}\) must touch the Darboux quadric also at one of the points \((0,0,\pm 1)\), which contradicts the initial assumption that all 3 lines are skew. Thus \(p_{2}\notin L_{3}\). Then the family of lines \(L\) touching the Darboux quadric at points of \(C_{2}\) must meet also \(L_{3}\). This is not possible, as the lines \(L\), constituting a ruling of a quadric that touches the Darboux quadric along \(C_{1}\), cannot meet the orthogonal circle \(C_{2}\). \(\Box\)
## 5 Polar curve splits into conic and straight line
First we describe the types, then we prove that the list is complete. Some types are one-parametric families and we denote the parameter value by \(c\). The polar conic will be given either by an explicit parametrization of the circle equations, where we reserve \(u\) for the parameter, or by indicating the conic equations. The former representation gives also a parametrization of the polar conic: to a circle
\[\epsilon(x^{2}+y^{2})+\alpha x+\beta y+\gamma=0,\]
(where \(\epsilon=0\) or \(\epsilon=1\), the case \(\epsilon=0\) giving a line) corresponds the polar point with the tetracyclic coordinates
\[[\alpha:\beta:\gamma-\epsilon:-\gamma-\epsilon]. \tag{9}\]
The parameter for circles in the pencil will be denoted by \(v\).
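For instance (an illustration of (9), ours): the unit circle \(x^{2}+y^{2}=1\), i.e. \(\epsilon=1\), \(\alpha=\beta=0\), \(\gamma=-1\), gets the tetracyclic coordinates \([0:0:-2:0]\sim[0:0:1:0]\), which is the polar point of the plane \(Z=0\) cutting the corresponding circle (the equator) on the Darboux quadric.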
To check hexagonality, one computes the Blaschke curvature using the formula in the Appendix as follows. The polar conic gives a one-parameter family of circles on the unit sphere. The stereographic projection from the "south" pole
\[X=\frac{2x}{1+x^{2}+y^{2}},\ \ \ \ Y=\frac{2y}{1+x^{2}+y^{2}},\ \ \ \ Z=\frac{1-x^{2}-y^{2}}{1+x^{2}+y^{2}},\]
transforms this family into a family of circles in the plane parametrized by the points of the conic. The ODE for circles, defined by the conic,
\[P^{2}+A(x,y)P+B(x,y)=0,\ \ \ \ P=\frac{dy}{dx}, \tag{10}\]
is obtained by differentiating and excluding the coordinates of the conic points. The slope \(R\) comes from the pencil of circles.
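As an illustration of this elimination (our sketch, not the code used by the authors), the ODE for the circle family of Type 2 below can be obtained as follows, with the symbols cu, su standing for \(\cos u,\ \sin u\) of the parametrization given there:

```python
# Sketch (ours): eliminate the conic-point parameter to get the slope equation P^2 + A P + B = 0
# for the circle family of Type 2: x^2 + y^2 + (2 cos u / c) x + (2 sin u / c) y = 1.
import sympy as sp

x, y, P, c, cu, su = sp.symbols('x y P c cu su')

F = x**2 + y**2 + 2*cu/c*x + 2*su/c*y - 1
G = sp.diff(F, x) + P*sp.diff(F, y)        # total x-derivative along a curve with slope P

sol = sp.solve([F, G], [cu, su], dict=True)[0]            # solve the linear system for cos u, sin u
ode = sp.numer(sp.together(sol[cu]**2 + sol[su]**2 - 1))  # impose cos^2 u + sin^2 u = 1
print(sp.factor(ode))                                     # a polynomial, quadratic in the slope P
```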
For some types we add an additional geometric detail in the "title" to separate the types.
**1. Polar conic plane does not cut Darboux quadric, hyperbolic pencil.**
The pencil with limit circles at the origin and at the infinite point gives circles
\[x^{2}+y^{2}=v,\]
the polar conic is the circle
\[X_{0}^{2}+Y_{0}^{2}+4cX_{0}Z_{0}=0,\quad U_{0}=0,\quad c>0,\]
defining the family
\[x^{2}+y^{2}-\frac{4c}{c^{2}u^{2}+1}x+\frac{4c^{2}u}{c^{2}u^{2}+1}y=1,\]
the circles of the family enveloping the cyclic
\[(x^{2}+y^{2})^{2}-(x^{2}+y^{2})(4cx+2)-4c^{2}y^{2}+4cx+1=0\]
as shown on the left in Figure 5.
**2. Polar conic plane does not cut Darboux quadric, hyperbolic pencil, webs symmetric by rotations.**
The pencil with limit circles at the origin and at the infinite point gives circles
\[x^{2}+y^{2}=v,\]
the polar conic is the circle
\[X_{0}^{2}+Y_{0}^{2}=\frac{4Z_{0}^{2}}{c^{2}},\quad U_{0}=0,\]
defining the family
\[x^{2}+y^{2}+\frac{2\cos(u)}{c}x+\frac{2\sin(u)}{c}y=1,\]
the circles of the family enveloping the cyclic
\[c^{2}(x^{2}+y^{2})^{2}-(2c^{2}+4)(x^{2}+y^{2})+c^{2}=0,\]
which splits into two concentric circles as shown in the center of Figure 5.
Figure 5: Types 1, 2, 3.
**3. Polar conic plane cuts Darboux quadric, hyperbolic pencil.**
The pencil with limit circles at the origin and at the infinite point gives circles
\[x^{2}+y^{2}=v,\]
the polar conic is the circle
\[x_{0}^{2}+y_{0}^{2}=4cx_{0},\quad z_{0}=0,\quad c>0,\]
defining the family
\[x^{2}+y^{2}+\frac{4c}{c^{2}u^{2}+1}x-\frac{4c^{2}u}{c^{2}u^{2}+1}y=-1,\]
the circles of the family enveloping the cyclic
\[(x^{2}+y^{2})^{2}+(x^{2}+y^{2})(4cx+2)-4c^{2}y^{2}+4cx+1=0,\]
as shown on the right of Figure 5.
**4. Polar conic plane cuts Darboux quadric, hyperbolic pencil, webs symmetric by rotations.**
The pencil with limit circles at the origin and at the infinite point gives circles
\[x^{2}+y^{2}=v,\]
the polar conic is the circle
\[x_{0}^{2}+y_{0}^{2}=\frac{4}{c^{2}},\quad z_{0}=0,\]
defining the family
\[x^{2}+y^{2}-\frac{2\cos(u)}{c}x-\frac{2\sin(u)}{c}y=-1,\]
the circles of the family enveloping the cyclic
\[c^{2}(x^{2}+y^{2})^{2}+(2c^{2}-4)(x^{2}+y^{2})+c^{2}=0,\]
which splits into two concentric circles as shown on the left in Figure 6.
Figure 6: Types 4, 5, 6.
**5. Polar conic plane cuts Darboux quadric, elliptic pencil, webs symmetric by homotheties.**
The pencil with vertexes at the origin and at the infinite point gives lines
\[y=vx,\]
the polar conic is
\[y_{0}^{2}+cz_{0}^{2}=c,\quad x_{0}=0,\ \ c>1,\]
defining the family of circles
\[x^{2}+(y+u)^{2}=(1-1/c)u^{2},\]
the circles enveloping the lines
\[x^{2}=(c-1)y^{2}\]
as shown in the center in Figure 6. The webs of the family are symmetric by homotheties with the center in the origin.
**6. Polar conic plane cuts Darboux quadric, elliptic pencil, polar conic and dual to conic line intersect in 2 points on the Darboux quadric.**
The pencil with vertexes at the origin and at the infinite point gives lines
\[y=vx,\]
the polar conic is
\[2cy_{0}=z_{0}^{2}-1,\quad x_{0}=0,\quad c>0,\]
defining the family of circles
\[x^{2}+y^{2}+\frac{(u-1)}{cu}y+\frac{u-1}{u+1}=0,\]
the circles enveloping the cyclic
\[c^{2}(x^{2}+y^{2})^{2}+(4cy-2c^{2})(x^{2}+y^{2})+(2y+c)^{2}=0\]
as shown on the right in Figure 6.
**7. Polar conic plane cuts Darboux quadric, elliptic pencil.**
The pencil with vertexes at the origin and at the infinite point gives lines
\[y=vx,\]
the polar conic is
\[2cy_{0}z_{0}+1=z_{0}^{2},\quad x_{0}=0,\quad c>0,\]
defining the family of circles
\[x^{2}+y^{2}-\frac{c+u}{cu}y+\frac{c+u}{c-u}=0,\]
as shown on the left in Figure 7.
**8. Polar conic plane cuts Darboux quadric, parabolic pencil.**
The pencil with the vertex at the infinite point gives lines
\[y=v,\]
the polar conic
\[x_{0}^{2}-z_{0}=1,\quad y_{0}=0,\]
gives the family of circles
\[\left(x-\frac{1}{u}\right)^{2}+y^{2}=\frac{u^{2}-1}{u^{2}}.\]
The circles envelop the ellipse
\[x^{2}+2y^{2}=2\]
as shown in the center in Figure 7.
**9. Polar conic plane cuts Darboux quadric, parabolic pencil, webs symmetric by translations.**
The pencil with the vertex at the infinite point gives lines
\[y=v,\]
the polar conic is
\[x_{0}^{2}+(1-\sqrt{2})z_{0}^{2}-2\sqrt{2}z_{0}=1+\sqrt{2},\quad y_{0}=0,\]
defining the family of circles
\[(x+u)^{2}+y^{2}=1,\]
the circles touching the lines
\[y=\pm 1\]
as shown on the right in Figure 7.
Figure 7: Types 7, 8, 9.
**10. Polar conic plane tangent to Darboux quadric, hyperbolic pencil.**
The pencil with limit circles at \((1,0)\) and \((-1,0)\) gives circles
\[x^{2}+y^{2}+1=vx,\]
the polar conic is
\[x_{0}^{2}+(1-c)y_{0}^{2}=c,\quad z_{0}=-1,\quad c>0,\quad c\neq 1,\]
defining the family of lines
\[x_{0}x+y_{0}y=1,\]
the lines of the family enveloping the conic
\[cx^{2}+\frac{c}{c-1}y^{2}=1\]
with foci \((1,0)\) and \((-1,0)\) as shown on the left in Figure 8.
**11. Polar conic plane tangent to Darboux quadric, hyperbolic pencil, polar line meets polar conic.**
The pencil with limit circles at \((1,0)\) and \((-1,0)\) gives circles
\[x^{2}+y^{2}+1=vx,\]
the polar conic is
\[y_{0}^{2}-cx_{0}y_{0}+cy_{0}+x_{0}=0,\quad z_{0}=-1,\quad c\geq 0,\]
defining the family of lines
\[x_{0}x+y_{0}y=1,\]
the lines of the family enveloping the parabola
\[(cx-y)^{2}-(2c^{2}+4)x-2cy+c^{2}=0\]
with focus at \((1,0)\) as shown in the center in Figure 8.
Figure 8: Types 10, 11, 12.
**12. Polar conic plane tangent to Darboux quadric, hyperbolic pencil, polar line contains the point where polar conic plane touches the Darboux quadric.**
The pencil with limit circles at the origin and infinity gives circles
\[x^{2}+y^{2}=v,\]
the polar conic is
\[x_{0}^{2}+y_{0}^{2}-2y_{0}=0,\quad z_{0}=-1\]
defining the family of lines
\[x_{0}x+y_{0}y=1,\]
the lines of the family enveloping the parabola
\[x^{2}+2y=1\]
with focus at \((0,0)\) as shown on the right in Figure 8.
**13. Polar conic plane tangent to Darboux quadric, hyperbolic pencil, web symmetric by rotations.**
The pencil with limit circles at the origin and infinity gives circles
\[x^{2}+y^{2}=v,\]
the polar conic is
\[x_{0}^{2}+y_{0}^{2}=1,\quad z_{0}=-1\]
defining the family of lines
\[x_{0}x+y_{0}y=1,\]
the lines of the family enveloping the circle
\[x^{2}+y^{2}=1\]
as shown on the left in Figure 9.
**14. Polar conic plane tangent to Darboux quadric, elliptic pencil.**
The pencil with vertexes at \((1,0)\) and \((-1,0)\) gives circles
\[\frac{(x^{2}+y^{2}-1)^{2}}{(x^{2}+y^{2}-2x+1)(x^{2}+y^{2}+2x+1)}=v,\]
the polar conic is
\[cx_{0}^{2}+(c+1)y_{0}^{2}+1=0,\quad z_{0}=-1,\]
defining the family of lines
\[x_{0}x+y_{0}y=1,\]
the lines of the family enveloping the conic
\[\frac{x^{2}}{c}+\frac{y^{2}}{c+1}=-1\]
with foci \((1,0)\) and \((-1,0)\) as shown in the center in Figure 9.
**15. Polar conic plane tangent to Darboux quadric, parabolic pencil.**
The pencil with vertex at the origin gives circles
\[x^{2}+y^{2}=2vy,\]
the polar conic is
\[x_{0}^{2}+y_{0}^{2}=1,\quad z_{0}=-1\]
defining the family of lines
\[x_{0}x+y_{0}y=1,\]
the lines of the family enveloping the circle
\[x^{2}+y^{2}=1\]
as shown on the right in Figure 9.
**Theorem 7**: _The webs of different types in the above classification list are not Mobius equivalent. The webs of the same type with different normal forms are not Mobius equivalent._
_Proof:_ The webs from different types are not Mobius equivalent: discrete geometric invariants indicated in the descriptions, such as 1) the presence of an infinitesimal symmetry and 2) the mutual position of the polar line, the polar conic and the Darboux quadric, effectively separate the types.
To see that the different normal forms within a family are not Mobius equivalent, one computes the subgroup \(G_{p}\) of \(PSO(3,1)\) respecting the positions of the polar conic plane and the polar line in the chosen normalization and checks that the \(G_{p}\)-orbits of the canonical forms are different.
For the first 4 types, \(G_{p}\) is generated by \(R_{z}\) and by reflections in the coordinate planes.
For the 5th, 6th, 7th type, \(G_{p}\) is generated by \(B_{z}\) and by reflections in the coordinate planes.
For the 10th, 11th and 14th types, \(G_{p}\) is discrete and generated by reflections in the planes \(x=0\) and \(y=0\). \(\Box\)
Figure 9: Types 13, 14, 15.
As in the case of 3 pencils, Lemma 1 effectively selects candidates among 3-webs that can be hexagonal. The singularities of the type described by the Lemma arise when either 1) a line, joining 2 different points on the polar conic, touches the Darboux quadric at a point \(p_{t}\), while the polar conic plane is not tangent to the Darboux quadric, or 2) a line, joining a point on the polar line with a point on the polar conic, touches the Darboux quadric at a point \(p_{t}\) while the tangent plane to the Darboux quadric at \(p_{t}\) is not tangent to the polar conic.
In the former case, the points \(p_{t}\) trace a circle, which is the intersection of the polar conic plane with the Darboux quadric. Then, by Lemma 1, the polar point of the polar conic plane lies on the polar line of the pencil.
In the latter case, consider a point \(p_{r}\) running over the polar line. For a non-planar polar set, the polar line meets the polar conic plane \(\pi_{c}\) at only one point. Therefore the one-parameter family of cones tangent to the Darboux quadric and having their vertexes at \(p_{r}\) cuts the plane \(\pi_{c}\) in a one-parameter family of conics \(c_{r}\). Each conic \(c_{r}\) intersects the polar conic \(c_{p}\) at 4 (possibly complex or multiple) points. If the polar conic \(c_{p}\) is not a member of the family \(\{c_{r}\}\), at least one of these 4 intersection points moves along \(c_{p}\) as \(p_{r}\) runs over the polar line. In fact, if the intersection points were stable then \(\{c_{r}\}\) would be a pencil of conics containing \(c_{p}\).
Choose one such moving intersection point \(p_{i}\). The lines \(l_{r}\), tangent to \(c_{r}\) at \(p_{i}\), form a one-parameter family, or congruence of lines. All the objects in this construction are considered as complex, but the polar conic and the polar line must have equations with real coefficients and the real part of the polar conic cannot lie completely inside the Darboux quadric.
**Proposition 8**: _If the polar curve of a hexagonal circular 3-web is non-planar and splits into a line and a smooth conic, then for the web complexification the following hold true:_
_1) the polar of the polar conic plane lies on the polar line, if this plane is not tangent to the Darboux quadric and_
_2) the congruence of lines \(l_{r}\) is a pencil with the vertex on the polar conic, if the polar conic is not a member of the family \(\{c_{r}\}\)._
_Proof:_ The first claim follows directly from Lemma 1. For conic planes missing the Darboux quadric, the complex version works.
To derive the second claim, observe that the line \(p_{r}p_{i}\) touches the Darboux quadric at a singular point treated by Lemma 1. Thus the tangency point must trace a circle on the Darboux quadric and the polar point \(p_{0}\) of the circle must lie on the polar conic. Then the plane, tangent to the Darboux quadric and passing through \(p_{r}p_{i}\), contains \(p_{0}\). This plane cuts the plane \(\pi_{c}\) of the polar conic along a line \(l_{r}\), passing through the \(p_{i}\) and tangent to the corresponding conic \(c_{r}\). Hence \(\{l_{r}\}\) is the pencil with vertex at \(p_{0}\). \(\Box\)
**Theorem 9**: _If the polar curve of a hexagonal circular 3-web splits into a smooth conic and a straight line, not lying in a plane of the conic, then the web is Mobius equivalent to one from the above presented list._
_Proof:_ The polar conic plane either completely misses the Darboux quadric, or cuts it in a circle, or is tangent to it. The polar line is either hyperbolic, or elliptic, or parabolic. Thus we have 9 cases to consider, each case defining a set of webs (possibly empty).
\(\bullet\)_Polar conic plane misses the Darboux quadric_.
Applying a suitable Mobius transformation, we can send the polar conic plane to infinity. Then by Proposition 8 the polar line contains the origin of the affine chart and therefore meets the Darboux quadric at 2 points. Thus the polar line is hyperbolic. Applying a rotation around
the origin, we map these two points to \((0,0,\pm 1)\). In the affine coordinates \(x=\frac{X}{Z},\ y=\frac{Y}{Z}\) on the polar conic plane \(U=0\), the conics \(c_{r}\) are
\[x^{2}+y^{2}=r,\]
where \(r\) is considered as a complex parameter. These conics are real only for real non-negative \(r\). If the polar conic coincides with one of the conics \(c_{r}\) then we get Type 2, symmetric by \(R_{z}\).
If the polar conic is not one of the \(c_{r}\) then, by Proposition 8, the points, where lines of the pencil with the vertex at some point \(p_{0}=(x_{0},y_{0})\in\mathbb{C}^{2}\) touch the circles \(c_{r}\), run over the polar conic. The line of the pencil \(y=y_{0}+k(x-x_{0})\), corresponding to the parameter \(k\in\mathbb{C}\), is tangent to the conic \(c_{r}\) for
\[r=\frac{x_{0}^{2}k^{2}-2x_{0}y_{0}k+y_{0}^{2}}{k^{2}+1}\]
at the point
\[p(k)=(x(k),y(k))=\left(\frac{k(x_{0}k-y_{0})}{k^{2}+1},\frac{y_{0}-x_{0}k}{k^ {2}+1}\right).\]
The points \(p(k)\) run over the complex conic
\[x^{2}+y^{2}=x_{0}x+y_{0}y.\]
This conic is real if and only if \(x_{0}\) and \(y_{0}\) are real. It is smooth if and only if \(p_{0}\neq(0,0)\). We got the first web of the list, Type 1. This conic is the circle passing through the origin \(O=(0,0)\) and \(p_{0}\) and having its center at the midpoint of the segment \(Op_{0}\). (We obtained a theorem of school geometry.) Using \(R_{z}\) we normalize \(p_{0}\) to \(y_{0}=0,x_{0}>0\).
For infinite \(p_{0}\), different from the cyclic points, the line \(l(k)\) of the pencil \(y=ax+k\), \(a\in\mathbb{C}\) touches a unique conic of the family \(\{c_{r}\}\) at the point
\[p(k)=\left(-\frac{ak}{a^{2}+1},\frac{k}{a^{2}+1}\right).\]
The points \(p(k)\) are collinear: \(x(k)+ay(k)=0\) and we cannot obtain a smooth polar conic in this way. Finally, if \(p_{0}\) is cyclic, the lines from the pencil touch the conics \(c_{r}\) at the very point \(p_{0}\) and we get no conic at all.
\(\bullet\)_Polar conic plane cuts the Darboux quadric, hyperbolic pencil_.
We send the conic plane to \(Z=0\). Now the polar line contains the infinite point \([0:0:1:0]\) by Proposition 8. The subgroup of the Mobius group preserving the plane \(Z=0\) is generated by rotations around the \(Z\)-axis and the boosts \(B_{x}\) and \(B_{y}\). Using this subgroup, we normalize the polar line to \(x=y=0\). The conics \(c_{r}\) are circles in the plane \(z=0\) with the center at the origin. Repeating the arguments that we used above for conic planes missing the Darboux quadric, we get Types 3 and 4.
\(\bullet\)_Polar conic plane cuts the Darboux quadric, elliptic pencil_.
We normalize the conic plane to \(X=0\). Then by Proposition 8 the point \([1:0:0:0]\) lies on the polar line. The polar line, being elliptic, meets the plane \(X=0\) at a point \(p\) outside the unit circle. Mobius transformations preserving the plane \(X=0\) are generated by rotations around the \(X\)-axis and the boosts \(B_{y}\) and \(B_{z}\). Using these transformations, we send the point \(p\) to \([0:1:0:0]\). Now the polar line is \(U=Z=0\) and the conics \(c_{r}\) have equations
\[z^{2}+\frac{y^{2}}{r}=1\]
in the affine coordinates. If the polar conic coincides with one of these conics then \(r\) is real and positive and we get Type 5.
Otherwise, by the second claim of Proposition 8, the lines \(l_{r}\) meet at some point \(p_{0}\in c_{p}\). If \(p_{0}=(0,y_{0},z_{0})\) is finite, a line of the pencil of lines \(z=z_{0}+k(y-y_{0})\) with vertex at \(p_{0}\) is tangent to the conic \(c_{r}\) with
\[r=\frac{y_{0}^{2}k^{2}-2y_{0}z_{0}k+z_{0}^{2}-1}{k^{2}}.\]
Thus the points, where the lines \(l_{r}\) touch \(c_{r}\), are parametrized by \(k\) via
\[y=\frac{y_{0}^{2}k^{2}-2y_{0}z_{0}k+z_{0}^{2}-1}{k(y_{0}k-z_{0})},\quad z=\frac {1}{z_{0}-y_{0}k}.\]
Excluding \(k\), we get the conic
\[y_{0}z^{2}-z_{0}yz+y-y_{0}=0.\]
This conic is real only for real \(y_{0},z_{0}\) and is smooth if and only if \(y_{0}(z_{0}^{2}-1)\neq 0.\) If \(z_{0}^{2}<1\) we normalize to \(z_{0}=0\) applying \(B_{z}\) and get Type 6. If \(z_{0}^{2}>1\) we send \(p_{0}\) to infinity applying \(B_{z}\).
For infinite point \(p_{0}=[0:Y_{0}:Z_{0}:0]\) we conclude that \(Y_{0}\neq 0\) and \(Z_{0}\neq 0\), otherwise the tangency points \(p_{i}\) do not trace a conic. Thus we can set \(p_{0}=[0:y_{0}:1:0]\), where \(y_{0}\neq 0\). In the affine coordinates \(u=\frac{U}{Z}\), \(y=\frac{Y}{Z}\) in the plane \(X=0\), the conic \(c_{r}\) is
\[1+\frac{y^{2}}{r}=u^{2}.\]
A line from the pencil \(y=ku+y_{0}\) is tangent to the conic \(c_{r}\) if and only if
\[r=k^{2}-y_{0}^{2}.\]
The points, where the lines \(l_{r}\) touch \(c_{r}\), are parameterized by \(k\) via
\[y=\frac{y_{0}^{2}-k^{2}}{y_{0}},\quad u=-\frac{k}{y_{0}}.\]
This is a parametrization of the polar conic of Type 7
\[y_{0}u^{2}+y=y_{0},\]
which becomes
\[y_{0}U^{2}-y_{0}Z^{2}+YZ=0\]
in homogeneous coordinates.
\(\bullet\)_Polar conic plane cuts the Darboux quadric, parabolic pencil_.
We normalize the conic plane to \(Y=0\). Then the point \([0:1:0:0]\) is on the polar line. The plane \(Y=0\) is stable under rotations \(R_{y}\) along the \(y\)-axis and under boosts \(B_{x}\) and \(B_{z}\). Rotating, if necessary, around the \(y\)-axis, we bring the polar line to \(z=-1\), \(x=0.\) In the affine coordinates on the plane \(Y=0\), the conics \(c_{r}\) have equations
\[x^{2}+\frac{r-1}{r}z^{2}-\frac{2}{r}z-\frac{r+1}{r}=0.\]
If the polar conic coincides with one of \(c_{r}\) then \(r\) is real. The chosen position of polar conic plane and polar line is stable by action of the group generated by \(B_{z}\). Applying it, one can normalize to \(r=\frac{1}{\sqrt{2}}\) and we get Type 9.
Otherwise consider first the pencil of lines \(z=z_{0}+k(x-x_{0})\) with finite vertex at \(p_{0}=(x_{0},0,z_{0})\). A line from the pencil is tangent to the conic \(c_{r}\) with
\[r=\frac{x_{0}^{2}k^{2}-2x_{0}(z_{0}+1)k+(z_{0}+1)^{2}}{(x_{0}^{2}-1)k^{2}-2x_{0}z_{0}k+z_{0}^{2}-1}.\]
Thus the points, where the lines \(l_{r}\) touch \(c_{r},\) are parametrized by \(k\) via
\[x=\frac{k(x_{0}k-z_{0}-1)}{k^{2}-x_{0}k+z_{0}+1},\quad z=\frac{(x_{0}^{2}-1)k^{ 2}-x_{0}(2z_{0}+1)k+z_{0}(z_{0}+1)}{k^{2}-x_{0}k+z_{0}+1}.\]
Excluding \(k,\) we get the conic
\[(z_{0}+1)x^{2}-x_{0}xz+z^{2}-x_{0}x+(1-z_{0})z-z_{0}=0. \tag{11}\]
This conic is real only for real \(x_{0},z_{0}\) and is smooth if and only if \(z_{0}\neq-1.\) The chosen position of the polar conic plane and the polar line is stable by the action of the group generated by \(B_{z}\) and \(B_{x}-R_{y}\). Consider the orbits of points in the plane \(Y=0.\) The orbit dimension is two for points outside the union of the line \(U+Z=0\) and the circle \(X^{2}+Z^{2}=U^{2},\) and is one on this union except for their common point. Thus the finite representatives of the orbits are \([0:0:0:1]\), \([0:0:1:1]\) and \([0:0:-1:1].\) For the first two points, the conics (11) lie inside the Darboux quadric and there are no real circles. For the point \([0:0:-1:1]\) the conic (11) is not smooth.
For infinite vertexes \(p_{0},\) the pencil \(z=k\) gives a non-smooth conic. Thus the pencil can be chosen as \(x=az+k\). A line from the pencil is tangent to the conic \(c_{r}\) with
\[r=\frac{(k-a)^{2}}{k^{2}-a^{2}-1}.\]
The points, where the lines \(l_{r}\) touch \(c_{r},\) are
\[x=\frac{k-a}{a^{2}-ak+1},\quad z=\frac{k^{2}-ak-1}{a^{2}-ak+1}.\]
Excluding \(k,\) we get the conic
\[x^{2}-axz-ax-z-1=0,\]
which is real only for real \(a\). Taking into account the action of the group generated by \(B_{z}\) and \(B_{x}-R_{y},\) we set \(a=0\) and get Type 8.
\(\bullet\)_Polar conic plane tangent to Darboux quadric, hyperbolic pencil_.
If the hyperbolic line does not contain the point where the polar conic plane touches the Darboux quadric then we send this point to \((0,0,-1)\) and the polar line to \(Y=Z=0\). In the affine coordinates on the plane \(z=-1,\) the conics \(c_{r}\) have equations
\[(x-r)^{2}+(1-r^{2})y^{2}+1-r^{2}=0.\]
If the polar conic coincides with one of \(c_{r}\) then the web curvature
\[K_{B}=\frac{4(r^{2}-1)^{2}(x^{2}-1)(rx^{2}+ry^{2}+(r^{2}-3)x+r)(x^{2}+y^{2}-2rx+ 1)^{4}}{x^{4}y^{3}(x^{2}+1-2rx)^{6}}\]
vanishes only for \(r=\pm 1,\) the conic \(c_{r}\) being non-smooth for these values.
A line from the pencil \(y=y_{0}+k(x-x_{0})\) with finite vertex at \(p_{0}=(x_{0},y_{0},-1)\) is tangent to the conic \(c_{r}\) with
\[r=\frac{(x_{0}^{2}+1)k^{2}-2x_{0}y_{0}k+y_{0}^{2}+1}{2k(x_{0}k-y_{0})}.\]
Thus the points, where the lines \(l_{r}\) touch \(c_{r}\), are parametrized by \(k\) via
\[x=\frac{x_{0}(x_{0}^{2}-1)k^{3}-y_{0}(3x_{0}^{2}-1)k^{2}+x_{0}(3y_{0}^{2}+1)k-y _{0}(y_{0}^{2}+1)}{k((x_{0}^{2}-1)k^{2}-2x_{0}y_{0}k+(y_{0}^{2}-1))},\]
\[y=\frac{2(x_{0}k-y_{0})}{(x_{0}^{2}-1)k^{2}-2x_{0}y_{0}k+(y_{0}^{2}-1)}.\]
Excluding \(k\), we get the cubic
\[(x_{0}^{2}-1)y^{3}+(y_{0}^{2}-1)x^{2}y-2x_{0}y_{0}xy^{2}+2y_{0}x^{2}+2y_{0}y^{2 }-2x_{0}y_{0}x+(x_{0}^{2}-y_{0}^{2})y=0. \tag{12}\]
This cubic splits into a smooth real conic and a line in 3 cases:
1) for \(y_{0}=0\) equation (12) factors as \(y(x^{2}+(1-x_{0}^{2})y^{2}-x_{0}^{2})=0\) and we get Type 10,
2) for \(x_{0}=1\), \(y_{0}\neq 0\) equation (12) factors as \((x-1)((y_{0}^{2}-1)xy-2y_{0}y^{2}+2y_{0}x+(y_{0}^{2}-1)y)=0\),
3) for \(x_{0}=-1\), \(y_{0}\neq 0\) equation (12) factors as \((x+1)((y_{0}^{2}-1)xy+2y_{0}y^{2}+2y_{0}x-(y_{0}^{2}-1)y)=0\).
The cases 2) and 3) give Type 11, the substitution \(x\rightarrow-x\) reducing one to the other.
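The three splittings of the cubic (12) listed above can be checked mechanically; the following short SymPy snippet (illustrative, not part of the original text) factors (12) for the three special choices of \((x_{0},y_{0})\).

```python
# Illustrative SymPy check of the splittings of the cubic (12).
import sympy as sp

x, y, x0, y0 = sp.symbols('x y x_0 y_0')

cubic = ((x0**2 - 1)*y**3 + (y0**2 - 1)*x**2*y - 2*x0*y0*x*y**2
         + 2*y0*x**2 + 2*y0*y**2 - 2*x0*y0*x + (x0**2 - y0**2)*y)

# cases 1)-3): each factorization should contain a linear factor (a line)
for case in ({y0: 0}, {x0: 1}, {x0: -1}):
    print(case, sp.factor(cubic.subs(case)))
```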
For infinite vertexes \(p_{0}\), the pencil \(x=k\) gives a non-smooth conic. Thus the pencil can be chosen as \(y=ax+k\). A line from the pencil is tangent to the conic \(c_{r}\) with
\[r=\frac{k^{2}+a^{2}+1}{2ak}.\]
The points, where the lines \(l_{r}\) touch \(c_{r}\), are
\[x=\frac{k(k^{2}-a^{2}+1)}{a(k^{2}-a^{2}-1)},\quad y=\frac{2k}{a^{2}-k^{2}+1}.\]
Excluding \(k\), we get the cubic
\[y^{3}+a^{2}x^{2}y+2axy^{2}+2ax+(1-a^{2})y=0.\]
The cubic splits into a line and a conic only for \(a=0\) or for \(a^{2}\pm 2ia-1=0\). The former case gives non-real and non-smooth conic \(y^{2}+1=0\), the latter - the non-real conic \(xy\pm i(y^{2}+2)=0\).
If the hyperbolic line contains the point where the polar conic plane touches the Darboux quadric then we send this point to \((0,0,-1)\) and the polar line to \(X=Y=0\). In the affine coordinates on the plane \(z=-1\), the conics \(c_{r}\) are concentric circles. The case of concentric circles was considered above. We get Type 12 and Type 13.
\(\bullet\)_Polar conic plane tangent to Darboux quadric, elliptic pencil_.
If one vertex of the elliptic pencil coincides with the point where the polar conic plane touches the Darboux quadric then the polar curve is planar.
Thus we can send the tangent point to \((0,0,-1)\) and the vertexes to \((\pm 1,0,0)\). The conics \(c_{r}\) have equations
\[(r^{2}+1)x^{2}+(y+r)^{2}=r^{2}+1.\]
If the polar conic coincides with one of \(c_{r}\) then the web curvature
\[K_{B}=-\frac{64(r^{2}+1)^{2}x(y^{2}+1)(rx^{2}+ry^{2}+(3+r^{2})y-r)(2ry-x^{2}-y^{2 }+1)^{4}}{(x^{2}-y^{2}-1)^{4}(r^{2}-x^{2}+1)^{6}}\]
vanishes only for \(r=\pm i\), the conic \(c_{r}\) being non-smooth for these values.
A line from the pencil \(y=y_{0}+k(x-x_{0})\) is tangent to the conic \(c_{r}\) with
\[r=\frac{(x_{0}^{2}-1)k^{2}-2x_{0}y_{0}k+(y_{0}^{2}-1)}{2(x_{0}k-y_{0})}.\]
The points, where the lines \(l_{r}\) touch \(c_{r}\), are
\[x=\frac{2k(x_{0}k-y_{0})}{(x_{0}^{2}+1)k^{2}-2x_{0}y_{0}k+y_{0}^{2}+1},\]
\[y=-\frac{x_{0}(x_{0}^{2}-1)k^{3}-y_{0}(3x_{0}^{2}-1)k^{2}+x_{0}(3y_{0}^{2}+1)k -y_{0}(y_{0}^{2}+1)}{(x_{0}^{2}+1)k^{2}-2x_{0}y_{0}k+y_{0}^{2}+1}.\]
Excluding \(k\), we get the cubic
\[(y_{0}^{2}+1)x^{3}-2x_{0}y_{0}x^{2}y+(x_{0}^{2}+1)xy^{2}-2x_{0}x^{2}-2x_{0}y^{ 2}+(x_{0}^{2}-y_{0}^{2})x+2x_{0}y_{0}y=0.\]
This cubic splits into a conic and a line in 4 cases:
1) for \(y_{0}=\pm i\) the cubic equation factors as
\[(\pm i-y)(\pm 2ix_{0}x^{2}-(1+x_{0}^{2})xy\mp i(1+x_{0}^{2})x+2x_{0}y)=0,\]
2) for \(y_{0}=\pm ix_{0}\) the cubic equation factors as
\[(x\pm iy)((1-x_{0}^{2})x^{2}\mp i(x_{0}^{2}+1)xy-2x_{0}x\pm 2ix_{0}y+2x_{0}^{2})=0,\]
3) for \(x_{0}=0\), the cubic equation factors as \(x((y_{0}^{2}+1)x^{2}+y^{2}-y_{0}^{2})=0\),
4) for \(x_{0}=\pm 1\) the cubic equation factors as \((x\mp 1)((y_{0}^{2}+1)x^{2}+2y^{2}\mp 2y_{0}xy\pm(y_{0}^{2}-1)x-2y_{0}y)=0\). In cases 1) and 2) the conic is not real, case 3) gives Type 14, and in case 4) the web curvature does not vanish.
For infinite vertexes \(p_{0}\), the pencil \(x=k\) gives a non-smooth conic. Thus the pencil can be chosen as \(y=ax+k\). A line from the pencil is tangent to the conic \(c_{r}\) with
\[r=\frac{a^{2}-k^{2}+1}{2k}.\]
The points, where the lines \(l_{r}\) touch \(c_{r}\), are
\[x=-\frac{2ak}{k^{2}+a^{2}+1},\quad y=\frac{k(k^{2}-a^{2}+1)}{k^{2}+a^{2}+1}.\]
Excluding \(k\), we get the cubic
\[a^{2}x^{3}-2ax^{2}y+xy^{2}+(1-a^{2})x+2ay=0.\]
The cubic splits into a line and a conic in 2 cases:
1) for \(a=0\) the cubic equation factors as \(x(y^{2}+1)=0\) and the conic is non-smooth
2) for \(a=\pm i\) the cubic equation factors as \((x\pm iy)(x^{2}\pm ixy-2)=0\), and the conic is not real.
\(\bullet\)_Polar conic plane tangent to Darboux quadric, parabolic pencil_.
For non-planar polar curves, the points, where the polar line and polar conic plane touch the Darboux quadric, are different. Thus we can normalize the polar conic plane to \(z=-1\) and the polar line to \(x=0\), \(z=1\). This configuration is preserved by \(B_{z}\). The conics \(c_{r}\) have equations
\[\frac{r}{4}x^{2}+y=\frac{1}{r}.\]
If the polar conic coincides with one of \(c_{r}\) then the web curvature
\[K_{B}=-\frac{4096r^{5}xy^{2}(ry+2x^{2}+2y^{2})(ry-x^{2}-y^{2})^{4}}{(x^{2}-y^{2 })^{4}(r^{2}-4x^{2})^{6}}\]
vanishes only for \(r=0\), the conic \(c_{r}\) being non-smooth for this value.
A line from the pencil \(y=y_{0}+k(x-x_{0})\) is tangent to the conic \(c_{r}\) with
\[r=\frac{k^{2}+1}{y_{0}-x_{0}k}.\]
The points, where the lines \(l_{r}\) touch \(c_{r}\), are
\[x=\frac{2k(x_{0}k-y_{0})}{k^{2}+1},\]
\[y=\frac{x_{0}k^{3}-y_{0}k^{2}-x_{0}k+y_{0}}{k^{2}+1}.\]
Excluding \(k\), we get the cubic
\[x^{3}+xy^{2}-2x_{0}x^{2}-2x_{0}y^{2}+(x_{0}^{2}-y_{0}^{2})x+2x_{0}y_{0}y=0.\]
This cubic splits into a conic and a line in 2 cases:
1) for \(y_{0}=\alpha x_{0}\), where \(\alpha^{2}\pm 2i\alpha-1=0\), the cubic equation factors as
\[(x\pm iy)(\pm ixy-x^{2}+2x_{0}x\mp 2ix_{0}y-2x_{0}^{2})=0,\]
2) for \(x_{0}=0\) the cubic equation factors as \(x(x^{2}+y^{2}-y_{0}^{2})=0\).
In case 1) the conic is not real; case 2) gives Type 15 after rescaling by \(B_{z}\).
For infinite vertexes \(p_{0}\), the pencil \(y=k\) gives a non-smooth conic. Thus the pencil can be chosen as \(x=ay+k\). A line from the pencil is tangent to the conic \(c_{r}\) with
\[r=-\frac{a^{2}+1}{ak}.\]
The points, where the lines \(l_{r}\) touch \(c_{r}\), are
\[x=\frac{2k}{a^{2}+1},\quad y=\frac{k(1-a^{2})}{a(1+a^{2})}.\]
Excluding \(k\), we get the line \((a^{2}-1)x+2y=0\).
Thus we have considered all the cases with a non-planar polar curve, and the theorem is proved.
## 6 Hexagonal circular 3-webs with polar curves of degree three
All the known hexagonal circular 3-webs are "algebraic" in the sense that their polar curves are algebraic, namely, the irreducible components of polar curves are cubics, conics, and straight lines.
**Theorem 10**: _There is no hexagonal circular 3-web whose polar curve is a rational normal curve._
_Proof:_ Suppose there is such a web with rational normal curve \(N\) as the polar curve. Consider a family of (possibly complex) bisecant lines \(L\) of \(N\) that are tangent to the Darboux quadric. The points of tangency form a curve \(C\). As the curve \(N\), being of degree three, cannot have trisecant lines, the curve \(C\) must be a circle whose polar point \(p_{0}\) lies on \(N\) by Lemma 1. Consider the projection \(\pi\) from \(p_{0}\) to the plane of the circle \(C\). The image \(\pi(N)\) of \(N\) is a conic, and the image \(\pi(C)\) of \(C\) is the circle \(C\) itself. Any bisecant line \(L\) of the considered family meets the circle \(C\) at a point \(p_{L}\) and the curve \(N\) at two points \(p_{1},p_{2}\). Then the image \(\pi(L)\) touches \(C\) at \(\pi(p_{L})=p_{L}\) and intersects \(\pi(N)\) at \(\pi(p_{1}),\pi(p_{2})\). Consider one of the four (possibly complex) points \(q_{i}\) of intersection of the conics \(\pi(N)\) and \(C\). The curves \(\pi(N)\) and \(C\) cannot be tangent at \(q_{i}\) since for \(q_{i}=p_{L}\) we have \(\pi(p_{1})=\pi(p_{2})=p_{L}\) but \(p_{1},p_{2},p_{0}\) are not collinear. If \(p_{L}\) does not coincide with any of \(p_{1},p_{2}\) then the 4 points \(\pi^{-1}(q_{i}),p_{1},p_{2},p_{0}\) on \(N\) are coplanar but \(N\) is of degree 3. Therefore one of the points \(p_{1},p_{2}\) coincides with \(p_{L}\) and the plane of \(C\) intersects the curve \(N\) of degree three at 4 points \(q_{i}\). This contradiction finishes the proof. \(\square\)
The theorem proved above gives a classification of a natural class of the webs under study.
**Corollary 11**: _Suppose that the polar curve of a hexagonal circular 3-web is algebraic of degree three. Then the polar curve is either planar or the web is Mobius equivalent to one described by Theorems 6 and 9._
## 7 Hexagonal circular 3-webs with Mobius symmetry
The Mobius group in \(\mathbb{RP}^{3}\) can be realized as \(PGL_{2}(\mathbb{C})\), or equivalently, as the group of fractional-linear transformations of \(z=x+iy\), where \((x,y)\) are cartesian coordinates in the plane. A generator of any 1-dimensional subalgebra can be brought to the Jordan normal form by adjoint action. The generator can be chosen either as
\[\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right),\quad\mbox{or}\quad\left(\begin{array}{cc}\lambda&0\\ 0&-\lambda\end{array}\right),\]
where \(\lambda=\alpha+i\beta\) is some complex number with \(Re(\lambda)=\alpha\), \(Im(\lambda)=\beta\).
One way to obtain symmetric hexagonal 3-webs is provided by the Wunderlich construction (see [W-38] and Introduction).
Another easy way to produce hexagonal circular 3-webs is to choose two orbits of polar points such that one is a conic and the other is a coplanar straight line, these two orbits forming a polar curve. This construction may degenerate if there are 3 orbits which are coplanar straight lines.
For translations \((x,y)\mapsto(x,y+u)\), the orbit of a polar point for a circle \((x-a)^{2}+y^{2}=r\), parametrized by \(u\) as follows \([-2a:-2u:a^{2}+u^{2}-r-1:-a^{2}-u^{2}+r-1]\), is a conic in the plane \(-X+a(Z+U)=0\). For a nonvertical line \(y=ax\), the orbit is a line \(Z+U=X+aY=0\), parametrized by \(u\) via \([a:-1:-u:u]\). Thus we immediately get the following hexagonal 3-webs symmetric by translations.
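As a sanity check, the incidences just stated are easy to verify symbolically; the snippet below (an added illustration, not part of the paper) confirms that the two orbits lie in the indicated planes.

```python
# Illustrative SymPy check of the planes containing the translation orbits.
import sympy as sp

u, a, r = sp.symbols('u a r')

# orbit of the polar point of the circle (x-a)^2 + y^2 = r
X, Y, Z, U = -2*a, -2*u, a**2 + u**2 - r - 1, -(a**2 + u**2 - r) - 1
print(sp.simplify(-X + a*(Z + U)))                 # 0: lies in -X + a(Z+U) = 0

# orbit of the polar point of the line y = a x
X, Y, Z, U = a, -1, -u, u
print(sp.simplify(Z + U), sp.simplify(X + a*Y))    # 0 0: lies in Z+U = X+aY = 0
```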
**T1. 3 families of parallel lines.**
Mobius orbits of such webs form a two-parametric family. The polar curve splits into 3 coplanar lines. Observe that any web of this family has a 3-dimensional symmetry group.
**T2. Polar curve splits into conic and coplanar line.**
There is only one Mobius class of such webs, any representative is formed by horizontal lines \(y=const\) and by the orbit of a circle which can be chosen as the unitary one centered at the origin.
**T3. Wunderlich's type.**
There are several types, depending on the position of generating curves.
1. Nondegenerate type. Webs are formed by vertical lines and by two different orbits of circles. The family of orbits is two-parametric: by translation and rescaling we can fix one orbit.
2. Coinciding circle orbits. There is only one Mobius class of this type. It was already obtained as Type 9 in the classification of Theorem 9.
3. Generated by a circle and a line. The family of orbits is one-parametric: by translation and rescaling we can fix the generating circle.
For dilatations \((x,y)\mapsto(ux,uy)\), the orbit of a polar point for a circle \((x-a)^{2}+(y-b)^{2}=r\), parametrized by \(u\) via \([-2au:-2bu:a^{2}+b^{2}-r-u^{2}:-(a^{2}+b^{2}-r)-u^{2}]\), is a conic in the plane \(bX-aY=0\) if \((a,b)\neq(0,0)\) and \(a^{2}+b^{2}\neq r\), the hyperbolic line \(X=Y=0\) if \((a,b)=(0,0)\), and the parabolic line \(bX-aY=U-Z=0\) if \(a^{2}+b^{2}=r\). For a line \(ax+by+c=0\) with \(c\neq 0\), the orbit is the parabolic line \(bX-aU=Z+U=0\), parametrized by \(u\) via \([au:bu:c:-c]\). The lines with \(c=0\) are invariant. Invoking the classification of the webs with 3 pencils, we list the following hexagonal 3-webs symmetric by dilatations.
**D1=T1. 3 families of parallel lines.**
**D2. 2 coplanar parabolic lines and hyperbolic line intersecting them.**
Family of parallel lines, the parabolic pencil with circles tangent to a line of the family through the origin, and the family of concentric circles with the center at the origin. There is only one Mobius class of such webs.
**D3. 2 dual parabolic lines and hyperbolic line through their common point.**
2 orthogonal families of parallel lines and the family of concentric circles with the center at the origin. There is only one Mobius class of such webs, we have already presented it on the right of Figure 2.
**D4. Polar curve splits into conic and coplanar hyperbolic line.**
Family of circles obtained by the dilatations from one not passing through the origin and not having its center at the origin and the family of concentric circles with the center at the origin. Mobius orbits of such webs form a one-parameter family.
**D5. Polar curve splits into conic and tangent parabolic line.**
Family of circles obtained by dilatations from one not passing through the origin and not having its center at the origin and the family of parallel lines orthogonal to the orbit of centers of the circles. Mobius orbits of such webs form a one-parameter family.
**D6. Wunderlich's type.**
1. Nondegenerate type, polar curve with 2 conics and elliptic line. Webs are formed by pencil of lines centered at the origin and by two different orbits of circles at general position. The family of Mobius orbits is 3-parametric: one can choose the centers of circles on a fixed circle centered at the origin and normalize by rotations.
2. Polar curve with conic and elliptic line. Webs are formed by pencil of lines centered at the origin and orbit of a circle. The family of Mobius orbits is 1-parametric, it is Type 5 in the classification of Theorem 9.
3. Polar curve with conic, hyperbolic and elliptic line. Webs are formed by pencil of lines centered at the origin, orbit of a circle, and the family of concentric circles with the center at the origin. The family of Mobius orbits is 1-parametric.
4. Polar curve with conic, parabolic and elliptic line. Webs are formed by pencil of lines centered at the origin, orbit of a circle, and a family of parallel lines. The family of Mobius orbits is 2-parametric.
5. Polar curve with hyperbolic line, elliptic line dual to hyperbolic, and parabolic line intersecting them. Webs are formed by the pencil of lines centered at the origin, the family of concentric circles with the center at the origin, family of parallel lines. There is only one Mobius type. This web is shown in the center of Figure 2.
6. Polar curve with 2 parabolic lines touching Darboux quadric at the same point and coplanar elliptic line. Webs are formed by the pencil of lines centered at the origin and 2 families of parallel lines. The family of Mobius orbits is 1-parametric.
7. Polar curve with 2 parabolic lines and elliptic line whose dual joins the touching points of parabolic lines with Darboux quadric. Webs are formed by the pencil of lines centered at the origin, a family of parallel lines and a parabolic pencil with the vertex at the origin. The family of Mobius orbits is 1-parametric. A representative of this web is shown in Figure 4.
For rotations \((x,y)\mapsto(x\cos(t)+y\sin(t),-x\sin(t)+y\cos(t))\), the orbit of a polar point for a circle \((x-a)^{2}+y^{2}=r\), parameterized by \(t\) via \([-2a\cos(t):-2a\sin(t):a^{2}-r-1:-a^{2}+r-1]\), is a circle in the plane \((a^{2}-r+1)Z+(a^{2}-r-1)U=0\) if \(a\neq 0\). For a line \(ax+by+c=0\) with \(c\neq 0\), the orbit of its polar is also a circle \([a\cos(t)-b\sin(t):a\sin(t)+b\cos(t):c:-c]\) in the plane \(Z+U=0\). The circles with \(a=0\) are invariant, the orbit of a polar point of a line with \(c=0\) is an elliptic line. Thus one can easily list the following hexagonal 3-webs symmetric by rotations.
**R1. Polar curve is a circle symmetric by rotations around \(z\)-axis and coplanar elliptic line.**
The family of Mobius orbits is 1-parametric.
**R2. Wunderlich's type.**
One foliation is formed by orbits of points by rotations and the other two are images of two circles by rotation; these circles can coincide or "degenerate" into straight lines. Examples of such webs with coinciding generators are shown in Figure 5 (center), Figure 6 (left), and Figure 9 (left). The reader will easily describe the different types and compute the corresponding Mobius orbit dimensions of the webs.
**Theorem 12**: _Hexagonal circular 3-web with 1-dimensional Mobius symmetry is Mobius equivalent to one of the above described T-,D-, or R-types._
_Proof:_ A circular 3-web, symmetric by a given 1-parametric subgroup of the Mobius group and not obtained by the Wunderlich construction, is fixed by a choice of three curves, each being either a circle or a straight line, one from each foliation. Two circles can coincide: an orbit of circle still gives a (singular) 2-web. Moreover, one can move around these curves by the stabilizer of the infinitesimal generator of the subgroup.
\(\bullet\)_Webs symmetric by translations \(\partial_{y}\)._
Consider a point such that none of the leaf tangents is parallel to the field \(\partial_{y}\). If a generating curve \(C_{1}\) of some foliation is a circle then there are 2 circles from the orbit of \(C_{1}\) passing through this point. Thus there are two locally defined direction fields \(\partial_{x}+P_{\pm}\partial_{y}\) tangent to these 2 circles. Globally they are not separable: one direction swaps for the other upon running along \(C_{1}\).
Consider one of the foliations and the corresponding direction field \(\partial_{x}+P\partial_{y}\). Since the foliation is symmetric, the slope \(P\) does not depend on \(y\). Since all the leaves are circles of the same curvature, the foliation has a first integral
\[\frac{(P^{\prime})^{2}}{(P^{2}+1)^{3}},\quad\mbox{where}\quad P^{\prime}= \frac{dP}{dx}.\]
The differential equation
\[\frac{(P^{\prime})^{2}}{(P^{2}+1)^{3}}=A^{2}=const\]
has general solution
\[P(x)=\frac{A(x-x_{0})}{\sqrt{1-A^{2}(x-x_{0})^{2}}},\]
the zero value of \(A\) corresponding to the foliation by straight parallel lines. Let the slopes of the other two web foliations be \(Q\) and \(R\), and the connection form be \(\gamma=\alpha(x)dx+\beta(x)dy\). Hexagonality condition \(d\gamma=0\) implies \(\beta(x)=const\), which amounts to
\[k=\frac{P^{\prime}}{(P-Q)(P-R)}+\frac{Q^{\prime}}{(Q-P)(Q-R)}+\frac{R^{\prime }}{(R-P)(R-Q)}=const. \tag{13}\]
If all foliations are formed by circles and not by straight lines then the above identity is not possible. In fact, each of the slopes \(P,Q,R\), considered as function of complex \(x\), has two ramification points, for example, \(P\) has singularities at \(x_{\pm}=x_{0}\pm 1/A\). If at least one of the 6 ramification points is not coinciding with one of the others, then, supposing it be \(x_{+}\) of \(P\) and expanding the expression (13) for \(x=x_{+}+t^{2}\) by \(t\) at \(t=0\), one sees that it has a simple pole and therefore cannot be constant. Therefore each ramification point of any slope coincides with a ramification point of another slope.
The ramification points correspond to the group orbits (lines \(x=const\)) tangent to the foliation circles. If only two circles are tangent along some orbit \(x=const\) then, applying Lemma 1, we conclude that either this orbit belongs to the third foliation and the web is of Wunderlich's type or the two generating circles coincide. In the latter case the ramification points of the third foliation again must coincide with the common ramification points of the first two. This means that all orbits of generating circles coincide and we do not have a 3-web.
Therefore either all leaves of all foliations are straight lines or one of the foliations, say the one corresponding to \(R\), is formed by straight lines with slope \(R=const\) and the other two are formed by orbits of the same circle and \(Q=-P\). Then it is immediate that \(k=0\) and \(R=0\). We get the type \(T2\). The webs of Wunderlich's type, which are hexagonal, are excluded from consideration by the coordinate choice.
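Two of the computations used in this argument are easy to confirm with SymPy; the sketch below (an added illustration) checks that the stated general solution indeed satisfies \((P^{\prime})^{2}/(P^{2}+1)^{3}=A^{2}\), and that the choice \(Q=-P\), \(R=0\) of type T2 makes the expression (13) vanish identically.

```python
# Illustrative SymPy checks for the translation-symmetric case.
import sympy as sp

x, x0, A = sp.symbols('x x_0 A')

# the general solution of (P')^2/(P^2+1)^3 = A^2 quoted above
P_sol = A*(x - x0)/sp.sqrt(1 - A**2*(x - x0)**2)
print(sp.simplify(sp.diff(P_sol, x)**2/(P_sol**2 + 1)**3 - A**2))   # 0

# the expression (13) for Q = -P, R = 0 (type T2)
P = sp.Function('P')(x)
Q, R = -P, sp.Integer(0)
k = (sp.diff(P, x)/((P - Q)*(P - R))
     + sp.diff(Q, x)/((Q - P)*(Q - R))
     + sp.diff(R, x)/((R - P)*(R - Q)))
print(sp.simplify(k))                                               # 0
```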
\(\bullet\)_Webs symmetric by dilatations \(x\partial_{x}+y\partial_{y}\)_.
Stabilizer of the 1-dimensional algebra spanned by \(x\partial_{x}+y\partial_{y}\) is generated by rotation \(y\partial_{x}-x\partial_{y}\) and dilatation \(x\partial_{x}+y\partial_{y}\). The orbit of the polar of the generating curve is either a conic, or a hyperbolic line, or a parabolic line. Parabolic lines touch the Darboux quadric at one of the stationary points of the dilatation.
We use modified polar coordinates rectifying the symmetry
\[u=\arctan\left(\frac{y}{x}\right),\quad v=\frac{1}{2}\ln(x^{2}+y^{2}). \tag{14}\]
A curve \(x\mapsto(x,y(x))\) is a circle if and only if
\[y^{\prime\prime\prime}=\frac{3y^{\prime}(y^{\prime\prime})^{2}}{1+(y^{\prime} )^{2}},\]
therefore integral curves of a symmetric vector field \(\partial_{u}+P(u)\partial_{v}\) are circles if and only if
\[P^{\prime\prime}=\frac{3P}{P^{2}+1}(P^{\prime})^{2}-P(P^{2}+1). \tag{15}\]
The integral curves are straight lines if and only if \(y^{\prime\prime}=0\), which is equivalent to \(P^{\prime}=P^{2}+1\). Hence (apply \(z\mapsto\frac{1}{z}\)) the integral curves are circles of a parabolic pencil with the vertex at the origin if and only if \(P^{\prime}=-(P^{2}+1)\) thus giving \(P(u)=\tan(u-u_{0})\) and \(P(u)=-\tan(u-u_{0})\) respectively. The second order equation (15) has a first integral
\[\frac{(P^{\prime})^{2}}{(P^{2}+1)^{3}}-\frac{1}{P^{2}+1}=A=const,\]
allowing also to integrate it
\[P(u)=\frac{\sqrt{A+1}\tan(u-u_{0})}{\sqrt{1-A\tan^{2}(u-u_{0})}}. \tag{16}\]
To the hyperbolic pencil with the circles centered at the origin corresponds the solution \(P\equiv 0\).
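The first integral quoted above can also be verified symbolically. In the sketch below (illustrative; the symbol \(T\) stands for \(\tan(u-u_{0})\), so that \(dT/du=1+T^{2}\)) one checks that (16) satisfies \((P^{\prime})^{2}/(P^{2}+1)^{3}-1/(P^{2}+1)=A\).

```python
# Illustrative SymPy check of the first integral for the dilatation case.
import sympy as sp

A, T = sp.symbols('A T')          # T stands for tan(u - u_0)

P  = sp.sqrt(A + 1)*T/sp.sqrt(1 - A*T**2)      # the solution (16)
dP = sp.diff(P, T)*(1 + T**2)                  # chain rule, since dT/du = 1 + T**2

print(sp.simplify(dP**2/(P**2 + 1)**3 - 1/(P**2 + 1) - A))   # 0
```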
For hexagonal webs the connection form is \(\gamma=\alpha(u)du+\beta(u)dv\). Hexagonality condition \(d\gamma=0\) implies \(\beta(u)=const\), giving again (13) (where the slopes of the other foliations are \(Q\) and \(R\)). Observe that the choice of local coordinates \(u,v\) excludes webs of Wunderlich's type, and we have to show that only the types D1, D2, D3, D4, D5 are hexagonal. This can be done as follows. The types D1, D2, D3 are webs with three pencils, the case being settled earlier. Therefore we study the webs whose polar curve includes at least one conic.
The approach used for translation works also in this case if we pass to complex webs: the singular points of solutions \(P,Q,R\) become complex if the corresponding generating circle on the Darboux quadric separates the stable points \((0,0,\pm 1)\) of the dilatation. Considering behavior of the expression \(k\) at singular points of \(P,Q,R\) we see that necessarily a singular point of slope for one foliation must coincide with a singular point for another. Singular points are the points (possibly complex) where the group orbit touches the generating circle, or lies either on a line of one foliation or on the common tangent to circles of parabolic pencil with the vertex at \((0,0)\).
Applying Lemma 1 as in the case of translation, we infer that, for webs of non-Wunderlich's type, having two foliations with conic polar curves and coinciding singular points, these polar curves must coincide and the third polar must be a line.
Consider the points where the circles from the common orbit are tangent. Lemma 1 implies that for non-Wunderlich's type the leaves of the third foliations must be also tangent to the circles and we get either type D4 or D5.
Finally, if two components of the polar curve are lines and the third is a conic then the lines must be parabolic for non-Wunderlich's type. In fact, considering the points where circles of the hyperbolic pencil are tangent to circles corresponding to conic, we conclude by Lemma 1 that the third foliation lines are also tangent to the circles at these points but then the singular points of the "conic" solution to (15) can not be compensated.
Thus the polar lines are parabolic. The expression \(k\) cannot be constant if the poles of \(Q,R\), corresponding to these lines, do not coincide with singular points of \(P\), which represent a conic in the polar curve. Therefore the generating circle of this "conic" solution does not separate the stationary point of the dilatation and the singular points are real. Then by Lemma 1 the web cannot be hexagonal.
\(\bullet\)_Webs symmetric by rotations \(y\partial_{x}-x\partial_{y}\)_.
Stabilizer of the 1-dimensional subalgebra spanned by \(y\partial_{x}-x\partial_{y}\) is generated by rotations \(y\partial_{x}-x\partial_{y}\) and dilatations \(x\partial_{x}+y\partial_{y}\). The polar orbit for a generating curve is either a circle or an elliptic line.
We again use polar coordinates (14) to describe symmetric vector fields \(\partial_{v}+P(v)\partial_{u}\). The integral curves of such vector field are circles (in coordinates \(x,y\), of course) if and only if
\[P^{\prime\prime}=\frac{3P}{P^{2}+1}(P^{\prime})^{2}+P(P^{2}+1). \tag{17}\]
The integral curves are straight lines if and only if \(P^{\prime}=-P(P^{2}+1)\). The integral curves are circles passing through the origin if and only if \(P^{\prime}=P(P^{2}+1)\). Solutions of these two equations are \(P(v)=\frac{1}{\sqrt{e^{2(v-v_{0})}-1}}\) and \(P(v)=\frac{1}{\sqrt{e^{-2(v-v_{0})}-1}}\) respectively. The second order equation (17) has a first integral
\[\frac{(P^{\prime})^{2}}{(P^{2}+1)^{3}}+\frac{1}{P^{2}+1}=A^{2}=const,\]
allowing also to integrate it
\[P(v)=\frac{\sqrt{A^{2}-1}\tanh(v-v_{0})}{\sqrt{1-A^{2}\tanh^{2}(v-v_{0})}},\ \ A^{2}\neq 1. \tag{18}\]
The value \(A^{2}=1\) gives the special solutions with generating circles passing through the stationary points of the symmetry, i.e. the elliptic pencil of lines passing through the origin, with \(P\equiv 0\). Analysis of the behavior of \(k\) at singular points and use of Lemma 1, similar to the ones performed above, show that only the type R1 is hexagonal, the Wunderlich types being excluded by the choice of variables. (In fact, multiplying by \(i\) the independent variable of the differential equation (15) reduces it to (17).)
\(\bullet\)_Loxodromic symmetry._
Finally we show that there are no hexagonal 3-webs symmetric by the loxodromic vector field \(y\partial_{x}-x\partial_{y}+\kappa(x\partial_{x}+y\partial_{y})\) for any \(\kappa\neq 0\). Note that the Wunderlich construction does not give circular webs as the symmetry orbits are spirals. We use the following coordinates
\[s=\frac{\kappa}{2}\ln(x^{2}+y^{2})-\arctan\left(\frac{y}{x}\right),\quad t= \kappa\arctan\left(\frac{y}{x}\right)+\frac{1}{2}\ln(x^{2}+y^{2}),\]
the variable \(t\) being invariant by the symmetry. Integral curves of a symmetric vector field \(\partial_{t}+P(t)\partial_{s}\) are circles (in coordinates \(x,y\)) if and only if
\[P^{\prime\prime}=\frac{3P}{P^{2}+1}(P^{\prime})^{2}+\frac{(P^{2}+1)(\kappa P+ 1)(P-\kappa)}{(\kappa^{2}+1)^{2}}. \tag{19}\]
This equation has a first integral
\[\frac{(P^{\prime})^{2}}{(P^{2}+1)^{3}}+\frac{2\kappa P-\kappa^{2}+1}{(\kappa^ {2}+1)(P^{2}+1)}=A=const.\]
This integral does not allow us to integrate (19) in elementary functions but allows us to study the behavior of solutions at singular points. A singular point emerges when a symmetry orbit is tangent to a circle (or a line) of the corresponding foliation.
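That the displayed quantity is indeed a first integral of (19) can be checked by differentiating it along solutions; the following SymPy sketch (an added illustration) substitutes \(P^{\prime\prime}\) from (19) into the \(t\)-derivative and simplifies.

```python
# Illustrative SymPy check that the quantity above is conserved along (19).
import sympy as sp

t, kappa = sp.symbols('t kappa')
P  = sp.Function('P')(t)
P2 = sp.Derivative(P, (t, 2))

I = (sp.diff(P, t)**2/(P**2 + 1)**3
     + (2*kappa*P - kappa**2 + 1)/((kappa**2 + 1)*(P**2 + 1)))
rhs = (3*P*sp.diff(P, t)**2/(P**2 + 1)
       + (P**2 + 1)*(kappa*P + 1)*(P - kappa)/(kappa**2 + 1)**2)   # equation (19)

print(sp.simplify(sp.diff(I, t).subs(P2, rhs)))   # 0
```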
Consider possible singularity types. If we exclude from consideration the stationary points of symmetry then the symmetry vector field touches a generic circle at two points and the tangency is simple. The corresponding solution \(P\) has two singularities if the tangency points belong to different orbits and only one if the points lie on the same orbit. There are circles, for which two tangency points merge to give only one singularity. If the generating curve is a line or a circle through the origin then the solution has only one singularity. Finally, there are two constant solutions, namely \(P=-1/\kappa\) corresponding to the invariant hyperbolic line \(X=Y=0\) and \(P=\kappa\) corresponding to the invariant elliptic line \(Z=U=0\).
If a solution \(P\) has two singularities at \(t_{1},t_{2}\) then \(A\neq 0\) and the singularity is of the same type as for the non-loxodromic cases:
\[P(t)=\frac{1}{\sqrt[4]{4A}\sqrt{t-t_{i}}}+\{analytic\ function\ of\ \sqrt{t-t_{i}}\},\]
If a solution \(P\) is generated by a circle tangent to the symmetry trajectory \(t=t_{0}\) and the tangency is of second order then there is only one singularity of the following type at \(t=t_{0}\):
\[P(t)=c(t-t_{0})^{-\frac{2}{3}}+...,\]
where \(c\neq 0\) and the omitted terms are not essential for our analysis. The condition of double tangency is equivalent to \(A=0\). The corresponding generating circle \((x-a)^{2}+y^{2}=r^{2}\) verifies the relation \(r^{2}=\frac{(\kappa^{2}+1)a^{2}}{\kappa^{2}}.\) For an orbit \(t=t_{0}\) there is at most one such circle.
Let \(P,Q,R\) be solutions giving a hexagonal 3-web. These solutions, as well as the coordinates \(s,t\), are defined only locally but we can prolong them along a symmetry orbit, along a leaf of some of the 3 foliations, or along any curve, as long as we do not meet a singular point of one of \(P,Q,R\). The condition of hexagonality (13) remains satisfied along any such prolongation. Therefore we cannot meet a singularity of only one of \(P,Q,R\): singularities necessarily emerge at least in pairs.
Suppose there is a symmetric hexagonal 3-web. Consider a non-singular point. There are 3 leaves passing through it. Each leaf can be considered as the generating curve of the respective foliation. At least one of the corresponding solutions \(P,Q,R\) has a singular point. Let us run along the respective leaf until we meet a singularity \(t_{1}\) of \(P,Q\) or \(R\). Then at least two of \(P,Q,R\) are singular at \(t_{1}\) and the orbit \(t=t_{1}\) is tangent to at least two web leaves at a singular point \(p_{s}\). Since the symmetry orbit is not a circle, Lemma 1 implies that either exactly two leaves at \(p_{s}\) are tangent and therefore coincide or all three leaves are tangent at \(p_{s}\).
In the former case suppose that the coinciding leaves are \(C_{Q}\) and \(C_{R}\). Then the leaf \(C_{P}\) at \(p_{s}\) is different from \(C_{Q}=C_{R}\) at \(p_{s}\). Let us go along \(C_{P}\) keeping track of \(Q,R\) until one of the two leaves corresponding to \(Q\) and \(R\) touches \(C_{P}\) at some \(\bar{p}_{s}\). Such a point exists unless all the foliations are formed by straight lines. Then by Lemma 1 this leaf coincides with \(C_{P}\) at \(\bar{p}_{s}\), thus all three generating curves of the web coincide and there is no 3-web. If all 3 generating curves are straight lines then \(C_{P}\) is necessarily the line through the origin and \(P=\kappa\). Applying the map \(z\mapsto 1/z\) we transform the lines \(C_{Q}\) and \(C_{R}\) into circles and the above argument applies.
If all three leaves are tangent at \(p_{s}\) then at least one leaf, say \(C_{P}\), is different from any of the other two. Let us go again along \(C_{P}\) keeping track of \(Q,R\) until one of the two leaves corresponding to \(Q\) and \(R\) touches \(C_{P}\). Let it be \(C_{Q}\). Repeating the argument used above, we conclude that such a point \(\bar{p}_{s}\) exists and \(C_{P}\) coincides at \(\bar{p}_{s}\) with \(C_{Q}\). Then \(C_{P}\) coincides with \(C_{Q}\) also at \(p_{s}\). Now either all 3 generating curves \(C_{P},C_{Q},C_{R}\) coincide at \(p_{s}\) and we do not have a 3-web, or \(C_{R}\) is different from \(C_{P}=C_{Q}\) at \(p_{s}\). Now we repeat the trick with prolongation, this time along \(C_{R}\), and conclude that \(C_{P}=C_{Q}=C_{R}\) at \(p_{s}\). Thus all three generating curves coincide and we can get at most a 2-web.
## 8 Concluding remarks
### Circular hexagonal 3-webs on surfaces
Pottmann, Shi, and Skopenkov [PSS-12] classified circular hexagonal 3-webs on nontrivial Darboux cyclides: such surfaces carry up to 6 one-parameter families of circles, and 3 families can be picked in 5 different ways to form a hexagonal web.
### Erdogan's approach to Theorem 6
The first attempt to prove Theorem 6 appeared in [E-89]. The idea was to choose a Mobius normalization sending one of the vertexes to infinity, to set \(y=0\), and to obtain a "sufficient number of equations" to fix the pencil configurations. The author claimed that the curvature equation for \(y=0\) is a polynomial one of degree 6 in \(x\), though the calculation itself was not presented. Nowadays, armed with a powerful computer (32GB of RAM is enough) and a symbolic computation system like Maple, one can perform these computations and check that the degree may be much higher (in fact, up to 18) for some choices of polar line types.
Anyway, brute computer force does work: with the above mentioned equipment the author of this paper managed to derive the classification results. The treatment has the following steps:
1. Choosing an initial Mobius normalization.
2. Computing the curvature.
3. Isolating and factoring the highest homogeneous part of the curvature equation.
4. Mobius renormalization adjusted to the geometric information obtained in the previous step and repeating from the step 2 until one makes the curvature vanish.
### Boundaries of regular domain for hexagonal 3-webs
To avoid heavy computation of the curvature in proving Theorem 6, the author of [S-05] suggested to use the structure of web singular set (see Lemma 1). The presented proof was not correct. The author argued that the web equation \(u_{3}=F(u_{1},u_{2})\), relating first integrals \(u_{i}\) of the web foliations \({\cal F}_{i}\), may be rewritten as \(u_{3}=f(\alpha(u_{1})+\beta(u_{2}))\) and used this form on the curve of singular points \(\Gamma_{1}\). This argument is definitely wrong as the functions \(\alpha,\beta\) typically have singularities on \(\Gamma_{1}\): a simple counterexample is the web equation \(u_{3}=u_{1}u_{2}\).
### Hexagonal 3-subwebs
Consider the autodual tetrahedron with vertexes at \([1:0:0:0]\), \([0:1:0:0]\), \([0:0:1:0]\) and \([0:0:0:1]\). Lines joining the vertexes of this tetrahedron give a 6-web \(A_{6}\) with 6 pencils of circles. Any 3-subweb of this 6-web is hexagonal: any 3 lines are either coplanar or give a polar curve of a hexagonal 3-web. By direct computation (better use a computer!) one checks the following claim.
**Proposition 13**: _The rank of the 6-web \(A_{6}\) is maximal, i.e. is equal to 10._
Another remarkable feature of this autodual 6-web is that the infinitesimal operators, corresponding to the pencils in the sense of Proposition 3, form the basis \(R_{x},R_{y},R_{z},B_{x},B_{y},B_{z}\) of the Lie algebra of the Mobius group so that the commutator of any two of them is either zero or an operator of the basis.
There is also an autodual 4-web \(A_{4}\) whose polar curve is the union of 4 pencils corresponding to \(R_{z},B_{z},B_{x}-R_{y},R_{x}+B_{y}\). This web has similar properties: any of its 3-subwebs is hexagonal, the commutator of any two of the four operators is either zero or an operator of the set, and its rank is maximal. One finds more examples with hexagonal subwebs among symmetric webs of Wunderlich's type (see Section 7).
### Conjecture
The polar curve of a hexagonal circular 3-web is an algebraic curve such that each its irreducible component is a planar curve of degree at most 3.
## Appendix: curvature equation
\[(A^{2}-4B)(R^{2}+AR+B)[(AR+2B)A_{xx}+A(R^{2}-B)A_{xy}-BR(A+2R)A_{yy}+\]
\[-(A+2R)B_{xx}+(A^{2}-2R^{2}-2B)B_{xy}+R(A^{2}-2B+AR)B_{yy}+\]
\[(4B-A^{2})R_{xx}+A(A^{2}-4B)R_{xy}+B(4B-A^{2})R_{yy}]+\]
\[(A^{2}-4B)^{2}(A+2R)R_{x}^{2}+(A+2R)(A^{2}-4AR-4R^{2}-8B)B_{x}^{2}+\]
\[-(2A^{3}R^{2}+A^{2}R^{3}+7A^{2}BR+4ABR^{2}+4BR^{3}+4AB^{2}-4B^{2}R)A_{x}^{2}+\]
\[(B-R^{2})(2A^{3}R+A^{2}R^{2}+A^{2}B+4BR^{2}+4B^{2})A_{x}A_{y}-A(A^{2}-4B)^{2}(A+2 R)R_{x}R_{y}+\]
\[(2A^{3}R-A^{4}+4A^{2}R^{2}-8AR^{3}-8R^{4}+8A^{2}B-16BR^{2}-8B^{2})B_{x}B_{y}+\]
\[BR(2A^{3}R+7A^{2}R^{2}+4AR^{3}+A^{2}B+4ABR-4BR^{2}+4B^{2})A_{y}^{2}+\]
\[-R(A^{4}-A^{3}R-6A^{2}R^{2}-4AR^{3}-8A^{2}B-8ABR+8B^{2})B_{y}^{2}+\]
\[B(A^{2}-4B)^{2}(A+2R)R_{y}^{2}+(A^{2}-4B)(A^{2}R-AR^{2}-AB-8BR)A_{x}R_{x}+\]
\[(3A^{3}R+13A^{2}R^{2}+8AR^{3}+A^{2}B+12ABR-4BR^{2}+12B^{2})A_{x}B_{x}+\]
\[2(A^{2}-4B)(A^{2}+AR+R^{2}-3B)B_{x}R_{x}+\]
\[(5A^{2}R^{3}-A^{4}R-A^{3}R^{2}+4AR^{4}+A^{2}BR+4ABR^{2}-4BR^{3}-4AB^{2}-4B^{2}R )[A_{x}B_{y}+A_{y}B_{x}]+\]
\[(A^{2}-4B)(A^{2}R^{2}+A^{2}B+2ABR-2BR^{2}-2B^{2})[A_{x}R_{y}+A_{y}R_{x}]+\]
\[-A(A^{2}-4B)(A^{2}+AR+R^{2}-3B)[B_{x}R_{y}+B_{y}R_{x}]+\]
\[B(A^{2}-4B)(A^{2}R-AR^{2}-AB-8BR)A_{y}R_{y}+\]
\[(4B-A^{2})(A^{3}R+A^{2}R^{2}-A^{2}B-6ABR-6BR^{2}+2B^{2})B_{y}R_{y}+\]
\[-R(2A^{4}R+5A^{3}R^{2}+3A^{2}R^{3}-A^{2}BR+4ABR^{2}+4BR^{3}+8AB^{2}+20B^{2}R)A_ {y}B_{y}=0.\]
## Acknowledgements
This research was supported by FAPESP grant # 2022/12813-5.
|
2308.08922 | Relational Quantum Mechanics and Contextuality | This paper discusses several issues around Relational Quantum Mechanics.
First, I discuss possible ontologies underlying the interpretation, before
settling on the hypothesis that RQM follows from contextuality of measurements,
due to quantum measurements changing the system measured. I then examine how
the approach to quantum logic in the consistent histories formalism can be used
to clarify which information about a system can be shared between different
observers. Finally I discuss the similarities and differences between special
relativity and RQM. | Calum J. Robson | 2023-08-17T11:25:35Z | http://arxiv.org/abs/2308.08922v2 | # Relational Quantum Mechanics and Contextuality
###### Abstract
This paper discusses several issues around Relational Quantum Mechanics. First, I discuss possible ontologies underlying the interpretation, before settling on the hypothesis that RQM follows from contextuality of measurements, due to quantum measurements changing the system measured. I then examine how the approach to quantum logic in the consistent histories formalism can be used to clarify which information about a system can be shared between different observers. Finally I discuss the similarities and differences between special relativity and RQM.
## 1 Introduction
This paper aims to clarify some of the claims of the Relational Quantum Mechanics interpretation, and to respond to some recent criticisms. Relational Quantum Mechanics was first introduced by Carlos Rovelli in [42]. Since then, it has been challenged or developed in various ways, with growing interest in the past few years- for example [15][29][11][12][5][39][28][30][6][35]. In particular there is the debate conducted between Rovelli and DiBiago, and Pienaar and Brucker [5][39][12]. In this paper, I want to provide some further clarifications to some of the issues raised in that debate.
There are three main areas of discussion in the literature. The first of these is over the presence or absence of nonlocality in RQM [44][33]. I will not address this issue in this paper, though I hope to engage with it in future work. The second is about the ways in which different observers can or cannot compare measurements of a particular system. Rovelli contrasts 'relative facts', which are only true for individual observers, with 'stable facts', which are true for multiple observers [10]. This distinction has come in for criticism2, and there has also been misunderstanding over what RQM means by the term 'fact' and how it relates to a measurement. The third area is the claimed analogy [42] between Special Relativity and Quantum Mechanics on the grounds that they both are in some sense relational. Rovelli and others have responded to these criticisms - see, for example [12] and [6] - and here I wish to offer some additional responses.
Footnote 2: Most recently in [30]. See also the response by Rovelli et al. in [6].
This paper will have three sections. In the first, I will outline the main features of the RQM interpretation. I will then discuss some of the different ontologies which could be associated with the interpretation before suggesting my own preferred option. In the second, I will look at how the approach to quantum logic developed by Griffiths [21][22] can be used to clarify exactly which facts are stable and which are relative for which observers. I will present the 'quantum reasoning' approach to quantum logic, and explain how it is applied to families of histories of quantum states. I will then show how this mathematical framework can be given a very different physical meaning within the RQM interpretation, being very clear how this differs from their use in the consistent histories interpretation. Finally, I will look at the similarities and differences in relationality between RQM and Special Relativity. This will lead to a classification which distinguishes between a purely Classical theory, a theory which is Relational, and a theory which is Quantum.
## 2 Relational Quantum Mechanics
In this section I will present the main points of the RQM interpretation, drawing on [10]. The interpretation relies on the insight that Quantum measurements are a relation between an observing and an observed system, and that therefore different observers can assign different quantum states to the same system. This statement admits a number of ontological interpretations, as I will discuss toward the end of this section.
Before that, though, we must clarify some things. First, an essential point of RQM is that there is no ontological difference between observed and observing systems. This is in contrast to the standard Physics Textbook presentation which distinguishes between classical (observing) and quantum (observed) systems2. On the contrary, RQM denies that there is such a two level distinction in Being, claiming that all systems are equally describable by Quantum Mechanics. An important corollary of this is that the line between systems can be drawn anywhere- there are a large number of ways to divide the world into systems and subsystems, and any of these divisions is equally well described by quantum mechanics.
Footnote 2: This is especially the case for notions of Complementarity, which are based on there being two types of reality, classical and quantum, with properties in the quantum reality corresponding to properties in the classical reality, but not being identical to them in any straightforward way.
Second, and tied to this, though the language of 'observer' and 'observing' is used, RQM does not require that observers are in any sense conscious. Any interaction between systems is a measurement within RQM. When we divide systems into 'observer' and 'observed' we are implicitly making use of the assumption within RQM that a system cannot measure itself- therefore any interaction must describe changes within one system from the point of view of a second system. Rovelli [12] has claimed this is a consequence of certain no-go theorems (e.g. [16]); however, the applicability of these theorems to the issue of self-measurement has been challenged [35][30] and so for this paper I shall take the following statement:
1. A system cannot measure itself, and therefore cannot ascribe a quantum state to itself
to be an axiom of RQM.
### Outline of the Interpretation
Relational Quantum Mechanics belongs to a class of theories sometimes called 'Copenhagenish' [31][39]. These theories satisfy four principles, which I am here quoting verbatim from Pienaar [39].
" 1. **Measurement Outcomes for a given observer are unique**- i.e contrary to many-worlds interpretations, there do not exist multiple copies of the same observer that observe different outcomes
2. **The Quantum State is of a broadly epistemic character**, i.e. it represents, 'information', 'knowledge' or, 'beliefs'
3. **Quantum Theory is a universally applicable theory**, i.e. it can be consistently applied to arbitrary scales, systems and parameter regions
4. **Quantum Theory is a Complete Theory**, i.e. does not require completion by the addition of supplemental or, 'hidden' variables. "
RQM differs from most other Copenhagenist interpretations due to its principle of relative facts- the idea that facts about the world are true and false, not absolutely, but relative to particular observers. Other Copenhagenist interpretations which involve relative facts are the QBism and Consistent Histories interpretations, though in different ways. We shall discuss both these theories in more detail below.
The most recent account of RQM is [10], which I shall mainly draw on here. The clarifications in Rovelli and DiBiago's reply [12] to Pienaar [28] and Brucker [5] are also useful. To introduce the RQM interpretation, I shall quote at length from the article [10] since it provides the most up to date description of the interpretation from its inventor:
RQM interprets QM as a theory about physical events or facts. The theory provides transition amplitudes of the form \(W(b,a)\) that determine the probability \(P(b,a)=|W(b,a)|^{2}\) for a fact (or collection of facts) \(b\) to occur, given that a fact (or collection of facts) \(a\) has occurred... The insight of RQM is that the transition amplitudes \(W(b,a)\) must be interpreted as determining physical facts only if the physical facts \(a\) and \(b\) are relative to the same system.
I shall discuss the definition of, 'fact' shortly. Rovelli distinguishes between Classical and Quantum Mechanics as follows:
Classical mechanics can equally be interpreted as a theory about physical facts, described by values of physical variables (points in phase space). But there are three differences between quantum facts and the corresponding facts of classical mechanics. First, their dynamical evolution laws are genuinely probabilistic. Second, the spectrum of possible facts is limited by quantum discreteness (for instance, energy or spin can only have certain values). Thirdly, facts are sparse and relative.
Rovelli goes on to clarify the meaning of these two last terms
Facts are sparse: they are realised only at the interactions between any two physical systems...
Facts are relative to the systems that interact. That is, they are labelled by the interacting systems. This is the core idea of RQM3.
Footnote 3: This idea of a 'Relative State' is taken by Rovelli from the Everettian interpretation of Quantum Mechanics, though given a different physical meaning
### Facts and Wigner's Friend
So RQM is about facts which come into existence in an interaction, but are relative to the pair of interacting systems, and cannot be taken to be true outside this context or automatically compared with other facts relative to other systems4. All of this rests upon the concept of a fact, and here I think the account in [10] is incomplete. In practice, facts do not only involve actual physical interactions. Rovelli's statement that RQM involves the probabilities \(P(b,a)\) for a fact \(b\) to occur given a fact \(a\) implies that hypothetical or potential interactions are facts, as well as actual interactions. Indeed, there seem to me to be three sorts of facts:
1. Facts obtained as the result of an actual measurement interaction
2. Facts considered as potential results of measurement interactions
3. Assumptions about systems based upon knowledge not derived directly from interaction
An example of the third category would be assuming a particle has either spin up or spin down, because it comes from a device which we know produces such particles. In this example, we would describe it as \(\frac{1}{\sqrt{2}}\left|\uparrow\right\rangle+\frac{1}{\sqrt{2}}\left|\downarrow\right\rangle\).
We can analyse the different types of facts using the Wigner's Friend thought experiment. This involves Wigner's friend measuring the state of a particle, followed by Wigner measuring the state of the combined Friend-particle system.
Initially, the friend begins with the particle in a completely unknown state. However, they intend to measure the spin of the particle, and so their background knowledge of physics leads them to believe that there is a 50% chance they will measure spin up or spin down (a fact of type 3), and so they ascribe the particle the state \(\frac{1}{\sqrt{2}}\left|\uparrow\right\rangle+\frac{1}{\sqrt{2}}\left| \downarrow\right\rangle\), giving the probabilities of the potential measurement (this is a fact of type 2)5. The friend then performs the measurement, and finds that the particle is in state \(\left|\uparrow\right\rangle\). This is now a fact of type 1.
Footnote 5: In the Consistent Histories formalism, these potential states are called 'pre-probabilities'
At this point Wigner turns up. He knows his friend is scheduled to perform this experiment today, and so assigns his friend and the particle the combined state
\[\frac{1}{\sqrt{2}}\left|A\right\rangle\otimes\left|\uparrow\right\rangle+\frac {1}{\sqrt{2}}\left|B\right\rangle\otimes\left|\downarrow\right\rangle \tag{1}\]
Where \(\left|A\right\rangle\) and \(\left|B\right\rangle\) represent the friend measuring spin up and spin down respectively. This is again a mixture of type 2 and type 3 facts.
Finally, Wigner performs a measurement on the friend-particle entangled system, and finds either the state \(\left|A\right\rangle\otimes\left|\uparrow\right\rangle\) or the state \(\left|B\right\rangle\otimes\left|\downarrow\right\rangle\). This is a fact of type 1.
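For readers who want to see the bookkeeping explicitly, here is a small NumPy illustration (my own, not drawn from the RQM literature) of the state (1) that Wigner assigns and of the probabilities he attaches to his two possible type-1 facts; the basis ordering is an arbitrary choice made for the example.

```python
# Toy NumPy illustration of Wigner's description, equation (1).
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # particle states
A,  B    = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # friend "saw up"/"saw down"

psi = (np.kron(A, up) + np.kron(B, down)) / np.sqrt(2)  # the entangled state (1)

# probabilities Wigner assigns to his two possible measurement outcomes
p_up   = abs(np.kron(A, up)   @ psi)**2
p_down = abs(np.kron(B, down) @ psi)**2
print(p_up, p_down)   # 0.5 0.5
```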
This thought experiment is usually considered to be a paradox. This is due to the fact that
* The Friend assigned the particle a definite state after their measurement- either \(\left|\uparrow\right\rangle\) or \(\left|\downarrow\right\rangle\) whereas Wigner considers it to be in a superposition via the entangled state in equation (1).
* Wigner assigns his friend the superposition in equation (1) but the friend is presumably already in a definite state, having either observed \(\left|\uparrow\right\rangle\) or \(\left|\downarrow\right\rangle\). Furthermore, if we assume that the quantum state has some physical meaning, then what does it feel like to be in a superposition? What is the phenomenology of Wigner's friend whilst Wigner measures them?
How does RQM handle this supposed paradox? Because facts are relative to observers, RQM sees no problem at all with Wigner and his Friend having different descriptions of the same state. Both Wigner and the Friend assign states based on the information they have about the world (through their history of interactions with other systems), and there is no reason for that information to be the same.
This implies that the description of a system by an observer as being in a quantum state does not necessarily imply anything physical about the system. Wigner assigning the Friend the state (1) does not mean that the Friend is really in some kind of physical superposition state- it just means that Wigner knows that the properties of his Friend and the particle are entangled, and that he will find these properties with certain probabilities.
As a final point, we might worry about (for example) the Friend measuring \(\left|\downarrow\right\rangle\), but Wigner measuring \(\left|A\right\rangle\otimes\left|\uparrow\right\rangle\). A full analysis of this situation is given in [42], the conclusion of which is that comparing two systems requires a third observer, and this observer will always make consistent measurements of the two systems (either both measuring up or both measuring down). In the absence of a third observer, it is meaningless to compare Wigner's description with the Friend's description, as all facts are relative to an observer.
### The Measurement Problem
RQM addresses the measurement problem in a similar way. When the friend assigns the particle the state \(\frac{1}{\sqrt{2}}\left|\uparrow\right\rangle+\frac{1}{\sqrt{2}}\left| \downarrow\right\rangle\) before the measurement, and the state \(\left|\uparrow\right\rangle\) after the measurement, this does not mean that there has been a wavefunction collapse. It just means that the friend had information that they would measure \(\left|\uparrow\right\rangle\) or \(\left|\downarrow\right\rangle\), each with probability \(1/2\); and after the measurement they had information that it was in the definite state \(\left|\uparrow\right\rangle\).
In RQM, it [the wavefunction] is a bookkeeping of known facts, and a tool for predicting the probability of unknown facts, on the basis of the available knowledge
That said, this does not mean that there is no physical meaning to such a change. If we know that the particle has a random polarization before the measurement, and assign it the wavefunction \(\frac{1}{\sqrt{2}}\left|\uparrow\right\rangle+\frac{1}{\sqrt{2}}\left| \downarrow\right\rangle\) because we are passing it through a vertical polarizer, then there is a definite (and stochastic) change to the particle via passing through the polarizer, which leads to it having the eigenstate \(\left|\uparrow\right\rangle\). So whilst RQM interprets the wavefunction as being a measure of the information which one system has about the other, the fact that it deals with physical interactions means that the change in information a system A has about a system B is usually due to some kind of physical measurement. This means that the wavefunction that A assigns to B changes after this measurement. However this is not because the wavefunction directly represents a physical entity changed in the interaction, but because it represents the information that A has about B, which changes when they interact 6
This can lead to confusion. The misunderstanding that RQM always assigns direct physical meaning to states is behind the arguments against RQM found in Brucker [5] and Pienaar [28], and answered in Rovelli and DiBiago [12]. Indeed, that paper is even titled, 'Quantum Mechanics is about Facts, not States'. With this in mind, it is best to interpret the Wavefunction of a system in RQM relative to an observer as describing the information that observer has about the system, and not automatically as meaning the system is physically in that state 7. For a discussion of some of the mathematical issues involved, see [36].
Footnote 6: An example of a change with no interaction would be if Wigner's friend just tells Wigner the measurement result - if Wigner trusts his friend's experimental skills, then he might choose to assign the particle the state \(\left|\uparrow\right\rangle\) without performing the measurement himself
Footnote 7: though this is also possible, and if the system is in an eigenstate then this must be physical (Type 1)
### Stable and Relative Facts
Once we have the picture of multiple observers with their own descriptions (sets of facts) of a system, we can ask whether any elements of these descriptions are shared. Rovelli and DiBiago give an analysis of this in [12], distinguishing between stable facts and relative facts. Given a set \(\mathcal{O}\) of observing systems, and an observed system \(\mathcal{S}\), stable facts are those which are assigned by all the systems in \(\mathcal{O}\) to \(\mathcal{S}\), whereas relative facts are only assigned to \(\mathcal{S}\) by some, or even just one, of the systems in \(\mathcal{O}\).
It follows that stable facts obey the standard conditional probability law
\[P(b)=\sum_{i}P(b|a_{i})P(a_{i}) \tag{2}\]
This is in fact taken by DiBiago and Rovelli to define stable facts. They go on to define relative facts:
Relative facts are defined to happen whenever one physical system interacts
with another system
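To make the role of equation (2) concrete, the following toy NumPy example (my own construction, not taken from [12]) contrasts the classical total-probability reasoning that stable facts obey with the coherent sum of amplitudes, which can violate it.

```python
# Toy example: equation (2) versus coherent interference of amplitudes.
import numpy as np

W_a  = np.array([1, 1]) / np.sqrt(2)    # amplitudes W(a_i) for two alternatives
W_ba = np.array([1, -1]) / np.sqrt(2)   # amplitudes W(b, a_i) for a final fact b

# stable-fact (classical) reasoning: P(b) = sum_i P(b|a_i) P(a_i)
P_stable = np.sum(np.abs(W_ba)**2 * np.abs(W_a)**2)

# coherent reasoning: amplitudes are summed before squaring
P_coherent = np.abs(np.sum(W_ba * W_a))**2

print(P_stable, P_coherent)   # 0.5 0.0
```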
Every observing system has relative facts corresponding to its own interaction events, and some relative facts are shared between systems, and so become stable facts. RQM appeals to decoherence to explain why the world we see (on a classical scale) seems to be made up of stable facts which are observer-independent. Rovelli [10] also points out that most other interpretations of Quantum Mechanics implicitly only deal with facts which are stable, because they do not use the RQM principle that facts are relative to particular observers. Later in the paper I will explore methods for determining which facts are stable for a particular set of observers. For now, though, I want to discuss possible ontologies underlying the existence of relative facts in the RQM description.
### Three approaches to the Ontology
The central claim of RQM is that quantum states are relative to observers. The interpretation of RQM therefore involves asking why this is the case8. Without intending to be exhaustive, I think there are three main approaches to the ontological underpinnings of the theory. I think discussing these will clarify the RQM interpretation. I shall call them the Haecceitist, Subjective, and Constructive approaches.
I favour the third, Constructive approach, but it is vital to realise (as I shall explain) that RQM is primarily about the description of systems and is not directly committed to any underlying ontology. Nevertheless we can try to explain why RQM is a good way to describe systems, and I think that the Constructive model is a natural way of doing this which illuminates other interpretational issues (like nonlocality or the analogy between RQM and special relativity) which I shall address in later sections.
I should stress I am not claiming that any of these views are held by Rovelli or any other proponents of RQM. As stated above, the default position (which I agree with) is that [10], 'Quantum Mechanics is a theory about events', and the physical content of the theory consists of measurement interactions and the facts that they reveal. I am asking what further ontological conclusions might be drawn from the fact that this gives a good description of reality. More details about Rovelli's view on the underlying ontology can be found in Section V of [9]
The first way the theory could be taken is to say that each system simultaneously possesses multiple sets of properties relative to different observers. This would make RQM into a quantum version of Leibniz's Monadic theory, where every system contains either potential or actual properties relative to every other system. I do not find this particularly convincing, and I only mention it because it is easy to talk about RQM as if this is true. For example the following passage from [9]:
Instead of seeing the physical world as a collection of objects with different properties, quantum theory invites us to see the physical world as a net of relations. Objects are its nodes.
The radical consequence is that to attribute properties to something when it does not interact is superfluous and may be misleading. It is talking about something that has no meaning: for there are no properties outside of interactions
The emphases are in the original. The passage states that there are no properties outside of interaction, but can be read as appealing to a propertyless 'something' which is there prior to any interactions. The image which comes to my mind is of a world of bare haecceities which obtain properties when they interact. I do not think that Rovelli holds this view, and it is very hard to talk about a relational ontology in normal language (I am certain I will have made similar misleading expressions in this paper), but I quote this to show how the haecceitist misconception could arise when reading about RQM.
Why don't I find this haecceitism convincing? First of all, there is an epistemic issue. If all properties are relative to observers, which observers measure the property of having-multiple-simultaneous-properties? Secondly, there is the fact that this seems to divide the universe into a finite number of systems each with its own haecceity, and each possessing properties relative to some or all of these fixed systems. However, a key claim of RQM is that we can draw the boundary between systems anywhere we want- so the division into systems is not a fixed feature of the universe. Therefore to maintain the multiple-properties version of RQM we need reality to consist of a superposition of different possible divisions into systems, each system in each division possessing actual or potential properties relative to all other systems in that division (and perhaps even all future divisions). At this point, I think we have ended up with a version of a pure Everettian Many-Worlds theory, which I would not commit myself to for other reasons.
The second possibility is that the properties assigned to the observed system are subjective, and a system has different properties relative to different observers due to the different observers assigning different properties for subjective reasons (for example, they may only have experimental knowledge of certain properties, or may have different degrees of confidence in the reliability of their equipment). This, however, is precisely the QBism interpretation of Quantum Mechanics. This is based on a subjectivist interpretation of probability theory, and assumes that the probabilities ascribed to a state are subjective and depend on an individual's own beliefs about the probability, rather than being, for example, linked to the objective frequency of the measurement outcomes. The similarities and differences between RQM and QBism are expertly discussed in [39], and the difference between them can be summed up by the fact that for RQM, an observed system being in a definite eigenstate (probability 1) relative to an observer is a statement about the observed system, that it has a definite physical property, whereas in QBism, it is a statement about the beliefs of the observer, that they will certainly measure the system to be in that eigenstate. Following Pienaar [39] this has the important consequence that whereas in RQM, an observing system does not have to be a conscious being, in QBism by definition an observer
must be conscious (i.e. able to have beliefs about probabilities).
Nevertheless in practice, in QM we must make certain subjective assumptions, which I have referred to above as type 3 facts. This could be the assumption that gravity can be neglected when describing the system, or the assumption that the system can be given a polarisation using a certain piece of apparatus. The distinction is that in RQM these assumptions are presumed to represent definite physical properties, whereas in QBism they represent a tool for obtaining results an individual deems to be rational, without claiming any specific capacity for representing external reality. That being said, as Pienaar points out, a tool can tell you a lot about the objects it allows you to work with, and so neither RQM nor QBism is completely subjective or completely objective. However, RQM still claims that quantum states represent elements of physical reality, even if the ascription of states depends upon the knowledge of a particular observer, whereas in QBism, the quantum state is a description of the observer's beliefs about a system, and may or may not directly correspond to elements of physical reality.
### Measurement as Construction
If we do not want to take either of these two interpretational routes, then what remains? Without claiming that this exhausts the options, I will now present what I take to be the most plausible underlying ontology for RQM. Suppose we measure the temperature of a pan of water. The thermometer gets a reading by taking heat from the pan, and therefore the temperature measured by the thermometer is not the exact temperature of the pan before the measurement, but will be slightly less.
For usual applications, this does not matter- the difference in heat will be negligible, probably far less than the precision of the thermometer. But now suppose the thermometer is large relative to the size of the pan. Then the thermometer will draw so much heat out of the pan in the measurement process that the reading on the thermometer is not an accurate measurement of what the temperature was before the measurement took place.
Now, we can try and fix this by using a smaller thermometer, but Quantum Theory seems to indicate that there is a smallest length scale to everything (and a smallest energy, temperature, etc). Therefore there will come a point where we cannot decrease the size of the measuring instrument any further relative to what we are measuring. This means that we have no way of measuring the properties of the system before the act of measurement. Instead we can only measure what the system has become after we have interacted with it.9 There are therefore two sorts of measurement. The first sort does not change the thing being measured10, whereas the second sort changes the system so that it cannot be assumed that what is measured is the same as the value before the measurement. The first sort of measurement is 'Classical', the second is 'Quantum'. The essence of Quantum Theory is then that it deals with situations where the measurement interactions required to gain knowledge fundamentally change the system which is being known.
Footnote 9: This is an example of Contextuality- in particular the condition that a measurement should give the value of that property before the measurement is called 'Faithful Measurement (FM)' in discussions of the Kochen-Specker and Bell Theorems [7]. The above argument implies that Quantum Theories violate Noncontextuality by not satisfying the FM criterion, and that this is the resolution of the dilemma posed by Bell's Theorem. I shall discuss this in more detail in a later section
Footnote 10: At least not significantly, where significance is defined on a case by case basis
Why does this give RQM? Because according to this analysis, Quantum measurements give the properties of an object as they are changed through a particular interaction/measurement. This justifies Rovelli's dictum that all measurements are relative to a particular observer- if
each measurement gives the value of a variable after an interaction, then each measurement changes the system. So the values of the variables in the measurements are relative to the system measuring, and to the particular act of measuring. In fact, more than this, we could posit that the properties of a system are relative to a particular history of measurement- we shall explore this in the next section.
The final thing to say is that though this ontological account explains why RQM describes the quantum world, it should not be taken as a direct account of what is going on when we assign a quantum state in RQM. In RQM the wavefunction is the information a system A has about a system B, and so includes both background information (type 3) and potential measurement outcomes (type 2) as well as the results of physical measurements (type 1). It is also important to note, though, that unlike QBism, the states in the wavefunction always have a definite physical interpretation, whether actual or potential. This ontology also explains why Decoherence results in Stable Facts. On a large scale, there are a very large number of interactions taking place, causing different changes to all the parts of a large object. Most of these will cancel each other out, and so what emerges is a kind of average arising from the constructive interference amongst the interactions. I use the word emerges here in a strong sense- in this analysis Decoherence is a form of Emergence. I think this would be a good avenue to pursue in future research.
Usually, measurements which radically change the system are known as, 'destructive measurements'. I have chosen to refer to them as, 'constructive' since, 'destructive measurement' implies that the pre-existing property is the subject of the measurement, despite the fact it is unknowable- what is actually measured is the property as it comes into equilibrium with the measuring device, hence a, 'constructed' property.
I also want to note that this idea that measurement is creative and changes the things measured goes right back to Heisenberg [26]. In fact, the best statement in the literature of these ideas that I have found is from the introduction to Dirac's Principles of Quantum Mechanics [38]:
At this stage it becomes important to remember that science is concerned only with observable things, and that we can observe an object only by letting it interact with some outside influence. An act of observation is thus necessarily accompanied by some disturbance in the object observed. We may define an object to be big when the disturbance accompanying our observation of it may be neglected, and small when the disturbance cannot be neglected...
We have to assume that there is a limit to the fineness of our powers of observation and the smallness of the accompanying disturbance- a limit which is inherent in the nature of things and can never be surpassed by improved technique or increased skill on the part of the observer. If the object under observation is such that the unavoidable limiting disturbance is negligible, then the object is big in the classical sense and we may apply classical mechanics to it. If, on the other hand, the limiting disturbance is not negligible, then the object is small in the absolute sense, and we require a new theory for dealing with it...
If a system is small we cannot observe it without producing a serious disturbance and hence we cannot expect to find any causal connection between the results of the observations....the equations which will be set up to describe an undisturbed system will be differential equations expressing a causal connection between conditions at one time and conditions at a later time... they will be connected only indirectly with the results of observations. There is an unavoidable indeterminacy in the calculation of observational results, the theory enabling us to calculate in general only the probability of our obtaining a particular result when we make an observation.
Another example is given by Feynman's discussion of the double slit experiment [14]. Feynman points out that we can only detect which slit the particle goes through by closing one or the other of them (as the detector blocks the particle). When we only partially close the slit (and only sometimes detect the particle), the interference pattern re-emerges in inverse proportion to how closed the slit is, and hence how often we block the particle.
### Contextuality
With any interpretation of Quantum Mechanics, it is worth considering how it satisfies Bell's Theorem. This account of Quantum Theory, which does not contain multiple universes, physical wavefunction collapse, and other things of this sort, is unusually realist, and there should therefore be some suspicion that it is a hidden variables theory in disguise.
The reason it is not is that hidden variables theories state that every quantum property revealed by a measurement exists before the measurement. It is consistent11 with the approach in this paper to say that each Quantum system has definite properties before the measurement- but since measurement changes the system, we have no way of knowing if the properties we measure are the ones the system had before the measurement. This is very different from a hidden variables theory where the measurement always reveals pre-existing properties.
The denial that measurements necessarily reveal pre-existing properties is an example of contextuality. The Stanford Encyclopedia of Philosophy [7] defines noncontextuality as
Footnote 11: but not necessary, though I think it is probably true
If a quantum mechanical system possesses a property (value of an observable) then it does so independently of any measurement context (i.e. independently of how that value is eventually measured)
Conversely, contextuality occurs when the properties a quantum system possesses are dependent on the context of the measurement. RQM states that 'facts', including measurement probabilities, are relative to potential observers. This is inherently contextual, since the 'facts' pertaining to a system depend upon which observer is measuring.
This means that the assumption of the Bell paradoxes that this interpretation does not satisfy is the property usually denoted as something like 'there is a classical observer-independent reality'. There is no such reality because if the act of observation changes reality then by definition reality cannot be observer independent (another way to put this would be to say that the observer is part of reality whether they like it or not).
It is important to recall that in RQM the world is still inherently probabilistic. Rovelli suggests both that quantum interactions are stochastic, and that some of the indeterminacy comes from the fact that observers can never fully describe their own role in the measurement (since in RQM we cannot assign a quantum state to ourselves), and therefore there must be some element of the causal process left out of the description [10].
This seems to fit with this constructive measurement approach to RQM. Usually, when we observe some change in a system, we can explain it either in terms of smaller parts of the system, or in terms of the measuring device. But in the limit where the observed system has no smaller parts then any change cannot be explained in terms of those smaller parts, and so must either be stochastic or dependent entirely upon the measuring system.
But now imagine that the measuring system is also at the quantum level, and so has no smaller parts. Then there can be no causal explanation in terms of parts. We might suggest that the environment is the cause, but we can imagine that the systems are sufficiently isolated, so that only the two partless systems are involved. This implies that at least some interactions must be purely stochastic, with no deterministic causal explanation12. However, given that most interactions involve at least one system with parts, we can assume that Rovelli's second suggestion, that the environment and measuring system contribute in some nonlocal way, will also apply in most physical cases. In this context it is interesting to recall Bohm's Quantum potential theory, considered [4] as a nonlocal effective field theory, though I don't see any obvious path to interpret Bohmian mechanics as describing the influence of the environment on the system.
Footnote 12: A discussion of the distinction in quantum theory between intrinsically quantum probabilities and probabilities based on epistemic ignorance is given in [43]
A very unrealistic example might clear up what I mean. Suppose we have an observer which is able to observe without disturbing anything, perhaps some species of djinn or fairy. This observer would see reality as a graph-like network, where each node was an interaction. Every node would have definite properties at every time, but the graph would shift over time as different interactions occurred. However, though every property would have a definite value, the new values gained in the interactions would be stochastic, and not determinable even in theory. Also, whilst this fairy would have total knowledge of the rest of the universe from its perspective at each time, this would not be a total description of the universe, because the fairy could never describe itself.
Having discussed the ontology and interpretation of RQM, I now want to look in more detail at the question of stable and relative facts. To do this, I will need to discuss the consistent histories formalism of quantum mechanics, which will provide the necessary mathematical framework.
## 3 RQM and Consistent Histories
The Consistent Histories (CH) formulation of Quantum Mechanics is based on the work of Griffiths [19], Omnes [37], and Gell-Mann and Hartle [17]. Each of these figures has contributed to the mathematics of the interpretation, however they have differing perspectives on the physical conclusions to be drawn- for a history and discussion, see [41]. The description of the CH interpretation given here mainly follows Griffiths, being a summary of [23], supplemented by [21]- though I should clarify that the physical meaning I am attaching to the histories is very different from the standard one, as I shall discuss below.
In this section, closely following [23], I will approach the CH formalism as defining a non-standard logic on quantum sample spaces, which Griffiths terms Quantum Reasoning. This is a consequence of the non-commutativity of quantum operators. Next, I will discuss how this is extended to histories, or chains of operators at different times.
After this overview, I will suggest how this quantum logic can be used to give a method for determining whether facts shared by different observers are stable or not (in the RQM sense). I then discuss how the use of CH methods fits with the ontology developed in the first part of this paper. Finally, in a concluding section, I make clear how the use of Quantum Reasoning I am suggesting for RQM differs from its original intended use within the Consistent Histories formalism.
### The Consistent Histories Formulation
There are two major features of the Consistent Histories approach, following Griffiths. The first is that Quantum Mechanics is fundamentally stochastic. The second is that the noncommutative nature of the operators in Quantum Theory necessitates a special kind of logical framework known as 'quantum reasoning' [20][22]. This is implemented by treating the Hilbert Space as a quantum analog of a classical Phase Space, as I shall now outline.
In Classical Mechanics, the possible values of our variables can be labelled as points in a phase space. Suppose we are considering a set of values corresponding to a region \(\gamma\) of the phase space. We can define indicator functions which pick out that set of points by
\[P(x)=\begin{cases}1&x\in\gamma\\ 0&x\in\gamma^{c}\end{cases} \tag{3}\]
By this construction the phase space splits neatly into \(\gamma\) and \(\gamma^{c}\), where \(\gamma^{c}\) is the complement of \(\gamma\).
In the Quantum case, our quantum phase space is a Hilbert Space \(\mathcal{H}\). Properties now correspond to subspaces of \(\mathcal{H}\), which are picked out by projection operators \(\mathcal{P}\).
\[\mathcal{P}_{\Gamma}\ket{\phi}=\begin{cases}1&\ket{\phi}\in\Gamma\\ 0&\ket{\phi}\notin\Gamma\end{cases} \tag{4}\]
We define the projector \(\neg\mathcal{P}\) to be \(\mathbb{1}-\mathcal{P}\). Now we come to the key difference between the classical and quantum cases. In the classical case, every point lies either in \(\Gamma\) or \(\Gamma^{c}\). In the quantum case this no longer applies. Suppose \(\Gamma\) is the subspace of \(\mathcal{H}=\mathrm{Span}\big{(}\ket{\phi_{1}},\;\ket{\phi_{2}}\big{)}\) spanned by \(\ket{\phi_{1}}\). Then the state vector \(\ket{\phi_{1}}+\ket{\phi_{2}}\) is neither in \(\Gamma\) nor in \(\Gamma^{c}\). How can we interpret such states?
Suppose we have two subspaces, \(P\) and \(Q\). In classical terms, \(P\wedge Q\) is an intersection of \(P\) and \(Q\), but this is ambiguous in Quantum theory. This is because the projectors corresponding to \(P\) and \(Q\) might not commute, and so it is ambiguous whether we should choose \(\mathcal{PQ}\) or \(\mathcal{QP}\) as the appropriate projector for \(\mathcal{P}\wedge\mathcal{Q}\).
Consistent Histories attempts to solve this by defining the projector \(\mathcal{PQ}\) iff \(\mathcal{P}\) and \(\mathcal{Q}\) commute. Otherwise it says that \(P\wedge Q\) is meaningless13. This gives a logic which no longer follows the principle of the excluded middle: it allows a proposition to be one of three things, either true, false, or undefined. That said, as we shall see in the next section, we can only draw conclusions when the sample space contains no undefined propositions. Once we have the Quantum phase space, along with the consistent histories logic, we can use it to calculate probabilities.
Footnote 13: Another strategy was adopted by Von Neumann in his 1930s development of, βQuantum Logicβ [3]
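The failure of \(P\wedge Q\) to be well defined is easy to exhibit numerically. The following sketch (my own illustration) takes the 'spin up along \(z\)' and 'spin up along \(x\)' projectors of a qubit:

```python
import numpy as np

up = np.array([1., 0.])
plus_x = np.array([1., 1.]) / np.sqrt(2)
P = np.outer(up, up)             # projector onto "spin up along z"
Q = np.outer(plus_x, plus_x)     # projector onto "spin up along x"

print(np.allclose(P @ Q, Q @ P))              # False: P and Q do not commute
print(np.allclose(P @ Q @ P @ Q, P @ Q))      # False: PQ is not even a projector

# A commuting pair: the same properties, but on two different qubits.
P1 = np.kron(P, np.eye(2))
Q2 = np.kron(np.eye(2), Q)
print(np.allclose(P1 @ Q2, Q2 @ P1))          # True: here "P and Q" is well defined
```

In the first case neither \(\mathcal{PQ}\) nor \(\mathcal{QP}\) is a projector, so there is no subspace, and hence no property, corresponding to the conjunction; in the second case the conjunction is represented by the ordinary product of the commuting projectors.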
### Probabilities and Quantum Reasoning
We can think of a probability theory as giving a triple \(\big{(}\mathcal{S},\mathcal{E},\mathcal{M}\big{)}\) where
* \(\mathcal{S}\) is a sample space- the underlying objects or situations we are working with
* \(\mathcal{E}\) is an event algebra- the set of actual events involving elements of the sample space whose probability we want to know. The algebra structure is given by unions and intersections.
* \(\mathcal{M}\) is a probability measure
For classical physics, the sample space is given by regions of the phase space, and the event algebra is given by the corresponding indicator functions. In the quantum case, each division of the Hilbert Space into subspaces gives a different sample space, and the event algebra is given by the projectors onto those subspaces. As these examples show, there is a one-to-one relation in these cases between the event algebra and sample spaces, and so we shall refer to them interchangeably. In both the quantum and classical cases, the probabilities are usually not given intrinsically and are assigned via theoretical or empirical considerations.
The main difference between the quantum and classical cases, according to the consistent histories interpretation, is that in the classical case, the event algebra is well defined for any set of subspaces, so in practice we can compare events in different probability frameworks, or change the probability framework that we are using, without any problems.
In Quantum Mechanics, on the other hand14, the event algebra is only well defined when the projectors commute (otherwise we define expressions like \(P\wedge Q\) to have no meaning, as explained above). Therefore we cannot straightforwardly add new events into the event algebra, or compare probabilities between different frameworks. We must check to make sure that the events and frameworks which we are comparing give a well defined event algebra.
We say that two sample spaces (projective decompositions) of a Quantum operator are compatible if all the projectors of one sample space commute with the projectors in the other. If \(\{{\cal P}_{i}\}\) and \(\{{\cal Q}_{i}\}\) are the event algebras, then we require
Footnote 14: at least according to consistent histories
\[{\cal P}_{j}{\cal Q}_{k}={\cal Q}_{k}{\cal P}_{j}\ \ \forall j,k \tag{5}\]
If the frameworks are compatible, then we can combine them15. Otherwise, if the frameworks are incompatible, then we adopt the Single Framework Rule, which is the central feature of the Consistent Histories interpretation.
This states that we cannot directly compare probabilities from incompatible frameworks- we must only use probabilities from a single compatible framework16. Finally, we can define measurement operators in the usual way as combinations of projectors. These operators then form the event algebra. For some worked examples of compatible and incompatible frameworks, see section 3.4 of [23].
Footnote 15: Sometimes this involves a process called refinement, which I am not discussing in this brief introduction- see [21] or [23]
Footnote 16: Though this framework may be made up of several compatible frameworks joined together- see below
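Condition (5) and the Single Framework Rule can be stated as a small piece of code. The following sketch (mine, not from [23]) checks whether two sample spaces on the same Hilbert space may be combined:

```python
import numpy as np

def compatible(frame_a, frame_b):
    """Eq. (5): every projector of one sample space must commute with every
    projector of the other. Both lists are assumed to act on the same space."""
    return all(np.allclose(P @ Q, Q @ P) for P in frame_a for Q in frame_b)

up, down = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)

z_frame = [np.outer(up, up), np.outer(down, down)]        # sigma_z decomposition
x_frame = [np.outer(plus, plus), np.outer(minus, minus)]  # sigma_x decomposition

print(compatible(z_frame, z_frame))   # True:  the frameworks may be combined
print(compatible(z_frame, x_frame))   # False: the Single Framework Rule applies
```

When the second test fails, any probability statement must be made within either the \(z\) framework or the \(x\) framework, never by mixing the two.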
### Histories
So why is the interpretation called 'Consistent Histories'? That comes about because we want to compare measurements at different times, and to discuss sequences of measurements. We first define a time-graded Hilbert space for successive times \(t_{0},t_{1}...,t_{n}\) as
\[{\cal H}={\cal H}_{0}\odot{\cal H}_{1}\odot...\odot{\cal H}_{n} \tag{6}\]
where the \(\odot\) represents a tensor product- the different symbol being used as a reminder that each Hilbert space and its corresponding projectors are associated with a different time. We then define a Quantum history as a tensor product of projectors for each fixed-time Hilbert Space:
\[Y^{a}=P_{0}\odot P_{1}\odot...\odot P_{n} \tag{7}\]
These \(Y^{a}\) form a sample space so long as \(\sum P_{i}=\mathbb{1}_{i}\) for each \(\mathcal{H}_{i}\). In this case, \(\sum Y^{a}=\mathbb{1}\), where \(\mathbb{1}\) is identity on \(\mathcal{H}\)- the tensor product of the identities \(\mathbb{1}_{i}\) on the fixed time Hilbert Spaces. This defines our Event Algebra and Sample Space17. The remaining task is to assign probabilities to the histories \(Y^{a}\). To do this, we construct Chain kets
Footnote 17: remember these are dual since defining a sample space of subspaces automatically picks out an event algebra based on their projectors, and vice versa
\[|Y^{a}\rangle=P_{n}T(t_{n},t_{n-1})P_{n-1}T(t_{n-1},t_{n-2})...T(t_{2},t_{1})P_{1}T(t_{1},t_{0})P_{0} \tag{8}\]
Note that this is an operator on a single Hilbert space at time \(t_{n}\). The operators \(T(t_{i},t_{i-1})\) are unitary operators which describe time evolutions. What they are can vary according to the system described. Usually, they will be given by the Schroedinger equation. If we do not wish to take into account time evolution of a state then we can just set \(T(t_{i},t_{i-1})\) to be the identity operator. We then assign each history the probability
\[Pr(Y^{a})=\langle Y^{a}|Y^{a}\rangle \tag{9}\]
provided that all the histories projectors satisfy the consistency condition
\[\langle Y^{a}|Y^{a^{\prime}}\rangle=0\text{ for }a\neq a^{\prime} \tag{10}\]
We call such a set \(\{Y^{a}\}\) a consistent family of histories. It defines a compatible framework at time \(t_{n}\). We can think of \(|Y^{a}\rangle\) as a series of rotations/dilations from the unitary operators \(T(t_{i},t_{i-1})\) and projections from the \(P_{i}\), which together make up a single operator which turns the initial subspace corresponding to \(P_{0}\) into a final state corresponding to the projector \(Y^{a}\).
Note that states with probability zero now include those where the dynamics are impossible, i.e. where the combinations of rotations, dilations and projections lead to the empty set.
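To make Eqs. (8)-(10) concrete, the following sketch (my own toy example) builds two-step histories for a single qubit with trivial dynamics, starting from the pure state \(|\psi_{0}\rangle=|+_{x}\rangle\); for a pure initial state the chain ket can be implemented simply as the vector \(P_{2}TP_{1}T|\psi_{0}\rangle\):

```python
import numpy as np
from itertools import product

up, down = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)
proj = lambda v: np.outer(v, v)

T = np.eye(2)        # trivial dynamics between time steps
psi0 = plus          # initial state |psi_0> = |+x>

def chain_ket(projectors):
    """Eq. (8) applied to a pure initial state: P_n T ... P_1 T |psi_0>."""
    v = psi0
    for P in projectors:
        v = P @ (T @ v)
    return v

def consistent(family):
    """Eq. (10): chain kets of distinct histories must be mutually orthogonal."""
    kets = [chain_ket(h) for h in family]
    return all(abs(np.vdot(kets[i], kets[j])) < 1e-12
               for i in range(len(kets)) for j in range(len(kets)) if i != j)

z_frame = [proj(up), proj(down)]
x_frame = [proj(plus), proj(minus)]

family_zz = list(product(z_frame, z_frame))   # sigma_z at t_1 and at t_2
family_zx = list(product(z_frame, x_frame))   # sigma_z at t_1, sigma_x at t_2

print(consistent(family_zz))   # True:  Eq. (9) assigns well defined probabilities
print(consistent(family_zx))   # False: no probabilities can be assigned
print([round(float(np.vdot(k, k)), 3) for k in map(chain_ket, family_zz)])
# [0.5, 0.0, 0.0, 0.5]
```

The second family fails the consistency condition because the chain kets of the histories ending in the same \(\sigma_{x}\) outcome overlap; this is the interference familiar from the double slit, reappearing as an inconsistency of the event algebra.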
### Stable facts and compatible frameworks
Now that we have given the mathematical details of the CH formalism, we can apply it to RQM. This will make it clearer what RQM means by 'facts' and in particular it can be used to give a more rigorous definition of 'stable facts'.
I suggest that a history is taken to be the set of facts held by an observer about the system it describes, at different time steps. This helps us in two ways.
First, if we understand CH as a kind of logical apparatus for working with contextual propositions, and determining their probabilities, then we can define, 'facts' as elements of the event algebra. Type 1 facts refer to actual interactions, Type 2 facts refer to potential interactions, and type 3 facts refer to assumptions made on the basis of prior knowledge- but all these are equally acceptable as logical statements, whose mutual probabilities we can determine using Quantum Theory. This demonstrates Rovelli's principle that [10]
[Quantum] theory determine(s) the probability... for a fact (or collection of facts) \(b\) to occur given that a fact (or collection of facts) \(a\) has occurred.
Second, Consistent Histories gives a precise way of talking about stable and relative facts between systems. Given two consistent families, CH gives a criterion for determining when the two families can be combined into a single framework, and an algorithm for how to do so. If we associate the two families with the facts determined by each system, then we can say that the facts are stable relative to both systems if the families associated with those facts are compatible.
Stable facts are those which come from a consistent family of histories. Within a given family of histories, the event algebra automatically follows the rules of classical probability. So Rovelli and Di Biagio's [12] condition for a stable fact, namely that
\[P(b)=\sum_{i}P(b|a_{i})P(a_{i}) \tag{11}\]
holds, is always satisfied. This gives a well defined notion of which facts are stable and which are not.
So what is this condition? Given two consistent families of histories, \(\{K_{i}\}\) and \(\{Y_{j}\}\), we can combine them into a single family of histories iff the following two conditions are satisfied:
1. The operators for each family at each time step \(t_{i}\) must commute
2. The family of histories \(\{K_{i}Y_{j}\}\), given by taking the tensor product of \(\{K_{i}\}\) and \(\{Y_{j}\}\) at each time step, must be a consistent family. Essentially these families represent \(\{K_{i}\wedge Y_{j}\}\), which are well-defined due to condition 1
This makes physical sense in the RQM context- condition 1 means that the measurements made by each subsystem at each time do not interfere with one another, whereas condition 2 guarantees that the pairs of histories considered together define a consistent space of probabilities. I will now give some examples to illustrate this. Note that in order to check compatibility, we must consider the whole sample space, and must therefore include all the histories which could have been measured given the experimental setup, not just the one result which was actually obtained.
First, suppose that we have two observing systems \({\cal O}_{1}\) and \({\cal O}_{2}\) measuring the same system \({\cal S}\). They each attribute the same initial state, \(|\phi_{0}\rangle\), to \({\cal S}\) at time \(t_{0}\) and they both measure the spin of \({\cal S}\) in the \(x\) direction at time \(t_{1}\). At time \(t_{2}\), \({\cal O}_{1}\) measures the momentum eigenstate in the \(x\) direction of \({\cal S}\), and \({\cal O}_{2}\) measures the momentum eigenstate in the \(y\) direction. Assume all time evolution is unitary, following Schroedinger's equation. Then the two families of histories are18
Footnote 18: I am making a simplification here- technically we should add in histories with operators of the form \(\mathbbm{1}-\sum o_{i}\) at each time step (where \(o_{i}\) are the projection operators at each time step) to ensure that the sum of the projectors at each time step gives the identity
\[K_{i} =|\phi_{0}\rangle\odot\sigma_{x,k}\odot p_{x,l}\] \[Y_{j} =|\phi_{0}\rangle\odot\sigma_{x,m}\odot p_{y,n} \tag{12}\]
Each history represents one possible pair of measurement outcomes at \(t_{1}\) and \(t_{2}\). Here \(\sigma_{x,k}\) is eigenstate \(k\) of the operator \(\sigma_{x}\) and so on. As mentioned above, I am summing over all the possible measurement outcomes, even though only one of these outcomes will be measured for both \({\cal O}_{1}\) and \({\cal O}_{2}\), because it is necessary to check that the sample space of all possibilities is consistent.
The operators at each time value commute, and we can easily show that the pairs of operators form a consistent family due to the orthogonality of the operators in each family at \(t_{2}\)19. Therefore we can combine these operators into a single framework. This means that the facts relative to \({\cal O}_{1}\) and \({\cal O}_{2}\) are stable at each timestep, so they can agree on which properties \({\cal S}\) has at each \(t_{i}\).
What would be an incompatible pair of families? Suppose that \({\cal O}_{1}\) still measured the spin in the \(x\) direction at time \(t_{1}\), but \({\cal O}_{2}\) now measures the spin in the \(y\) direction at \(t_{1}\). Then
the two families of histories
\[K_{i} =|\phi_{0}\rangle\odot\sigma_{x,k}\odot p_{x,l}\] \[Y_{j} =|\phi_{0}\rangle\odot\sigma_{y,m}\odot p_{y,n} \tag{13}\]
are incompatible because the projectors \(\sigma_{x}\) and \(\sigma_{y}\) at time \(t_{1}\) do not commute. Therefore \({\cal O}_{1}\) and \({\cal O}_{2}\) have only relative facts, and not stable ones.
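Condition 1 can be checked directly for these two examples. The sketch below (restricted to the spin measurements at \(t_{1}\), since that is where the two cases differ; the momentum measurements at \(t_{2}\) act on a different degree of freedom and are ignored here) confirms that the \(\sigma_{x}\) projectors commute with themselves but not with the \(\sigma_{y}\) projectors:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def eigenprojectors(op):
    # Spectral decomposition of a Hermitian operator into rank-one projectors.
    vals, vecs = np.linalg.eigh(op)
    return [np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(len(vals))]

Px = eigenprojectors(sx)   # O_1's t_1 projectors in both Eq. (12) and Eq. (13)
Py = eigenprojectors(sy)   # O_2's t_1 projectors in Eq. (13)

commute = lambda A, B: all(np.allclose(P @ Q, Q @ P) for P in A for Q in B)
print(commute(Px, Px))     # True:  Eq. (12) -> the t_1 facts can be stable
print(commute(Px, Py))     # False: Eq. (13) -> only relative facts at t_1
```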
These are elementary examples, and further work is needed to explore the behaviour of stable facts, and potentially to improve this algorithm for determining them. For example, is there a way to have stable facts not only at a single time, but across multiple times? Additionally, if the facts are stable until \(t_{i-1}\), is there a way to 'regain' stability of facts from a later time \(t_{i+1}\) after losing it at \(t_{i}\)? These are topics I hope to explore in future work.
### Consistent Histories and Constructive Measurement
We can try to understand why this might be true using the constructive understanding of Quantum theory above- though I want to emphasise that the analysis of relative facts and stable facts here does not depend on any particular ontology to work.
If we assume that different measurement interactions disturb a system in different ways, then it makes sense that these disturbances will affect some properties and not others. Therefore there will be some measurements (eg \(x\) and \(p_{y}\)) which can be mutually stable, because the act of measuring one does not affect the system in ways which disturb the value of the other properties.
However, other measurements (say, \(x\) and \(p_{x}\)) are such that measuring one does disturb the value of the other. In order to measure the position of a particle, I must contact it, e.g. by hitting it with a photon. This will affect its momentum. If I try and measure its momentum first, then this requires interaction with some medium over a period of time, which will affect the final position relative to where it would have been had I not measured the momentum first. So the position after the momentum measurement cannot be taken as the position at that time had the momentum measurement not taken place, and vice versa. This is the contextuality property in action.
The relation between different measurements and their properties is given by the relation between operators on the Hilbert space 20, and Consistent Histories provides a framework for working out which series of measurements do not disturb the relevant properties (i.e. those which are mutually stable) and which do.
This allows us to view a Quantum Theory as a non-standard logical framework for analysing situations in which the truth of propositions depends upon the ordering of these propositions. In more concrete terms, it is a logical framework for addressing situations where the act of measurement changes the object being measured. We could think of this as a Contextual Logic.
A consequence of this is that rather than speak of, 'Classical' and 'Quantum' systems, we should speak of, 'Classical' and 'Quantum' measurements, or interactions21. Classical measurements are those which do not change the system in a relevant way, and Quantum measurements are those which do change the system in a relevant way. The mathematical framework for a Classical system is a classical phase space; the framework for a Quantum
system is a general Hilbert Space, the key feature of which is the noncommuting operators22. We can see classical theories as special cases of quantum theories in which all the operators commute. This implements the insight of Rovelli that all interactions can be described by Quantum Theory.
The relationality of Quantum Mechanics then follows from this as a consequence. If a system is changed as a result of measurement, then the history of interaction between an observing and observed system determines the results of any future measurements. Therefore the properties of a system are relative to a particular observing system.
Footnote 22: We can think of this Hilbert Space as a quantum phase space, with a noncommuting geometry. See Moyal [34][2]
## 4 Differences between RQM and CH
Griffiths has indicated in private correspondence that he does not consider the use I have made of his theory of Quantum Reasoning to be valid. I therefore want to make clear how the way I am using Quantum Reasoning differs from its intended use within the context of Consistent Histories. I shall first discuss the conceptual difference between the two interpretations, before giving some examples through discussing measurement in both theories.
### Differences in interpretation
The main difference between my approach in this paper, and Griffiths' original approach [21][22], is the physical interpretation attached to the choice of framework. For Griffiths, the choice of framework is determined by the following four principles, quoted verbatim from [23]:
" (R1) **Liberty**. The physicist is free to employ as many frameworks as desired when constructing descriptions of a particular quantum system, provided the principle R3 below is strictly observed.
(R2) **Equality**. No framework is more fundamental than any other; in particular, there is no 'true' framework, no framework that is 'singled out by nature'.
(R3) **Incompatibility**. The single framework rule: incompatible frameworks are never to be combined into a single quantum description. The (probabilistic) reasoning process starting from assumptions (or data) and leading to conclusions must be carried out using a single framework.
(R4) **Utility**. Some frameworks are more useful than others for answering particular questions about a quantum system."
The idea here is that any probability framework is just as good as any other probability framework- we just need to be sure to only use a single framework with a consistent event algebra. In particular, there is no physical meaning attached to any choice of framework- all are equally valid. In [22], he says that different frameworks reveal different aspects of reality, and that the fact that a single quantum reality can be described by different (and possibly incompatible) frameworks is the new nature of things which quantum mechanics reveals to us.
Therefore the move from the classical to the quantum world involves the abandonment of the idea that there is a single correct description of the universe (Griffiths calls this idea the Principle of Unicity) Instead, there are multiple correct (and incompatible) descriptions of the same events. Griffiths uses the classical analogy of looking at different properties of
an object in order to answer different questions about it (e.g. the capacity of a mug and its material) but is clear that this is just an analogy- the different aspects on the quantum level are more fundamental (as can be seen from the fact that the families used to describe them can be incompatible).
This approach is similar to RQM's statement that there is not a unique description of the universe, but that there is a description relative to each observer. In RQM the different descriptions of the universe come about from the different observers and their different interactions with the universe. In Consistent Histories, the different descriptions are given relative to different frameworks, which are different, observer-independent, descriptions of the same system (in principle any observer could use any framework). This is what motivates my approach in this paper to assign different frameworks to different observers and their histories of interactions. The different frameworks in CH become different potential sets of interactions between observer and system in RQM.
### Measurements in CH
A good example of the distinction between these approaches comes from the CH treatment of measurement. As outlined in [24], a key feature of CH is that it denies any special status to measurement as opposed to any other kind of event. RQM also denies that measurements have a special ontological status- every interaction between two systems is a measurement of each by the other- but it does give measurements a special epistemic status relative to an observer, since the measurement updates the observer's description of the measured system.
In CH, we can model a measurement in the following way [24]. Let \(\left|\phi_{0}\right\rangle\) be the initial state of the measured system, and let \(\left|M_{0}\right\rangle\) be the initial state of the detector. Suppose we can decompose \(\left|\phi_{0}\right\rangle=\tfrac{1}{\sqrt{2}}\left|\uparrow\right\rangle+\tfrac{1}{\sqrt{2}}\left|\downarrow\right\rangle\), and let \(\left|M_{u}\right\rangle\) and \(\left|M_{d}\right\rangle\) be the states in which the detector measures \(\left|\uparrow\right\rangle\) and \(\left|\downarrow\right\rangle\) respectively. Finally, let \(\left[Y_{i}\right]\) denote the projector corresponding to the state \(\left|Y_{i}\right\rangle\), and \(\left\{\left[Y_{i}\right]\right\}\) indicate one history for each choice of \(\left[Y_{i}\right]\).
Now, consider the following family of histories, describing a measurement, with appropriate time evolution:
\[\left[\phi_{0}\right]\otimes\left[M_{0}\right]\odot\left[\uparrow\right]\otimes\left[M_{0}\right]\odot\left[\uparrow\right]\otimes\left[M_{u}\right]\] \[\left[\phi_{0}\right]\otimes\left[M_{0}\right]\odot\left[\downarrow\right]\otimes\left[M_{0}\right]\odot\left[\downarrow\right]\otimes\left[M_{d}\right] \tag{14}\]
Letting \(\left|s_{1}\right\rangle=\left|\uparrow\right\rangle\) and \(\left|s_{2}\right\rangle=\left|\downarrow\right\rangle\), this framework allows us to conclude that the conditional probability of the system being in state \(s_{i}\) at time \(t_{1}\), given that we measured \(M_{j}\) at \(t_{2}\), is
\[P\big{(}[s_{i}]_{t_{1}}|M_{j,t_{2}}\big{)}=\delta_{ij} \tag{15}\]
If we measured \(M_{i}\) at \(t_{2}\) we can conclude with probability 1 that the combined state of the system and detector at \(t_{1}\) was \(\left|s_{i}\right\rangle\otimes M_{0}\). Now consider a different family
\[\left[\phi_{0}\right]\otimes\left[M_{0}\right]\odot\left[\phi_{0}\right]\otimes\left[M_{0}\right]\odot\left[\downarrow\right]\otimes\left[M_{d}\right] \tag{16}\]
Since this family does not include the states \(\left|\uparrow\right\rangle\) or \(\left|\downarrow\right\rangle\) at \(t_{1}\), we can say nothing about whether or not the system was in one of these states at that time. Here we come to the difference between the application of Quantum Reasoning to RQM which I have outlined in this paper, and its original use in CH. In CH, we are free to choose any framework to describe a system. So even if we cannot draw any conclusions about whether the system was in one of the states \(\left|\uparrow\right\rangle\) or \(\left|\downarrow\right\rangle\) from the family (16), we can just as well use the family
(14) to draw such a conclusion.
In RQM, on the contrary, I am suggesting that each history has a physical interpretation as referring to the observer's history of interactions (and hence knowledge of the system being described at each time step). Therefore family (14) represents a situation in which the observer knows that the system is definitely in one of the states \(|\!\uparrow\rangle\) or \(|\!\downarrow\rangle\) at \(t_{1}\). In this case, measuring \(|M_{i}\rangle\) at \(t_{2}\) allows us to conclude that the system was in state \(s_{i}\) at \(t_{1}\).
However, family (16) represents a case where the observer does not know that the system is definitely in the state \(|\!\uparrow\rangle\) or \(|\!\downarrow\rangle\), only that it has probability \(1/2\) to be found in either state under an appropriate measurement.
In RQM these are two different situations, and we are not free to choose between them. The first family would represent, for example, a situation where we know our initial state \(|\phi_{0}\rangle\) comes from a machine which will produce either \(|\!\uparrow\rangle\) or \(|\!\downarrow\rangle\), whereas the second family could represent a situation where the particle had an unknown and random polarisation, so that we know that we would measure either \(|\!\uparrow\rangle\) or \(|\!\downarrow\rangle\) upon passing it through a vertically-polarised measuring device, but we do not have any other information about its state at \(t_{1}\). These are physically different situations and we are not at liberty to choose between them.
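The probabilities quoted for family (14) can be reproduced with a short calculation. In the sketch below (my own construction) the detector is modelled as a three-state system \(\{M_{0},M_{u},M_{d}\}\) and the measurement unitary \(U\) is simply assumed to copy the spin into the pointer; at \(t_{2}\) only the detector is projected, which does not change the numbers:

```python
import numpy as np

up, down = np.array([1., 0.]), np.array([0., 1.])
M0, Mu, Md = np.eye(3)                       # three orthogonal pointer states
proj, kron = (lambda v: np.outer(v, v)), np.kron

phi0 = (up + down) / np.sqrt(2)
psi0 = kron(phi0, M0)

# Permutation unitary implementing the measurement: |up,M0> -> |up,Mu>, etc.
basis  = [kron(s, m) for s in (up, down) for m in (M0, Mu, Md)]
images = [kron(up, Mu), kron(up, M0), kron(up, Md),
          kron(down, Md), kron(down, Mu), kron(down, M0)]
U = sum(np.outer(im, b) for b, im in zip(basis, images))

def prob(spin, pointer):
    """Pr of the history [phi0 x M0] . [spin x M0] . [1 x pointer], Eqs. (8)-(9)."""
    v = kron(proj(spin), proj(M0)) @ psi0            # t_1 projectors (T = identity)
    v = kron(np.eye(2), proj(pointer)) @ (U @ v)     # measurement, then t_2 projector
    return float(np.vdot(v, v))

joint = [[prob(s, m) for m in (Mu, Md)] for s in (up, down)]
print(np.round(joint, 3))    # [[0.5, 0.0], [0.0, 0.5]]
```

The joint probabilities vanish off the diagonal, so conditioning on the pointer reading \(M_{j}\) gives \(P([s_{i}]_{t_{1}}|M_{j,t_{2}})=\delta_{ij}\), in agreement with Eq. (15).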
### Other Considerations
There are two final differences I want to mention. First, in CH, we are free to assign a quantum state to ourselves- indeed Gell-Mann and Hartle have used CH to explore Quantum Cosmology by assigning a quantum state to the whole universe. In RQM, we cannot assign a quantum state to ourselves, and therefore there is always implicitly an extra observer, i.e. we ourselves, assigning the quantum histories under consideration.
Secondly, Griffiths does not believe that quantum mechanics is contextual [25]. However, his arguments for this are directed against a particular formulation of contextuality used by Bell. The way I am understanding contextuality in this paper is precisely to say that we can only consider compatible families, and therefore possible measurements are constrained by our choices of other measurements. Griffiths agrees with this [24], but believes it is a separate property which should be called 'Multitextuality'.
## 5 Special Relativity and Quantum Mechanics
I shall conclude the paper by examining how the analysis given here applies to the debates around the different ways in which Special Relativity and RQM are relational. An argument in favour of RQM [42] is the alleged similarity between the relational, observer-dependent description of physical systems in that theory and the frame-dependent description of physical systems in Special Relativity (think, for example, of the way that events which are spacelike separated from two observers can be given different temporal orderings by each observer).
Pienaar objects [28] that SR has a specific covariant structure (given by the underlying Minkowski Geometry; in particular the frame invariant spacetime interval between pairs of events). Relational Quantum Mechanics has no such invariant structure. Rovelli and Di Biagio respond that it is precisely this which shows the true radicality of the RQM position.
The analysis earlier in the paper can shed light upon this. I have argued that in RQM, measurement is a creative act which ontologically changes the thing measured. In SR (as for any Classical theory), this is not the case. Looking at a system from one reference frame rather than another does not alter which events happen- hence the invariance of the underlying Minkowski spacetime, which we are simply approaching from different points
of view.
In QM, on the other hand, two different hypothetical measurements of the same system create ontologically different states, and therefore there cannot be an underlying invariant structure which different measurements simply reveal from different perspectives. If there were, in fact, we would be in some Hidden Variables formulation of the theory.
The equivalent of the invariant spacetime structure in QM is, I think, twofold. First, there is the consistency of measurements- by which I mean that if A measures B, and C measures A measuring B, then A and C will always ascribe the same state to B23. Another way of putting this is that the Hilbert Space structure and the rules for determining consistent and inconsistent histories remain the same for all observers.
The second invariant structure in QM is the underlying laws determining the interactions. Measurements of the same type (e.g. position, momentum, energy etc) create the same types of states (eigenstates of the relevant operators), and all states evolve according to the Schroedinger equation when they are not being disturbed by measurement24.
Another point to note is that there is an analogy with the single framework rule in Consistent Histories when analysing paradoxes in Special Relativity. Take, for example, the twin paradox. In this paradox, there are two twins, one of whom remains on earth whilst the other one heads at some large fraction of the speed of light to the nearest star, and then returns to earth. The paradox comes about due to the fact that, since time is slower in a moving frame, the first twin (who remains on earth) should assume less time has passed for his twin (who is moving very fast relative to earth). However, the twin on the spaceship, who is at rest in his own reference frame, should also assume that less time has passed for the earthbound twin, since the earthbound twin is moving very fast relative to the spaceship, just in the opposite direction. Therefore the spaceship twin should also conclude that less time has passed for the earthbound twin. When they meet again upon the spaceship's return to earth, which of the twins, therefore, will actually be younger?
A careful analysis of this situation is given in [32], where the conclusion is that the paradox comes from the fact that the twin on the spaceship is using two reference frames- the one on the outward journey and the one on the return journey. The issue comes from combining two inconsistent reference frames into a single description. To quote from [32]
Footnote 23: See Rovelli [42] for a proof of this
Footnote 24: We could think of this as a meta-formal cause in the sense that, unlike a formal cause (e.g. the shape of an enzyme which allows it to bind to certain proteins), the laws of physics do not constitute the shape of material things, but they do set the constraints for the possible Forms physical things can have. I developed this idea by analogy with Rahner's notion of a Quasi-Formal cause [40], though I do not think the two notions are identical.
(The spaceship twin) changes his time reckoning in mid course... by changing the rules for dating events on earth, and so naturally gets his calculations awry. The earth-bound twin has an uninterrupted view of what is happening to his travelling brother, and so his view of the matter is undistorted
That is, the twin on earth only uses a single reference frame, and therefore has the correct account of the situation.
We can also consider the well known example of the pole moving at relativistic speeds through a barn shorter than it [45]. Suppose that the pole is 20m long in its rest frame, and the barn is 10m long. Now suppose that the pole is moving sufficiently fast relative to the barn that its length \(l^{\prime}\) in the barn's rest frame is 10m. Then, in the rest frame of the barn, the pole must at some point in time fit entirely in the barn; whereas in the rest frame of the pole this cannot be the case. The paradox comes from the supposition that this is inconsistent with the fact that what is true in one reference frame must be true
in all reference frames.
The resolution of this paradox [45] comes from the fact that, since simultaneity is relative, whilst the front end of the pole exiting the barn and the rear end of the pole entering it happen at the same time in the rest frame of the barn, they do not happen at the same time in the rest frame of the pole. The front end of the pole leaves the barn in this frame before the rear end enters it. Each reference frame gives a consistent account of events- the paradox comes from directly comparing the perspectives of observers in different Lorentz frames.
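For concreteness, the numbers behind this example are fixed by the standard Lorentz formulas (a quick back-of-the-envelope sketch):

```python
import numpy as np

L0, L_barn = 20.0, 10.0                 # pole rest length and barn length, in metres
gamma = L0 / L_barn                     # length contraction: L' = L0 / gamma = 10 m
beta = np.sqrt(1 - 1 / gamma**2)        # v / c
print(gamma, round(float(beta), 3))     # 2.0 0.866

# Relativity of simultaneity: the two events ("front leaves", "rear enters") are
# simultaneous and 10 m apart in the barn frame, so in the pole frame they are
# separated in time by dt' = gamma * beta * dx / c.
c = 299_792_458.0
print(gamma * beta * L_barn / c)        # ~5.8e-8 s: the front leaves before the rear enters
```

So the pole must be moving at about \(0.87c\), and in its own rest frame the front end leaves the barn roughly 58 nanoseconds before the rear end enters it.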
Here, we again see that we have an analog of the single framework rule- we cannot directly compare events between different reference frames. However, in Special Relativity, if something is true in one reference frame, it must be true in all reference frames, whereas in Quantum Mechanics, different frameworks can correspond to different physical situations.
### Classification of Theories
This suggests that there is a difference between Classical Theories, which assume a unique or normative representation, and relational theories, which describe the same reality from many perspectives, without assuming there is some absolute standard of reference25. There is also a distinction between Classical theories, where measurements do not significantly change the system being measured, and Quantum theories, where the interactions comprising the measurement do change the system being measured. All Quantum systems are Relational (If measurement interactions change the system then there can be no absolute state of the system independent of measurements), however there can be relational theories which are not Quantum- the most prominent example of this being Special Relativity. This gives us the classification
Footnote 25: Though a theory may be classical in some respects and relational in othersβ Special Relativity assumes an absolute standard of acceleration, for example
* I. Classical Theories, in which there is both an (implicit) absolute frame of reference, and the properties of observed systems are independent of measurement (and the order of measurements)
* II. Relational Theories, in which there is no absolute frame of reference, but measurement interactions do not change the systems being measured. This means that, whilst different observers will have different descriptions of the same systems, the descriptions relative to one observer can be translated into descriptions relative to a different observer via an appropriate (global) symmetry transformation. As discussed above, Special Relativity is the paradigmatic example of such a theory.
* III. Quantum theories, which are Relational and, in addition, measurement interactions change the systems being measured. Therefore there are no global transformations between the view of a system relative to different observers26. Footnote 26: One of my students suggested the example of being in a bad mood and one of your friends asking you how you were feeling- being asked would make you feel better, and so the act of asking changes the mood and hence changes the answer
## 6 Further Work and Conclusion
In this paper I have tried to clarify some points of the interpretation of Relational Quantum Mechanics. To begin with, I have discussed the underlying ontology of the theory. After discussing several possibilities, I suggested that the best way to understand the relational
nature of the theory was via the fact that quantum measurements change the thing measured. I think that this is the difference between a quantum measurement, and a classical one which does not (significantly) affect the thing being measured. I then looked at relative and stable facts, and discussed how the Consistent Histories formulation of Quantum Mechanics might provide a way to make the distinction between them more mathematically precise. Finally, I looked at the similarities and differences between RQM and Special Relativity, leading to a classification of physical theories into Classical, Relational and Quantum.
This raises the question: where does GR fall in this classification? First of all, it is clearly Relational. Although different coordinate systems are usually viewed as formal mathematical objects, we can also view them as describing the spacetime from the perspective of accelerating reference frames [1]. There is no preferred or absolute coordinate system in GR (the principle of Background Independence), but a diffeomorphism can transform the description of the spacetime in one coordinate system into that in another coordinate system.
Is Gravity Quantum? Does the choice of coordinate system actually change the spacetime? The answer is of course, No. The coordinate system describes the spacetime from the perspective of an accelerating frame, but this is different from the changing coordinate system actually describing how the spacetime physically changes according to different choices of the coordinate frame. In general, you cannot create new physical fields simply by choosing a different reference frame.
There are possible exceptions to this. The first is the phenomenon of Unruh radiation [8], where an accelerating observer will observe a thermal bath of radiation where an inertial observer would perceive absolute vacuum. Second, QFT on a curved background also provides topics for comparison, for example the dynamic Casimir effect and Hawking Radiation [46]. These would be excellent things to explore in future work.
Another possibility would be to look at Gauge Theories. In particular, the idea that different gauges can be used to represent the points of view of different observers is worth investigating; in this regard, I would want to make a comparison with the work of Gomes [18].
A different route to further the work in this paper would be to deepen the analysis of RQM itself. The discussion of relative facts and histories in section **?** gives a method for determining which facts are shared between observers, but given Rovelli's original axioms [42]
1. Any system can be described by a finite amount of information
2. It is always possible to extract more information from any system.
If we view a quantum history as a way of extracting the information from a system, it would be useful to be able to assign units of information to different histories, and to be able to say how incompatible measurements change this. Does a single incompatible measurement reset the amount of information in a history to zero, and if not, is there a way to clarify which bits of information are lost when particular inconsistent measurements are added to a history? A useful way to begin this would be to analyse the various quantum paradoxes using the quantum reasoning method applied to RQM, as outlined in this paper, and comparing the results to the analyses in the standard CH framework; see, for example, the later chapters in [21].
Finally, as I have mentioned at various points in this paper, a final direction I want to explore is the issue of nonlocality in RQM, and especially to use the RQM interpretation
together with the Consistent Histories methods developed in this paper to analyse the EPR paradox [44][33].
|
2304.10260 | Learning Representative Trajectories of Dynamical Systems via
Domain-Adaptive Imitation | Domain-adaptive trajectory imitation is a skill that some predators learn for
survival, by mapping dynamic information from one domain (their speed and
steering direction) to a different domain (current position of the moving
prey). An intelligent agent with this skill could be exploited for a diversity
of tasks, including the recognition of abnormal motion in traffic once it has
learned to imitate representative trajectories. Towards this direction, we
propose DATI, a deep reinforcement learning agent designed for domain-adaptive
trajectory imitation using a cycle-consistent generative adversarial method.
Our experiments on a variety of synthetic families of reference trajectories
show that DATI outperforms baseline methods for imitation learning and optimal
control in this setting, keeping the same per-task hyperparameters. Its
generalization to a real-world scenario is shown through the discovery of
abnormal motion patterns in maritime traffic, opening the door for the use of
deep reinforcement learning methods for spatially-unconstrained trajectory data
mining. | Edgardo Solano-Carrillo, Jannis Stoppe | 2023-04-19T15:53:48Z | http://arxiv.org/abs/2304.10260v1 | # Learning Representative Trajectories of Dynamical Systems via Domain-Adaptive Imitation
###### Abstract
Domain-adaptive trajectory imitation is a skill that some predators learn for survival, by mapping dynamic information from one domain (their speed and steering direction) to a different domain (current position of the moving prey). An intelligent agent with this skill could be exploited for a diversity of tasks, including the recognition of abnormal motion in traffic once it has learned to imitate representative trajectories. Towards this direction, we propose DATI, a deep reinforcement learning agent designed for domain-adaptive trajectory imitation using a cycle-consistent generative adversarial method. Our experiments on a variety of synthetic families of reference trajectories show that DATI outperforms baseline methods for imitation learning and optimal control in this setting, keeping the same per-task hyperparameters. Its generalization to a real-world scenario is shown through the discovery of abnormal motion patterns in maritime traffic, opening the door for the use of deep reinforcement learning methods for spatially-unconstrained trajectory data mining.
Domain-Adaptive Trajectory Imitation
## 1 Introduction
This paper is generally concerned with the problem of learning the distribution of trajectories generated by a dynamical system, when only partial information about its evolution rule is known. Such systems -- evolving as \(s_{t+1}=f(s_{t})\), with \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) being an unknown _stochastic_ function and \(s_{0}\) having a known distribution -- are ubiquitous in science and engineering; a reason why advances in their understanding (which are independent of their state representation) have the potential to impact a number of research fields. Such a global understanding is one of the goals of this work, which we exemplify by developing a model for learning statistics of \(f\) that is benchmarked for a diversity of synthetic systems and then used (with slight modifications) in a real-world scenario with a completely different state representation and geometry.
The starting point for our analysis is recognizing that the state of the system \(s_{t}\in\mathbb{R}^{d}\) and the representation \(\hat{a}_{t}\in\mathbb{R}^{m}\) of the partial knowledge of its evolution rule belong to two different geometric manifolds, \(\mathcal{S}\) and \(\mathcal{A}\), respectively. These are connected by a known _deterministic_ function \(g:\mathcal{S}\times\mathcal{A}\to\mathcal{S}\) defining such a knowledge, in a way that makes \(\hat{s}_{t+1}=g(\hat{s}_{t},\hat{a}_{t})\) approximate the state \(s_{t+1}\) at each time step. In our formulation, the stochasticity of \(f\) is taken on by the random variables \(\hat{a}_{t}\) and, since \(g\) is not necessarily bilinear, a wide range of systems may be considered. By observing an ensemble of sequences \(s_{0:t}\) of states, our interest is then to learn the distribution of the corresponding sequences of _decision_ variables \(\hat{a}_{0:t}\) that generate the evolution of such states.
An example of the meaning of such a formulation is provided by a cheetah learning to chase gazelles. In a given trial, the cheetah has to decide -- from a snapshot of the current position \(s_{t}\) of the gazelle -- the velocity vector \(\hat{a}_{t}\) making its position \(\hat{s}_{t}\) follow \(s_{t}\) as closely as possible. Certainly, after many trials, the cheetah learns how to infer \(\hat{a}_{t}\) from complex environmental cues, after crucially discovering simple kinematics laws encapsulated by \(g\). We refer to this learning process as _domain-adaptive trajectory imitation_, since it involves learning from a distribution of trajectories \(s_{0:t}\) of a dynamical system, by adapting information from one domain \(\mathcal{A}\) to a different one \(\mathcal{S}\), using partial knowledge of the evolution of \(s_{t}\) encoded by \(g\). Since this task engages a decision maker, our approach is suitable for deep reinforcement learning methods (Lazaridis et al., 2020).
Although the ideas developed here may be extended to any dynamical system, we are motivated by a concrete practical application: _the detection of anomalies from real-time tracking data_. In particular, the results in this paper lead to a source of information for anomaly detection in maritime traffic that is an alternative to the one we have already exploited from a computer vision perspective (Solano-Carrillo et al., 2021), and thus of great potential for maritime situational awareness. It may be useful in the detection of motion patterns corresponding to illegal activities, close in spirit to using agent-based simulations to match the empirical spatio-temporal distribution of crime locations from large-scale human activity data (Roses et al., 2020). Our approach is principled: detecting abnormal behavior by first _learning the distribution_ of what is considered normal behavior and then measuring deviations from this at inference time. For this reason, it may be applied to time series featuring different characteristics (e.g., non-stationarity, irregular sampling rate, missing points, etc.) for which a plethora of different methods have been proposed per characteristic (Freeman et al., 2022).
Our key contributions in this paper are therefore:
* We introduce an OpenAI Gym environment (Brockman et al., 2016) supporting arbitrary families of reference trajectories (i.e. solutions to the mechanics of abstract dynamical systems) for the trajectory imitation problem. Four built-in families are provided for benchmarking existing and new learning methods;
* We propose DATI, a deep reinforcement learning agent for domain-adaptive trajectory imitation that uses a cycle-consistent generative adversarial method - inspired by image to image translation (Zhu et al., 2017);
* We explore, for the first time, the application of deep reinforcement learning for spatially-unconstrained trajectory data mining; in particular, anomaly detection from tracking data, using maritime traffic as a testbed.
The presentation of this work is structured in such a way as to highlight how a single model can learn the main statistical properties of a variety of dynamical systems: from synthetic to a real-world application, keeping (nearly) the same architecture and hyperparameters.
## 2 Related work
Since the work on dynamical systems is vastly represented in many research fields, here we restrict only to recent methodologies which inspire our present viewpoint.
**Motion imitation.** At the core of our approach is the acquisition of locomotive skills by an agent that learns to imitate motion. There has recently been increasing interest in this. Haarnoja et al. (2019) taught a robot how to walk from scratch with minimal per-task hyperparameter tuning and a modest number of trials to learn. A similar task was carried out by a drone learning how to fly to a goal marker (Becker-Ehmck et al., 2020). More agile locomotion skills have been learned by imitating (from video motion capture) animals (Peng et al., 2020), complex human acrobatics (Peng et al., 2018), basketball dribbling (Liu and Hodgins, 2018); and by simulating realistic human motion from a model of the muscle contraction dynamics (Lee et al., 2019). The above methods use agents with a fixed embodiment. Hafner et al. (2020) have proposed a learning framework for core locomotive skills that works for a wide variety of legged robots while keeping the same hyperparameter settings and reward scheme. More aligned with our domain-adaptive approach is learning from experts that are different from the agents due to a mismatch of viewpoint, morphology or dynamics (Raychaudhuri et al., 2021; Kim et al., 2020). Nevertheless, none of these methods deal with the problem of imitating center-of-mass trajectories with significant spatiotemporal extension and complex shapes, as we pursue in this paper.
**Trajectory control.** The problem of making a vehicle follow a pre-defined path in space is mainly approached in two ways: trajectory tracking (Lee and Kim, 2017), which demands tracking a timed reference signal; and path following (Rubi et al., 2020), where the time dependency is removed and only the geometry is considered. Applications are mostly found in the control of multirotor unmanned aerial vehicles; although moving object grasping is of interest to realize intelligent industrial assembly lines (Chen and Lu, 2021). Using learning-based methods, these applications include adapting the popular DDPG reinforcement learning method (Lillicrap et al., 2016) for solving the path following problem in a quadrotor with adaptive velocity and considering obstacle avoidance (Rubi et al., 2021, 2021). On the other hand, several inverse reinforcement learning approaches have been designed for tracking control (Xue et al., 2021, 2021). Our work is closer in spirit to Choi et al., 2017, where a representative trajectory is extracted from demonstrations and a reward function learned to imitate such trajectory. We train an agent to directly generate the representative trajectories (not necessarily observed in the training set) without learning a reward function though.
**Trajectory data mining.** The aim is to automatically discover interesting knowledge from trajectory datasets, which are typically generated from social, traffic, and operational dynamics (Wang et al., 2020). Traditional clustering and classification methods have served as the basis for more in depth pattern mining and anomaly detection. For pattern mining, different methods have been used for different kinds of patterns, such as periodic (Li et al., 2010, 2012), frequent (Giannotti et al., 2006, 2007), and collective patterns (Zheng, 2015). For anomaly detection, the techniques are often based on clustering methods and its extensions (Belhadi et al., 2020); although supervised learning approaches have also been considered (Meng et al., 2019). The use of reinforcement learning for anomaly detection has mainly focused on road traffic (Oh and Iyengar, 2019), for which the motion
is constrained by the road networks. In a spatially-unconstrained context, such as the maritime traffic, the trajectory shape complexity increases, encouraging the use of computer vision for trajectory classification (Kontopoulos et al., 2021), or graph methods for the detection of representative trajectories (Zygouras et al., 2021). We contribute with a novel deep reinforcement learning method capable of discovering both periodic and non-periodic patterns in spatially-unconstrained trajectory data using a single model.
We emphasize here that, although our work shares objectives with motion imitation and trajectory control, we are not interested in the design of controllers allowing safe navigation of an embodied agent (e.g. autonomous vehicle) in uncertain environments. That would require a model of the inertial properties of the vehicles and their interactions with the environments. We instead aim at a disembodied agent which learns to imitate the typical patterns in traffic so as to then be able to tell the atypical ones.
## 3 Background
In the following, we formulate the learning problem, describe how it can be reframed within the traditional imitation learning and optimal control settings, and introduce DATI in the next section as a novel method to solve the problem.
### Preliminary
The evolution \(s_{t+1}=f(s_{t})\) of complex dynamical systems from an initial state \(s_{0}\) is often hard to estimate. The difficulty lies in: 1) our ignorance of their intricate underlying mechanics, and 2) our ignorance of the nature of the noise source. We are interested in learning the joint distribution, \(p\), of trajectories \(s_{0:t}\equiv(s_{0},s_{1},\cdots,s_{t})\) of a dynamical system, provided we have knowledge of a local approximation to the underlying evolution rule. This is expressed by a deterministic decomposition of the update rule, \(\hat{s}_{t+1}=g(\hat{s}_{t},\hat{a}_{t})\), in terms of random decision variables \(\hat{a}_{t}\). Restricting our focus to Markovian processes, the joint distribution (assumed absolutely continuous and then admitting density1) can be written as
Footnote 1: We implicitly assume the support of all distributions to be divided into a grid with small cell size implied in practice by \(\varepsilon\) in definition 3.1. Notations such as \(p_{0}(s_{0})\) then refer to the probability that the initial state is in a cell containing \(s_{0}\).
\[p(s_{0:T})=p_{0}(s_{0})\prod_{t=0}^{T-1}p_{t}(s_{t+1}|s_{t}), \tag{1}\]
and the main task is learning how to sample from the _unknown_\(p_{t}(s_{t+1}|s_{t})\). Since knowledge of \(s_{t}\) and the true action \(a_{t}\) imply knowledge of \(s_{t+1}\) with complete certainty (by means of \(g\)), the task is equivalent to estimating \(p_{t}(a_{t}|s_{t})\). This is pursued by optimizing a policy (neural network) \(\pi_{\theta}^{\star}(\hat{s}_{t})\rightarrow\hat{a}_{t}\) which samples actions from the distribution \(p_{\theta}(\hat{a}_{t}|\hat{s}_{t},t)\), and from which the next state is approximated as \(\hat{s}_{t+1}=g(\hat{s}_{t},\pi_{\theta}^{\star}(\hat{s}_{t}))\equiv\pi_{ \theta}(\hat{s}_{t})\). This optimization brings \(p_{\theta}(\hat{a}_{t}|\hat{s}_{t},t)\) close to \(p_{t}(a_{t}|s_{t})\) with respect to some distance/divergence discussed later.
### Problem formulation
Given an ensemble of trajectories \(s_{0:T}\sim p\) generated by a dynamical system, the task is to find a policy \(\pi_{\theta}\) that replicates the shape of a _typical_ trajectory \(s_{0:T}^{*}\) after rolling out
\(\pi_{\theta}(\hat{s}_{t})\rightarrow\hat{s}_{t+1}\). That is, consider a partition of the time interval \([0,T]\) into \(\tau\) equally-spaced timesteps.2 Then, the predicted sequence \(\hat{s}_{0:T}^{\theta}\equiv(\hat{s}_{0},\pi_{\theta}(\hat{s}_{0}),\cdots,\pi_{ \theta}(\hat{s}_{T-1}))\) and the reference sequence \(s_{0:T}^{*}\equiv(s_{0}^{*},s_{1}^{*},\cdots,s_{T}^{*})\), having the same starting state \(\hat{s}_{0}=s_{0}^{*}\), should match their shapes at optimal \(\hat{\theta}\). For concreteness, the shape similarity is with respect to the dynamic time warping distance (Salvador and Chan, 2004) -- popular for comparing two time series not necessarily aligned in time -- so the optimal model has
Footnote 2: This assumption may be relaxed for real-world applications with irregular sampling rate of the data and varying horizons \(T\), as in section 6.
\[\hat{\theta}=\operatorname*{arg\,min}_{\theta}D_{\text{dtw}}(\hat{s}_{0:T}^{ \theta},s_{0:T}^{*}). \tag{2}\]
In practice, \(n\) reference trajectories \(s_{0:T}^{*}\) not seen during training of \(\pi_{\theta}\) are sampled from \(p\) at inference time, and policies trained with \(n\) different seeds are rolled out from the corresponding initial conditions. This leads to our notion of typicality, and hence of a learned representative trajectory:
**Definition 3.1** (Representative trajectory).: Given a small \(\varepsilon>0\), a representative trajectory \(\hat{s}_{0:T}^{\theta}\) is said to be learned by a policy \(\pi_{\theta}\) if there is a reference trajectory \(s_{0:T}^{*}\) (out of the \(n\) in the test set) for which \(D_{\text{dtw}}(\hat{s}_{0:T}^{\theta},s_{0:T}^{*})<\varepsilon\).
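As a rough illustration (not the authors' implementation; the paper cites the dynamic time warping distance of Salvador and Chan, 2004, whereas the sketch below is a naive quadratic dynamic program, and the function names are ours), the metric in Eq. (2) and the check of Definition 3.1 can be written as:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories a, b of shape
    (T, d), with Euclidean point-to-point cost (naive O(T^2) dynamic program)."""
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

def is_representative(rollout, references, eps):
    """Definition 3.1: the rollout is representative if it is within eps of
    at least one of the n reference trajectories in DTW distance."""
    return any(dtw_distance(rollout, ref) < eps for ref in references)
```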
Since we will be discussing different methods to bring \(\hat{p}_{t}(\hat{a}_{t}|\hat{s}_{t})\) close to \(p_{t}(a_{t}|s_{t})\), as mentioned in 3.1, our optimization objective in (2) is chosen as a common metric to compare these methods. In the following, we focus on a simple 2D representation of the state \(s_{t}\) (\(d=2\)), as a proof of concept. This allows us to design an intuitive scenario similar to the cheetah chasing a gazelle but which does not end after predation.
### Mouse and hidden cheese game
As a prototype for our synthetic experiments in section 5, consider a mouse (agent) chasing intermittently-hidden cheese (trajectory demonstrator). At timestep \(t\), the mouse has to decide on an action \(\hat{a}_{t}=(u_{t},\xi_{t})\), choosing its speed \(u_{t}\) and steering direction \(\xi_{t}\) so as to take it from its current position \(\hat{s}_{t}\) to the currently _unknown_ position of the cheese. The latter is only revealed at timestep \(t+1\) to be \(s_{t+1}\equiv(x_{t+1},y_{t+1})\), and the mouse is perfectly rewarded if
\[s_{t+1}=g(\hat{s}_{t},\hat{a}_{t})\equiv\hat{s}_{t}+(u_{t}\cos(\xi_{t}),u_{t} \sin(\xi_{t}))\,dt, \tag{3}\]
making it able to taste the cheese (here \(dt=T/\tau\)). The second equality in (3) defines \(\hat{s}_{t+1}\), so the cheese is tasted when \(\hat{s}_{t+1}=s_{t+1}\). Since \(\hat{s}_{t}\) is stochastically generated, it is unlikely that this takes place, unless we impose a minimal distance between \(\hat{s}_{t}\) and \(s_{t}\) (e.g., being within the same cell, as mentioned in footnote 1) within which tasting really happens. In practice, this is implied by the selection of \(\varepsilon\) in definition 3.1.
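For concreteness, a minimal sketch of the partial-knowledge update \(g\) in Eq. (3); the function and variable names are ours:

```python
import numpy as np

def g_step(s_hat, u, xi, dt):
    """Eq. (3): advance the mouse's estimated position s_hat = (x, y) given
    its chosen speed u and steering direction xi over a timestep dt = T / tau."""
    return s_hat + np.array([u * np.cos(xi), u * np.sin(xi)]) * dt
```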
The function \(g\) in this case represents an intuition about kinematics that the mouse has in advance. It is based only on partial information about the true evolution rule prescribed by \(s_{t+1}=f(s_{t})\), namely, an \(O(dt)\) approximation, with a two-dimensional decision space \(\mathcal{A}\). A different decomposition is obtained with an \(O(dt^{2})\) approximation, introducing accelerations as part of the decision variables \(\hat{a}_{t}\). Note that a different geometry of the state space may also be considered, as in Eq. (8), or even problems not defining physical motion.
The degree to which the decision variables \(\hat{a}_{t}\) (implied by a given decomposition) lead to a satisfactory replication of the shape of reference trajectories leads to the concept of a perfect decision maker:
**Definition 3.2** (Perfect policy).: Given a reference trajectory \(s^{*}_{0:T}\), a perfect policy rollout \(\pi^{*}(\hat{s}_{t})\rightarrow\hat{s}_{t+1}\) replicates this trajectory identically, by means of Eq. (3). That is, \(\hat{s}_{t}=s^{*}_{t}\), \(\forall t\), making \(D_{\mathrm{dtw}}(\hat{s}^{\theta}_{0:t},s^{*}_{0:t})=0\). This means that the agent has full knowledge of the underlying mechanics of the process, guessing the next state, \(s_{t+1}\), and taking corresponding actions \(a_{t}=(u_{t},\xi_{t})\) with speed \(u_{t}=\|s_{t+1}-s_{t}\|/dt\) and steering angle \(\xi_{t}=\tan^{-1}[(s_{t+1}-s_{t})\cdot\hat{y}\,/\,(s_{t+1}-s_{t})\cdot\hat{x}]\) in order to get there.
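The expert actions of Definition 3.2 can be recovered directly from two consecutive reference states; the sketch below (names are ours) uses arctan2 in place of the quoted arctangent so that the steering angle lands in the correct quadrant:

```python
import numpy as np

def perfect_action(s_t, s_next, dt):
    """Definition 3.2: the expert action (speed, steering angle) for which
    g(s_t, a_t) reproduces s_next exactly."""
    delta = s_next - s_t
    u = np.linalg.norm(delta) / dt
    xi = np.arctan2(delta[1], delta[0])
    return u, xi
```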
Since life is not perfect, the ever hungry and intelligent mouse will learn, after many trials (containing similarly shaped \(s_{0:T}\)), to discover the main features of the cheese trajectories. At inference, the mouse is fooled with no cheese signal, but its trajectory is collected and compared with the reference \(s^{*}_{0:T}\) to judge how representative it is. In the following, we describe different approaches that we will compare later for the solution of this problem.
### Imitation learning
Imitation learning aims to learn a policy \(\pi^{il}_{\theta}(s_{t})\rightarrow\hat{a}_{t}\) mimicking demonstrations \(\mathcal{D}=((s_{0},a_{0}),(s_{1},a_{1}),\cdots,(s_{T},a_{T}))\) from an expert whose actions \(a_{t}\sim\pi^{*}\) are collected after observation of many instances of environmental state sequences \(s_{0:T}\). Several approaches have been developed for this, see e.g. Zheng et al. (2021) for a survey. The simplest baseline, which we adopt here for benchmarking the synthetic experiments, is _Behavioral Cloning_ (BC). This learns the policy by considering the setting as a supervised regression problem over \(\mathcal{D}\)(Pomerleau, 1991; Ross and Bagnell, 2010). That is, the unknown transition distribution \(p_{t}(a_{t}|s_{t})\) of relevance for (1) is estimated as \(p_{\theta}(\hat{a}_{t}|s_{t})\) after minimizing the Kullback-Leibler divergence between the two. This amounts to a maximum likelihood optimization of \(p_{\theta}(\hat{a}_{t}|s_{t})\).
The application of this method to the present problem demands the interpretation of the environment states \(s_{t}\) as comprising the trajectories \(s_{0:T}\) to be imitated. The actions \(a_{t}\) in the demonstration set \(\mathcal{D}\) are generated by the best expert: a perfect policy \(\pi^{*}\), according to definition (3.2). At inference, when the observations \(s_{0:t}\) are removed, the predicted \(\hat{s}_{t}\)'s are used instead when rolling out the learned policy, i.e. \(\hat{s}_{t+1}=g(\hat{s}_{t},\pi^{il}_{\theta}(\hat{s}_{t}))=\pi_{\theta}(\hat{s}_{t})\).
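A minimal PyTorch-style sketch of this maximum-likelihood objective follows. The layer sizes are illustrative only (the paper's BC baseline is built with the imitation library and the architecture quoted in Sec. 5.1), and the class and method names are ours:

```python
import torch
import torch.nn as nn

class GaussianBCPolicy(nn.Module):
    """Behavioral Cloning sketch: predict the mean and diagonal std of
    p_theta(a_t | s_t) and maximize the log-likelihood of expert actions."""
    def __init__(self, state_dim=2, action_dim=2, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def nll(self, states, expert_actions):
        h = self.body(states)
        dist = torch.distributions.Normal(self.mean(h), self.log_std(h).exp())
        # Minimizing the negative log-likelihood corresponds (up to a constant)
        # to minimizing the KL divergence to the expert policy.
        return -dist.log_prob(expert_actions).sum(dim=-1).mean()
```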
### Reinforcement learning using sparse rewards
In the standard reinforcement learning framework, the agent-environment interaction is modeled as a Markov decision process \((\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma,p_{0})\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action spaces, respectively, \(\mathcal{P}(s^{\prime}|a,s)\) is the transition distribution of the environment, \(r=r(s,a)\) is the reward function, \(\gamma\in(0,1)\) is the discount factor, and \(p_{0}=p_{0}(s)\) is the initial state distribution of the environment.
As a motivation behind our method, we adapt the DDPG agent (Lillicrap et al., 2016) to the trajectory imitation problem (denoted as DDPG-TI). This still consists of two actor-critic models -- the learned \((\pi^{l}_{\theta},c^{l}_{\theta_{c}})\) and target \((\pi^{r}_{\beta},c^{r}_{\beta_{c}})\) networks, the latter slowly tracking the former during training -- the actors guided by the return \(R_{t}=\sum_{i=t}^{T}\gamma^{i-t}\,r(s_{i},a_{i})\) from a state. However, instead of having tuples \((s_{t},a_{t},r_{t},s_{t+1})\) in the replay buffer during maximization of the expected return from the starting distribution, we use \((\hat{s}_{t},\hat{a}_{t},r_{t},\hat{s}_{t+1},t)\), where \(t\) is
processed by a scale-invariant embedding of time (Kazemi et al., 2019) which is shared by the networks. The reward signal here is \(r_{t}=\pm 1\) according to whether or not \(\tilde{D}_{\text{dtw}}(\hat{s}_{0:t}^{\theta},s_{0:t})<\varepsilon\), where the tilde denotes an exponentially-smoothed and normalized dynamic time warping distance. This is inspired by the constant rewards used by Reddy et al. (2020).
In this formulation, the agent does not make decisions \(\hat{a}_{t}\) during training based on the current state \(s_{t}\), but rather on its current prediction of that state \(\hat{s}_{t}\). This change is necessary to avoid the DDPG-TI agent taking the same actions regardless of the current state, an observation from early experimentation which motivated the introduction of the embedding of time. At inference, the learned policy leads to the predictions \(\hat{s}_{t+1}=g(\hat{s}_{t},\pi_{\theta}^{l}(\hat{s}_{t},t)+\eta_{t})=\pi_{ \theta}(\hat{s}_{t})\), where \(\eta_{t}\) is the noise used for exploration of the environment (originally taken as an Ornstein-Uhlenbeck process by Lillicrap et al., 2016).
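One possible reading of this reward signal, written as code (the smoothing factor of 0.9 comes from Appendix A.2; the exact form of the exponential smoothing and the function name are our assumptions):

```python
def sparse_reward(d_dtw, prev_smoothed, eps, alpha=0.9, diameter=1.0):
    """r_t = +1 if the exponentially-smoothed, normalized DTW distance between
    the rollout prefix and the reference prefix is below eps, else -1.
    `d_dtw` is the current prefix DTW distance and `diameter` the normalization
    constant D^>_dtw between the boundary trajectories of the family."""
    smoothed = alpha * prev_smoothed + (1.0 - alpha) * (d_dtw / diameter)
    return (1.0 if smoothed < eps else -1.0), smoothed
```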
## 4 Domain-adaptive trajectory imitation
Since we are interested in learning the distribution of the trajectories to be imitated, adding the noise \(\eta_{t}\) to the output of the actor entails a fake stochasticity in the results. So, building from DDPG-TI, the first observation is to change the actor \(\pi_{\theta}^{l}(\hat{s}_{t},t)\rightarrow\hat{a}_{t}+\eta_{t}\) to the actor \(\pi_{\theta}^{\intercal}(\hat{s}_{t},\eta_{t},t)\rightarrow\hat{a}_{t}\), taking us to the realm of generative models; in particular, the adversarial generative models (GANs) which recover the data distribution (Goodfellow, 2017).
With the above observation in mind, the main idea behind DATI may be informally stated as considering the imitation of \(s_{0:t}\) by \(\hat{s}_{0:t}\) as a _style_ transfer problem between the domains spanned by these set of trajectories. Cycle-consistent generative adversarial networks (CycleGAN) (Zhu et al., 2017) have shown significant success in the style transfer task for unpaired image-to-image translation, so we extend them here to the domain-adaptive trajectory imitation problem, adding a novel reinforcement signal.
As with DDPG-TI, we have two actor-critic networks: \((\pi_{\theta}^{\intercal},c_{\theta_{c}}^{\intercal})\) and \((\pi_{\beta}^{\intercal},c_{\beta_{c}}^{\intercal})\). However, each pair is now trained adversarially using the Wasserstein loss (Arjovsky et al., 2017). Specifically, for the first pair, we take an actor \(\pi_{\theta}^{\intercal}(\hat{s}_{t},\eta_{t},t)\rightarrow\hat{a}_{t}\) which learns how to sample from the distribution \(p_{\theta}(\hat{a}_{t}|\hat{s}_{t},t)\) (sometimes just denoted \(\hat{p}_{t}(\hat{a}_{t}|\hat{s}_{t})\)) by accessing the noise prior \(p_{\eta_{t}}\). This is accomplished by valuing the chosen decisions with a critic \(c_{\theta_{c}}^{\intercal}(\hat{a}_{t},\tau_{t},t)\) -- lying in the space of 1-Lipschitz functions3 (denoted \(\|c_{\theta_{c}}^{\intercal}\|_{L}\leq 1\)), and \(r_{t}\) being the sparse rewards of section 3.5 -- and both networks optimized as
Footnote 3: In our experiments, this condition is kept by adopting the method from Gulrajani et al. (2017).
\[\max_{\|c_{\theta_{c}}^{\intercal}\|_{L}\leq 1} \operatorname*{\mathbb{E}}_{a_{t}\sim p_{\eta_{t}}(a_{t}|s_{t})}[c _{\theta_{c}}^{\intercal}(a_{t})]-\operatorname*{\mathbb{E}}_{\eta_{t}\sim p _{\eta_{t}}}[c_{\theta_{c}}^{\intercal}(\pi_{\theta}^{\intercal}(\eta_{t}))], \tag{4}\] \[\max_{\pi_{\theta}^{\intercal}} \operatorname*{\mathbb{E}}_{\eta_{t}\sim p_{\eta_{t}}}c_{\theta_{c}}^{ \intercal}(\pi_{\theta}^{\intercal}(\eta_{t})),\]
where we have omitted some arguments of the functions for simplicity. That is, the actor is trained to maximize the value that the critic assigns to its decisions (bottom of (4)), whereas the critic is trained to separate this value from the value of perfect decisions (top of (4)). These perfect decisions, \(a_{t}\), are those from a perfect policy \(\pi^{*}\) according to definition 3.2.
This optimization procedure guarantees -- under plausible continuity assumptions for the actor -- that the distribution \(p_{\theta}(\hat{a}_{t}|\hat{s}_{t},t)\) converges to the true distribution \(p_{t}(a_{t}|s_{t})\) with respect to the Wasserstein (a.k.a Earth Mover) distance (Arjovsky et al., 2017). As
mentioned, knowing the true \(a_{t}\) reproduces the next state as \(s_{t+1}=g(s_{t},a_{t})\), so the \(p_{t}(a_{t}|s_{t})\) estimated by DATI through \(p_{\theta}(\hat{a}_{t}|\hat{s}_{t},t)\) basically leads to an approximation of the \(p_{t}(s_{t+1}|s_{t})\) that are of interest in (1).
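Written as PyTorch-style losses to be minimized, Eq. (4) takes roughly the following form; the exact signatures of the critic and actor (which also receive the rewards and the time embedding, per Sec. 5.1) are our assumptions, and the 1-Lipschitz constraint (enforced via gradient penalty, Appendix A.2) is omitted for brevity:

```python
def critic_loss(critic, a_expert, a_fake, r_t, t_emb):
    # Top of Eq. (4), negated: raise the critic's value of perfect decisions
    # above its value of the actor's decisions.
    return -(critic(a_expert, r_t, t_emb).mean() - critic(a_fake, r_t, t_emb).mean())

def actor_loss(critic, actor, s_hat, eta, r_t, t_emb):
    # Bottom of Eq. (4), negated: the actor maximizes the critic's value
    # of its own decisions.
    return -critic(actor(s_hat, eta, t_emb), r_t, t_emb).mean()
```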
The networks in the second pair, \((\pi_{\beta}^{\star},c_{\beta_{c}}^{\star})\), have the same architecture as their respective networks in the first pair. However, the actor \(\pi_{\beta}^{\star}(\hat{a}_{t},\eta_{t},t)\to\hat{s}_{t}^{\star}\) is concurrently trained to _undo_ the action of \(\pi_{\theta}^{\star}(\hat{s}_{t},\eta_{t},t)\to\hat{a}_{t}\), i.e. by reconstructing the \(\hat{s}_{t}\) that is targeted by \(\hat{s}_{t}^{\star}\). This "backward" actor, \(\pi_{\beta}^{\star}\), is valued by a critic \(c_{\beta_{c}}^{\star}(\hat{s}_{t}^{\star},r_{t},t)\), both trained similar to (4)
\[\begin{array}{c}\max_{\|c_{\beta_{c}}^{\star}\|_{L}\leq 1}\;\underset{ \hat{s}_{t}\sim\hat{p}_{t}(\hat{s}_{t}|\hat{a}_{t})}{\mathbb{E}}[c_{\beta_{c}} ^{\star}(\hat{s}_{t})]-\underset{\eta_{t}\sim p_{\eta_{t}}}{\mathbb{E}}[c_{ \beta_{c}}^{\star}(\pi_{\beta}^{\star}(\eta_{t}))],\\ \max_{\pi\overline{\hat{s}}}\;\underset{\eta_{t}\sim p_{\eta_{t}}}{\mathbb{E}} c_{\beta_{c}}^{\star}(\pi_{\beta}^{\star}(\eta_{t})),\end{array} \tag{5}\]
where samples from \(\hat{p}_{t}(\hat{s}_{t}|\hat{a}_{t})\) are obtained by rolling out the "forward" policy \(\pi_{\theta}^{\star}\). This requires further explanation, since we have said in the previous paragraph that \(\pi_{\theta}^{\star}\) samples from the distribution \(\hat{p}_{t}(\hat{a}_{t}|\hat{s}_{t})\), not the posterior \(\hat{p}_{t}(\hat{s}_{t}|\hat{a}_{t})\). The question is: if \(\pi_{\theta}^{\star}(\cdot,\eta_{t},t)\) maps \(\hat{s}_{t}\) deterministically into \(\hat{a}_{t}\), is the value of \(\hat{s}_{t}\) implied by having only knowledge of \(\hat{a}_{t}\) (and, of course, of \(\eta_{t},t\))? The answer is yes, provided that \(\pi_{\theta}^{\star}(\cdot,\eta_{t},t)\) is a _bijective_ mapping. Therefore, whenever a value of \(\hat{a}_{t}\) is fetched from the replay buffer -- which collects the tuples \((\hat{s}_{t},\hat{a}_{t},r_{t},\eta_{t},t)\) -- there is only one possible value of \(\hat{s}_{t}\) which produced that value of \(\hat{a}_{t}\) using \(\pi_{\theta}^{\star}\), namely, the one recorded in the same tuple.
The bijective nature of the actors \(\pi_{\theta}^{\star}\) and \(\pi_{\beta}^{\star}\) is enforced using _cycle consistency_. That is, by making both \(\pi_{\beta}^{\star}\circ\pi_{\theta}^{\star}\) and \(\pi_{\theta}^{\star}\circ\pi_{\beta}^{\star}\) approximate the identity mapping:
\[\begin{array}{c}\min_{\pi\overline{\theta}^{\star},\pi\overline{\hat{s}}}\; \underset{\hat{a}_{t}\sim\hat{p}_{t}(\hat{a}_{t}|\hat{s}_{t})}{\mathbb{E}}\| \pi_{\theta}^{\star}(\pi_{\beta}^{\star}(\hat{a}_{t}))-\hat{a}_{t}\|_{1},\\ \min_{\pi\overline{\theta}^{\star},\pi_{\beta}^{\star}}\;\underset{\hat{s}_{t} \sim\hat{p}_{t}(\hat{s}_{t}|\hat{a}_{t})}{\mathbb{E}}\|\pi_{\beta}^{\star}( \pi_{\theta}^{\star}(\hat{s}_{t}))-\hat{s}_{t}\|_{1}.\end{array} \tag{6}\]
Note that the bottom of (6) is a statement of minimization of the reconstruction error \(\|\hat{s}_{t}^{\star}-\hat{s}_{t}\|_{1}\): the output of \(\pi_{\beta}^{\star}\) is supervised by confronting it against the ground truth \(\hat{s}_{t}\). To achieve a comparable supervision for \(\pi_{\theta}^{\star}\), we enforce its output \(\hat{a}_{t}\) to approximate the ground truth \(a_{t}\) via \(L_{1}\) penalty:
\[\min_{\pi_{\theta}^{\star}}\;\underset{\hat{s}_{t}\sim\hat{p}_{t}(\hat{s}_{t} |\hat{a}_{t}),\;a_{t}\sim p_{t}(a_{t}|s_{t})}{\mathbb{E}}\|\pi_{\theta}^{\star}( \hat{s}_{t})-a_{t}\|_{1}, \tag{7}\]
with the real \(a_{t}\)'s being obtained, as before, from a perfect policy \(\pi^{\star}\) according to definition 3.2. At inference, the "forward" actor is used for the trajectory predictions according to \(\hat{s}_{t+1}=g(\hat{s}_{t},\pi_{\theta}^{\star}(\hat{s}_{t},\eta_{t},t))=\pi_{ \theta}(\hat{s}_{t})\)
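The L1 terms of Eqs. (6)-(7) admit an equally compact sketch (again with assumed signatures for the two actors; Appendix A.2 weights these terms by a coefficient of 10 in the total loss):

```python
import torch.nn.functional as F

def dati_l1_terms(pi_f, pi_b, s_hat, a_hat, a_expert, eta, t_emb):
    """Cycle consistency in both directions (Eq. 6) plus direct supervision
    of the forward actor against the perfect actions (Eq. 7)."""
    cyc_a = F.l1_loss(pi_f(pi_b(a_hat, eta, t_emb), eta, t_emb), a_hat)
    cyc_s = F.l1_loss(pi_b(pi_f(s_hat, eta, t_emb), eta, t_emb), s_hat)
    sup_a = F.l1_loss(pi_f(s_hat, eta, t_emb), a_expert)
    return cyc_a + cyc_s + sup_a
```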
Finally, the motivation for using the scale-invariant embedding of time by Kazemi et al. (2019) in our method is to obtain a positional encoding of the time series representing the trajectories which is able to capture both periodic and non-periodic patterns in the data, regardless of whether we use the index \(i\) of the time \(t=i\,dt\) of each event. This will prove to be beneficial for our method from the ablation studies in section 5.2.
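The embedding of Kazemi et al. (2019) combines one linear component with sinusoidal components, which is what allows both non-periodic trends and periodic patterns to be captured. A sketch follows; the dimension of 76 is the value quoted in Sec. 5.1, while the class name and the single-layer simplification are ours:

```python
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    """Time2Vec-style embedding: the first output is linear in t, the rest
    are sinusoidal, with learnable frequencies and phases."""
    def __init__(self, dim=76):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim))
        self.b = nn.Parameter(torch.randn(dim))

    def forward(self, t):          # t: tensor of shape (batch, 1)
        z = t * self.w + self.b    # (batch, dim)
        return torch.cat([z[..., :1], torch.sin(z[..., 1:])], dim=-1)
```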
## 5 Synthetic experiments
The synthetic experiments are carried out on a variety of families of solutions to the mechanics of hypothetical dynamical systems. These are characterized by the initial state distribution \(p_{0}(s_{0})\) in (1), and a prescription to generate the subsequent states \(s_{1:t}\) of the trajectory deterministically, given \(s_{0}\). The corresponding ensembles are denoted by \(\mathcal{F}_{\boldsymbol{\alpha}}\): comprising a family of trajectories with shape parameters \(\boldsymbol{\alpha}=\{\alpha,\cdots\}\) -- all fixed except \(\alpha\), which is sampled uniformly within given intervals for each episode.
We provide an OpenAI Gym environment (Brockman et al., 2016) supporting (arbitrary) user-defined ensembles, \(\mathcal{F}_{\boldsymbol{\alpha}}\), serving as demonstrations \(s_{0:T}\) of the cheese signals for the mouse. We focus on four families defined in table 1 and visualized in Fig. 1:
1) _FixedStart_ provides trajectories with the same starting point, having an inflection point near the beginning of the journey and then possibly confusing gradient descent methods aiming only at learning the mapping \(x\to y\) (geometry without the time component).
2) _UShaped_ provides trajectories with a reflection symmetry about a vertical axis passing through their lowest points, and having varying starting and ending features.
3) _Circles_ provide the simplest expression of a periodic pattern in the trajectories. Nevertheless, they are complex enough, since the agents must keep their speed constant.
Figure 1: Family of trajectories \(\mathcal{F}_{\boldsymbol{\alpha}}\) considered for the domain-adaptive imitation task (from top left to bottom right: FixedStart, UShaped, Circles and Ribbons). These may be thought of as solutions to abstract dynamical systems with a two-dimensional state representation.
4) _Ribbons_ provide trajectories to test how the networks disentangle space and time: by presenting a point in space that is visited twice with different headings.
### Experimental setup and results
The experiments are carried out targeting robustness of the models to the variation of the learning tasks. So we keep the same hyperparameters for all the families of trajectories. Additionally, we impose a limit on the number of transitions for the models to learn the requested task. That is, we sample 100 episodes (a trajectory from \(\mathcal{F}_{\boldsymbol{\alpha}}\) per episode), each with \(\tau=200\) timesteps. Code is provided as supplementary material for reproducibility.
**Architectures**. The actor-critic models for DATI are multilayer perceptrons with feature extractors having 16 units, time embedding dimension of 76, and 4 hidden layers - before the network outputs - having 32 units each. The critic networks process the rewards \(r_{t}\in\{-1,1\}\) by tiling the input to have the dimensionality of the sum of all other extracted-feature dimensions, passing this to a dense layer with non-negative kernel constraints and concatenating the output with such other extracted features before entering the 4 hidden layers previous to the (elu-activated) network output. The novel idea behind this is that it propagates through the critic networks the signal of having low output for negative rewards and high output for positive rewards. Finally, the latent dimension of \(\eta_{t}\) for the actors is 64.
DDPG-TI has similar actor-critic architectures, but since the actors do not include a latent dimension, an increase of the feature extractors to 80 units is needed for their 4 hidden units before the output to observe the same input dimensionality as DATI. On the other hand, BC is trained using the imitation library (Wang et al., 2020b). In order to get actor architectures comparable to DDPG-TI and DATI, feature extractors with 156 units are used (making up the time embedding + latent dimensions + 16), followed by 4 hidden layers of 32 units. For the maximum likelihood estimation of \(p_{\theta}(\hat{a}_{t}|s_{t})\), the latent features before the output of \(\pi_{\theta}^{il}\) are linearly transformed to define the mean and diagonal covariance of a Gaussian distribution \(p_{\theta}(\hat{a}_{t}|s_{t})\) network. Hyperparameter selection for the different methods may be found in appendix A.2.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Family \(\mathcal{F}_{\boldsymbol{\alpha}}\)} & \multicolumn{2}{c}{\(s_{t}=(x_{t},y_{t})\)} \\ & \(x_{t}\) & \(y_{t}\) \\ \hline FixedStart & \(\sqrt{\alpha t}\) & \(\cos(\omega t)\,e^{-\kappa t}\) \\ UShaped & \(\omega t\) & \(\cos(\omega t)-\alpha\cos(2\omega t)/2\) \\ Circles & \(\alpha\cos(\omega t)\) & \(\alpha\sin(\omega t)\) \\ Ribbons & \([R_{1}-R_{2}\cos(\omega t/4)]\cos(\omega t+\alpha)\) & \([R_{1}-R_{2}\cos(\omega t/4)]\sin(\omega t+\alpha)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Defining equations for the ensembles of trajectories \(\mathcal{F}_{\boldsymbol{\alpha}}\) used for benchmarking models. For the fixed values of the shape parameters complementary to \(\alpha\), see appendix A.1.
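As an example of how an episode is generated from Table 1 together with the parameter ranges of Appendix A.1, a sketch for the Circles family (tau = 200 timesteps per episode, as in Sec. 5.1; the function name is ours) might read:

```python
import numpy as np

def circles_episode(tau=200, omega=0.4, rng=None):
    """One reference trajectory from the Circles family: alpha ~ U[0.5, 1.0]
    per episode, omega = 0.4 fixed, horizon T = 2*pi/omega split into tau steps."""
    rng = rng or np.random.default_rng()
    alpha = rng.uniform(0.5, 1.0)
    t = np.linspace(0.0, 2 * np.pi / omega, tau)
    return np.stack([alpha * np.cos(omega * t), alpha * np.sin(omega * t)], axis=1)
```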
**Results**. The models are trained with 10 seeds and evaluated with respect to 10 random reference trajectories \(s^{*}_{t}\) not seen in the training set. We compare the performance by reporting the best exponentially smoothed and normalized dynamic time warping distance \(\tilde{D}_{\text{dtw}}(\hat{s}^{\theta}_{0:T},s^{*}_{0:T})\) over the 10 test trials. The normalization constant is the dynamic time warping "diameter" \(D^{>}_{\text{dtw}}\), defined as the distance between the blue and green boundary trajectories in Fig. 1. The results are shown in table 2, with a visualization of the shapes of the best trajectories attained at inference in Fig. 2.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & FixedStart & UShaped & Circles & Ribbons \\ \hline DDPG-TI & 0.489 & 2.138 & 0.972 & 0.172 \\ BC & 0.124 & 0.365 & 0.106 & 0.113 \\
**DATI** & **0.065** & **0.231** & **0.058** & **0.033** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Lowest \(\tilde{D}_{\text{dtw}}(\hat{s}^{\theta}_{0:T},s^{*}_{0:T})\) over 10 test trials.
Clearly, DATI is able to generate trajectories that look closer to the ones in the families \(\mathcal{F}_{\boldsymbol{\alpha}}\). Note that DATI and DDPG-TI were both trained using the reward signal \(r_{t}=\pm 1\) according to whether or not \(\tilde{D}_{\mathrm{dtw}}(\hat{s}_{0:t}^{\theta},s_{0:t})<\varepsilon\). For the experiments, the \(\varepsilon\) in definition 3.1 is taken as 10% of the dynamic time warping diameter \(D_{\mathrm{dtw}}^{>}\). So only DATI is able to learn representative trajectories in most of the cases (numbers below 0.1 in table 2). We now further investigate the importance of some design choices in the actor-critic models in DATI.
### Ablation study
We take the family of Circles trajectories and apply the following transformations -- all other things being equal -- to the architecture of the actor-critic models in DATI:
* _No time embedding_: we remove the notion of time exogenously imposed on the networks.
* _No reward reinforcement_: we do not let the critics know about the goal of minimizing the dynamic time warping distance between actor rollouts and demonstrations.
In order to get sensible statistical results (Agarwal et al., 2021) of the effect of these changes, it suffices to monitor the best and second best \(\tilde{D}_{\mathrm{dtw}}(\hat{s}_{0:T}^{\theta},s_{0:T}^{*})\) over the 10 test trials. These are shown in table 3. We observe a significant negative impact on the performance when the time embedding is removed. Similarly, removing the reward signal has appreciable negative effects. These are therefore essential ingredients in the design of DATI.
The conclusion from these experiments is that DATI is a successful method for learning representative trajectories, being robust to changes in their geometries -- i.e. keeping the same architecture and hyperparameters it can represent a rich set of spatiotemporal phenomena. In order to further test this, we evaluate its generalization to a real world scenario, taking maritime traffic as an example.
## 6 Real-world experiments
We consider vessel traffic between the surroundings of Miami and the entrance to the gulf of Mexico in 2015 (UTM zone 17). The dataset for this is publicly available, 4 the variables of interest being the longitude (\(\lambda\)), the latitude (\(\varphi\)), the speed over ground (SOG), the course over ground (COG), and the timestamp of every AIS signal reported by vessels moving with SOG \(>\) 3 knots. We filter out some of the trajectories going to or coming from the
\begin{table}
\begin{tabular}{c c c} \hline \hline Transformation & \multicolumn{2}{c}{\(\tilde{D}_{\mathrm{dtw}}(\hat{s}_{0:T}^{\theta},s_{0:T}^{*})\)} \\ & top-1 & top-2 \\ \hline Original setup & **0.058** & **0.074** \\ No time embedding & 1.829 & 1.976 \\ No reward reinforcement & 0.305 & 0.451 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Changes in the metric used in table 2 under ablation of some layers in the actor-critic models in DATI. The experiments are carried out for the family of Circles trajectories.
eastern Greater Antilles as well as to the north of the Florida Keys. From the remainder, we keep trajectories which have at least 900 timesteps, leaving a dataset of about 3.2M records extracted from a totality of about 36.5M records. This is clustered into 3 categories: trajectories always going _up_ (\(-90^{\circ}\leq\text{COG}\leq 90^{\circ}\)), trajectories always going _down_, and the rest of the trajectories in _other_. These are partially shown in Fig. 3, after removal of outliers (Young, 2017) and segmentation at stop points (Bay, 2017) of more than an hour. They fit inside a region of interest (ROI) whose boundary is clearly appreciable from Fig. 3.
### Partial knowledge about the update rule
The motion of vessels is spatially-unconstrained, giving rise to maneuvers that are not seen on road traffic. Moreover, it occurs on a curved geometry and the reported \(dt\) is stochastic (Last et al., 2014). Nevertheless, it is our general goal to include a partial knowledge of the update rule into the models. This is obtained by approximating the motion on the surface of the Earth as (see appendix A.3 for details)
\[\begin{split}\varphi_{t+1}&=\varphi_{t}+\tfrac{1} {60}\cos(\text{COG}_{t})\,\text{SOG}_{t}\,dt,\\ \lambda_{t+1}&=\lambda_{t}+\tfrac{1}{60}\sin(\text{ COG}_{t})\,\text{SOG}_{t}\,dt\,/\cos(\varphi_{t}),\end{split} \tag{8}\]
provided \(dt\) (measured in hours) is small enough. Defining the state as \(s_{t}=(\lambda_{t},\varphi_{t})\) and the actions targeted by the agent as \(a_{t}=(\text{SOG}_{t},\text{COG}_{t},dt)\), the equations in (8) define the functions \(g_{\lambda}\) and \(g_{\varphi}\) representing the partial knowledge that the agent has about the evolution of the state to be imitated. Compared to the synthetic experiments, here the agent has the extra task to learn the distribution of \(dt\) for the next AIS record of a vessel to arrive.
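A direct transcription of Eq. (8) as code (the function name and the degree/radian handling are ours; latitude and longitude are in degrees, SOG in knots, and dt in hours):

```python
import numpy as np

def ais_step(lam, phi, sog, cog_deg, dt_hours):
    """Eq. (8): local-flat update of (longitude, latitude) in degrees; the
    factor 1/60 converts nautical miles (SOG * dt) into degrees of arc."""
    cog = np.deg2rad(cog_deg)
    phi_next = phi + (1.0 / 60.0) * np.cos(cog) * sog * dt_hours
    lam_next = lam + (1.0 / 60.0) * np.sin(cog) * sog * dt_hours / np.cos(np.deg2rad(phi))
    return lam_next, phi_next
```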
### Detection of abnormal motion patterns
We train DATI with the same architecture and hyperparameters as in the synthetic experiments (except that \(\pi_{\theta}^{\star}\) now has three instead of two outputs). To define the train and test sets, we notice that the cluster of _up_ trajectories has 170 in total, the cluster of _down_ trajectories has 1973 in total, and the cluster of _other_ trajectories has 887 in total. By definition, the
Figure 3: Vessel traffic of interest between the Miami surroundings (north east) and the entrance to the gulf of Mexico (south west) in 2015. (a) trajectories always going up (b) trajectories always going down (c) rest of the trajectories.
cluster _other_ has trajectories which either go up or down but not monotonically, therefore having room for exotic vessel maneuvers. To discover abnormal motion, a subset of this cluster is then used as the test set.
In order to have training conditions as in the synthetic experiments, we target a similar number of training episodes. For this reason, we take the whole cluster of _up_ trajectories, giving 170 episodes, and downsample the cluster of _down_ trajectories to also have 170 trajectories (the same is done for testing on the _other_ cluster). These are the numbers shown in Fig. 3. Furthermore, to avoid the encoding of _up_ and _down_ as an extra feature of the state space, two DATI instances are trained, one for each cluster type.
The (exponentially-smoothed) dynamic time warping distance \(D_{\mathrm{dtw}}(\hat{s}^{\theta}_{0:T},s^{*}_{0:T})\) is now measured with respect to the great-circle distance on the surface of the Earth (unlike the Euclidean distance used for the synthetic experiments). Since the trajectories chosen for training comprise more than 300 km in length, we choose a relatively small \(\varepsilon\!=\!500\) m to reward DATI with \(r_{t}=\pm 1\) according to whether \(D_{\mathrm{dtw}}(\hat{s}^{\theta}_{0:t},s_{0:t})<\varepsilon\) or not.
**Probing the generated distribution**. To make sense of how DATI conceives the data distribution, it is helpful to think of the main features that make up a vessel trajectory. Since there is an incentive to take the safest and shortest navigable route, trajectories may display many straight-line segments. We adopt a threshold for a significant change of course to be \(\Delta\mathrm{COG}=10^{\circ}\), which we call a _kink_. Fig. 4 shows the amount of kinks in the train sets of both clusters. It is observed that the original cluster _down_ is highly skewed toward trajectories with few kinks. The distribution is multimodal, with only 7% of the cluster not containing kinks, and 63% of the trajectories having from 1 to 4 kinks, in the proportion of 12%, 19%, 18%, and 13%, respectively. In contrast, 49% of the cluster _up_ does not feature any kink, which means that the trajectories in this cluster are often smooth. It is then expected that these observations are reflected in the inductive biases of the learned models.
Figure 4: Number of course changes with \(\Delta\mathrm{COG}>10^{\circ}\) (kinks) per track in the cluster _up_ and in a uniformly sampled subset of 170 trajectories from the cluster _down_.
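A sketch of the kink count used in Fig. 4 (the wrap-around of course differences to the interval [-180, 180] degrees is our assumption):

```python
import numpy as np

def count_kinks(cog_deg, threshold=10.0):
    """Number of significant course changes along a track: successive COG
    differences whose magnitude exceeds the 10-degree threshold."""
    d = np.diff(np.asarray(cog_deg))
    d = (d + 180.0) % 360.0 - 180.0   # shortest signed angular difference
    return int(np.sum(np.abs(d) > threshold))
```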
We roll out the learned policies starting from 170 random points along the vertical boundary at the entrance to the gulf of Mexico for the cluster _up_ and along the horizontal boundary near Miami for the cluster _down_. The results are shown in Fig. 5(a)-(b). As expected, DATI learns to smoothly generate trajectories always going up, with shapes resembling the reference trajectories seen during training. It does so even in starting regions not observed during training, as can be seen by comparing Figs. 3(a) and 5(a). On the other hand, the model trained with the downsampled cluster of _down_ trajectories learns to generate mainly 2 kinks from the most populated mode -- this is different from the mode collapse sometimes encountered in GANs (Durall et al., 2021). However, in the attempt to fit the other types of trajectories, the end result does not resemble the shape of the trajectories observed during training, as seen by comparing Figs. 3(b) and 5(b). A solution for this may be found in downsampling this cluster not uniformly but by extracting many more smooth trajectories than ones with kinks.
An immediate observation from the generated trajectories is a sticking effect when the actors hit the boundaries of the ROI. This (i.e. \(\hat{s}_{t+1}\to\hat{s}_{t}\) if \(\hat{s}_{t+1}\notin\,\text{ROI}\)) happens until the agent decides to continue exploring the interior of the ROI. It is intentionally implemented as termination condition of the episodes in all the experiments of this paper. This ensures the same number of timesteps for the generated and reference tracks. By having a lower (big) bound on the number of timesteps (\(>900\)) per episode, the bias from having variable horizon environments (Kostrikov et al., 2019) is then alleviated.
The state space of DATI may easily be enlarged to include more information such as destination of the trajectories -- fixed in this work by the nature of the dataset. With this in mind, it could be used as a method of pathfinding for ocean voyages, given enough reference trajectories between source and destination. The advantage over modern methods which optimize for the shortest route (Rospotniuk and Small, 2022) is that DATI is data-driven, and therefore can learn (as part of the distribution) highly dynamic shipping patterns often encountered in reality, which may deviate from the shortest route, due to many varying external factors (Zygouras et al., 2021).
Figure 5: Trajectories generated by DATI after randomly sampling 170 initial states: (a) using the model trained with the cluster _up_ (b) using the model trained with the cluster _down_. (c) Performance on the test set of trajectories that go up in the cluster _other_, showing corresponding anomalies identified by DATI (in red).
**From normal to abnormal motion patterns**. We leave for a different paper how to deal with multimodal distributions of trajectories. For the proof of concept of this work, we then restrict ourselves to the analysis of abnormal patterns in the 143 trajectories \(s^{*}_{0:T}\) belonging to the cluster _other_, which start near the entrance to the gulf of Mexico. DATI is run for each corresponding initial state, and \(D_{\text{dtw}}(\hat{s}^{\theta}_{0:T},s^{*}_{0:T})\) is calculated and normalized with respect to the maximum value. We then ask for a threshold \(\Lambda\) for which about 10% of the trajectories are tagged as abnormal by setting \(D_{\text{dtw}}(\hat{s}^{\theta}_{0:T},s^{*}_{0:T})>\Lambda\). This is obtained to be \(\Lambda=0.75\), and the corresponding trajectories are shown in red in Fig. 5(c). Independently, about 10% of the most salient anomalies in the test set are manually annotated and confronted with the predictions by DATI, resulting in a weighted F1-score of 0.78. This is very promising, given that the DATI architecture was chosen to optimize the synthetic experiments.
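One way to implement this thresholding is sketched below; the quantile-based choice of Lambda is our reading of "about 10% of the trajectories are tagged as abnormal" (the paper reports Lambda = 0.75 for its test set), and the function name is ours:

```python
import numpy as np

def tag_anomalies(dtw_scores, abnormal_fraction=0.10):
    """Normalize per-trajectory DTW scores by their maximum and tag roughly the
    top `abnormal_fraction` of them as abnormal via a threshold Lambda."""
    scores = np.asarray(dtw_scores, dtype=float)
    scores = scores / scores.max()
    lam = np.quantile(scores, 1.0 - abnormal_fraction)
    return scores > lam, lam
```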
## 7 Conclusion and outlook
We have presented a novel method to learn representative trajectories of dynamical systems for which partial information of their update rule is known. The method does not make any assumption regarding the state representation, making it very appealing for knowledge discovery across a wide range of applications. We have demonstrated this by learning to generate representative trajectories in maritime traffic (with corresponding anomaly detection) with the same model architecture and hyperparameters with which we benchmarked the performance on synthetic datasets. Future research avenues within the trajectory data mining field include (but are not limited to) the detection of abnormal motion in more complex maritime scenarios with multi-modal distributions, in road and air traffic, pedestrian dynamics, etc. By providing a reinforcement learning environment capable of representing any family of trajectories, we encourage the research community to use a standard benchmark for trajectory imitation tasks.
## Appendix A
### A.1 Shape parameters of synthetic families
For all families of trajectories we take \(T=2\pi/\omega\). The parameter set for each family is as follows. _FixedStart_ has \(\boldsymbol{\alpha}=\{\alpha,\omega,\eta\}\) for which \(\omega=\eta=0.9\) are fixed and \(\alpha\) is sampled uniformly in \([5,10]\) for each episode. _UShaped_ has \(\boldsymbol{\alpha}=\{\alpha,\omega\}\) for which \(\omega=0.9\) is fixed and \(\alpha\) is sampled uniformly in \([0.2,0.8]\) for each episode. _Circles_ has \(\boldsymbol{\alpha}=\{\alpha,\omega\}\) for which \(\omega=0.4\) is fixed and \(\alpha\) is sampled uniformly in \([0.5,1.0]\) for each episode. _Ribbons_ has \(\boldsymbol{\alpha}=\{\alpha,\omega,R_{1},R_{2}\}\) for which \(\omega=0.4\), \(R_{1}=1\), \(R_{2}=2\) are fixed and \(\alpha\) is sampled uniformly in \([-\pi,\pi]\) for each episode.
### A.2 Hyperparameters for the synthetic experiments
DATI updates the critic networks 5 times before updating the actor networks per training batch. The learning rate for the actors is \(10^{-4}\), and for the critics \(10^{-5}\). They are optimized using Adam with \(\beta_{1}=0.5\) and \(\beta_{2}=0.9\). All \(L_{1}\) losses are optimized with a learning rate of \(10^{-3}\) and weighted (with respect to the total loss) with a coefficient of 10. The 1-Lipschitz condition for the critics is achieved by gradient penalty (Gulrajani et al., 2017) with \(\lambda=10\).
Finally, the noise \(\eta_{t}\) is chosen as the best between Ornstein-Uhlenbeck or Gaussian (with \(\mu=0\) and \(\sigma=0.3\)). DDPG-TI optimizes the actor and critic networks using Adam with learning rates \(10^{-4}\) and \(2\times 10^{-4}\), respectively. The discount factor is \(\gamma=0.9\) and the rate of update of the target networks is \(10^{-3}\). BC optimizes the networks using Adam with a learning rate \(10^{-3}\). The maximum likelihood procedure searches for a distribution with maximum entropy, the latter condition weighted with a coefficient of \(10^{-3}\). Exponential smoothing of \(D_{\mathrm{dtw}}\) is done with a smoothing factor of \(0.9\).
### A.3 Update equation for vessel motion
Given a path of length \(d\) on the surface of the Earth, connecting the points with geographical coordinates \((\lambda_{1},\varphi_{1})\) and \((\lambda_{2},\varphi_{2})\) -- with \((\lambda,\varphi)=(\mathrm{longitude},\mathrm{latitude})\) -- the angle \(\theta\) subtended by the path is related to the Earth radius as \(\theta=d/R\). The haversine of \(\theta\) is defined as \(\mathrm{hav}(\theta)\equiv\sin^{2}(\theta/2)\) and obeys
\[\mathrm{hav}(\theta)=\mathrm{hav}(\varphi_{2}-\varphi_{1})+\cos(\varphi_{1}) \cos(\varphi_{2})\,\mathrm{hav}(\lambda_{2}-\lambda_{1}). \tag{9}\]
For small travelled distances, \(d/R\ll 1\), \(\Delta\varphi\equiv\varphi_{2}-\varphi_{1}\ll 1\), and \(\Delta\lambda\equiv\lambda_{2}-\lambda_{1}\ll 1\); in this limit, (9) becomes, after Taylor expansion,
\[(d/R)^{2}=\Delta\varphi^{2}+\cos^{2}(\varphi_{1})\Delta\lambda^{2}. \tag{10}\]
This expresses how the curved geometry on a sphere looks locally flat (i.e. Euclidean) as long as the axis of \(\lambda\) is rescaled with \(\cos(\varphi_{1})\). With COG representing the angle along which the vessel moves (with respect to the geographical North), and \(\mathrm{SOG}\,\Delta t\) the small distance travelled during \(\Delta t\),
\[\begin{array}{rl}\Delta\varphi&=\frac{1}{60}\cos(\mathrm{COG})\,\mathrm{SOG }\,\Delta t\\ \cos(\varphi_{1})\Delta\lambda&=\frac{1}{60}\sin(\mathrm{COG})\,\mathrm{SOG} \,\Delta t,\end{array} \tag{11}\]
where the factor \(\frac{1}{60}\) is used to convert from knots to degrees: \([\mathrm{SOG}]=1\,\mathrm{knot}=1\,\mathrm{nmi}/\mathrm{hour}\) and \(60\,\mathrm{nmi}\sim 1^{\circ}\) of longitude / latitude (i.e., the equatorial Earth radius is \(R=6378.137\,\mathrm{km}\), so, with \(\theta=\pi\,\mathrm{rad}/180\), \(d=6378.137\times\pi/180\,\mathrm{km}=111.319\,\mathrm{km}=\underline{60.1\,\mathrm{nmi}}\); on the other hand, the polar radius is \(R=6356.752\,\mathrm{km}\), so \(d=6356.752\times\pi/180\,\mathrm{km}=110.946\,\mathrm{km}=\underline{59.9\,\mathrm{nmi}}\)).
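The quoted figures can be checked with a few lines of arithmetic (using 1 nmi = 1.852 km):

```python
import numpy as np

# Arc length subtended by one degree at the equatorial and polar radii,
# recovering the ~60 nmi per degree behind the 1/60 factor in Eqs. (8) and (11).
for R_km in (6378.137, 6356.752):
    d_km = R_km * np.pi / 180.0
    print(f"R = {R_km} km  ->  d = {d_km:.3f} km = {d_km / 1.852:.1f} nmi")
```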
|
2302.06954 | Looking for Signatures of AGN Feedback in Radio-Quiet AGN | (Abridged) In this article, we discuss the state of ``AGN feedback'' in
radio-quiet (RQ) AGN. This study involves heterogeneous samples of nearby
Seyfert and LINER galaxies as well as QSOs that have been observed at low radio
frequencies (few ~100 MHz) with the GMRT and ~GHz frequencies with the VLA and
VLBA. These multi-frequency, multi-resolution observations detect a range of
arcsecond-scale radio spectral indices that are consistent with the presence of
multiple contributors including starburst winds and AGN jets or winds; steep
spectrum ``relic'' emission is observed as well. Polarization-sensitive data
from the VLA and GMRT suggest that the radio outflows are stratified (e.g., in
IIIZw2, Mrk231); distinct polarization signatures suggest that there could
either be a ``spine + sheath'' structure in the radio outflow, or there could
be a ``jet + wind'' structure. Similar nested biconical outflows can also
explain the VLBA and SDSS emission-line data in the KISSR sample of
double-peaked emission-line Seyfert and LINER galaxies. Furthermore, the
modeling of the emission-lines with plasma modeling codes such as MAPPINGS
indicates that parsec-scale jets and winds in these sources can disturb or move
the narrow-line region gas clouds via the ``shock + precursor'' mechanism.
Apart from the presence of ``relic'' emission, several Seyfert and LINER
galaxies show clear morphological signatures of episodic jet activity. In one
such source, NGC2639, at least four distinct episodes of jets are observed, the
largest one of which was only detectable at 735 MHz with the GMRT.
Additionally, a ~6 kpc hole in the CO molecular gas along with a dearth of
young stars in the center of its host galaxy is observed. This suggests a link
between episodic jet activity in RQ AGN and ``AGN feedback'' influencing the
evolution of their host galaxies. | Preeti Kharb, Silpa Sasikumar | 2023-02-14T10:25:16Z | http://arxiv.org/abs/2302.06954v1 | # Looking for Signatures of AGN Feedback in Radio-Quiet AGN
###### Abstract
In this article, we discuss the state of "AGN feedback" in radio-quiet (RQ) AGN. This study involves heterogeneous samples of nearby Seyfert and LINER galaxies as well as quasi-stellar objects (QSOs) that have been observed at low radio frequencies (few \(\sim\)100 MHz) with the upgraded Giant Meterwave Radio Telescope (GMRT) and \(\sim\)GHz frequencies with the Karl G. Jansky Very Large Array (VLA) and Very Long Baseline Array (VLBA). These multi-frequency, multi-resolution observations detect a range of arcsecond-scale radio spectral indices that are consistent with the presence of multiple contributors including starburst winds and AGN jets or winds; steep spectrum "relic" emission is observed as well. Polarization-sensitive data from the VLA and GMRT suggest that the radio outflows are stratified (e.g., in IIIZw2, Mrk231); distinct polarization signatures suggest that there could either be a "spine + sheath" structure in the radio outflow, or there could be a "jet + wind" structure. Similar nested biconical outflows can also explain the VLBA and SDSS emission-line data in the KISSR sample of double-peaked emission-line Seyfert and LINER galaxies. Furthermore, the modeling of the emission-lines with plasma modeling codes such as MAPPINGS indicates that parsec-scale jets and winds in these sources can disturb or move the narrow-line region (NLR) gas clouds via the "shock + precursor" mechanism. Apart from the presence of "relic" emission, several Seyfert and LINER galaxies show clear morphological signatures of episodic jet activity. In one such source, NGC2639, at least four distinct episodes of jets are observed, the largest one of which was only detectable at 735 MHz with the GMRT. Additionally, a \(\sim\)6 kpc hole in the CO molecular gas along with a dearth of young stars in the center of its host galaxy is observed. Multiple jet episodes on the 10-100 parsec scales and a \(\sim\)10 parsec hole in the molecular gas is also observed in the Seyfert galaxy NGC4051. This suggests a link between episodic jet activity in RQ AGN and "AGN feedback" influencing the evolution of their host galaxies. However, a similar simple relationship between radio outflows and molecular gas mass is not observed in the Palomar-Green (PG) QSO sample, indicating that "AGN feedback" is a complex phenomenon in RQ AGN. "AGN feedback" must occur through the local impact of recurring multi-component outflows in RQ AGN. However, global feedback signatures on their host galaxy properties are not always readily evident.
quasars; Seyfert galaxies; LINER galaxies; radio continuum emission; polarimetry; very long baseline interferometry
## 1 Introduction
Galaxy evolution is one of the leading open questions in astrophysics (e.g., [1]). The observational findings of a close link between the galaxy properties, such as bulge mass and stellar velocity dispersion, and its central supermassive black hole (SMBH) have led astronomers to believe that the SMBH with its parsec-scale sphere of influence can affect the kpc-scale galaxy bulge through the process of "Active Galactic Nuclei (AGN) feedback" [2; 3]. AGN are believed to regulate galaxy growth by injecting energy into the surrounding gas, which has the effect of
either heating and/or expelling star-forming gas ("negative feedback") or facilitating localized star-formation ("positive feedback") [2; 4; 5; 6]. "AGN feedback" can occur through radiative power ("quasar mode", e.g., [7; 8]) or mechanical power fed back into the galaxy through AGN winds and jets ("maintenance/jet mode", e.g., [9; 10; 11]). The observational and energetic signatures of this "AGN feedback" are, however, still far from being unambiguous [3; 12]. While highly collimated jets cannot be efficient agents of "AGN feedback", presumably due to the smaller working surfaces at their advancing ends, relatively isotropic impacts via changes in jet direction can be highly effective, as can broader AGN outflows and winds [3].
Radio-quiet (RQ) AGN, which comprise greater than 80% of the AGN population, however, have small-scale jets on the parsec to hundreds of parsec scales [13; 14; 15; 16], but typically not extending beyond \(\sim\)10 kpc [17; 18]; their total radio luminosities do not overwhelm their optical luminosities [19]. Recent work using 6 GHz Very Large Array (VLA) data of SDSS quasi-stellar objects (QSOs) by Kellermann et al. [20] has suggested that RQ AGN have \(21\leq\log[\mathrm{L_{6\,GHz}}\,(\mathrm{W~{}Hz^{-1}})]\leq 23\). RQ AGN also tend to be low luminosity AGN (LLAGN) and comprise Seyfert nuclei and LINER galaxies [21; 22]. Ho [23] has identified LLAGN as those with H\(\alpha\) line luminosities ranging from \(10^{37}\) to \(10^{41}\) erg s\({}^{-1}\) or with bolometric luminosities in the range L\({}_{bol}\lesssim 10^{37}-10^{44}\) erg s\({}^{-1}\) and Eddington ratios in the range L\({}_{bol}/L_{Edd}\sim 10^{-9}-10^{-1}\)[24]. In this article, we use LLAGN and RQ AGN interchangeably when referring to Seyferts, LINERs, and RQ quasars (which we use in lieu of QSOs).
Jet-interstellar medium (ISM) interaction has been inferred or observed in several RQ AGN in the literature: for example, in IC5063, NGC5643, NGC1068, and NGC1386 (as part of the Measuring Active Galactic Nuclei Under MUSE Microscope (MAGNUM) Survey; [25]), in IC5063 [5; 26; 27], NGC1266 [28], HE 0040-1105 from the Close AGN Reference Survey (CARS) [29], and in a subset of sources from the Quasar Feedback Survey [30; 31; 32; 33]. Evidence for an interaction between AGN-driven winds and the ISM has also been reported for RQ AGN in the literature (e.g., [34; 35; 36; 37; 38; 39]). Therefore, despite the lack of large and powerful radio outflows in RQ AGN, they can be effective agents for AGN feedback.
In this review, we discuss the radio observations at multiple spatial resolutions of samples of Seyferts, LINERs, and RQ quasars. We review the results from multi-frequency as well as multi-scale observations, going from arcsecond scales with the upgraded Giant Meterwave Radio Telescope (GMRT) and the Karl G. Jansky Very Large Array (VLA) in Sections 2-4, to milli-arcsec scales with the Very Long Baseline Array (VLBA) of nearby low luminosity or radio-quiet AGN in Section 5, and we discuss the state of "AGN feedback" in them. All sources lie at redshifts \(<\)0.1, which correspond to spatial scales smaller than \(\sim\)2 parsec on milli-arcsec and \(\sim\)2 kpc on arcsec scales. We assume a cosmology with H\({}_{0}\) = 73 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{mat}\) = 0.27, \(\Omega_{vac}\) = 0.73. The spectral index \(\alpha\) is defined such that the flux density at frequency \(\nu\) is \(S_{\nu}\propto\nu^{\alpha}\).
## 2 Low Radio Frequency Observations of RQ AGN
Low-frequency radio (few \(\sim\)100 MHz) observations with the GMRT have typically detected radio emissions from both the AGN as well as stellar-related activity from the host galaxies of Seyferts or LINERs (e.g., [40; 41; 42; 18]). In these data, radio emission from the Seyferts/LINERs is lobe-like or bubble-like but often simply in the form of extended diffuse emission without distinct morphological features. The radio spectral index images cannot distinguish between the AGN and stellar-related emission either. While the stellar-related emission is typically steep (\(\alpha\leq-0.7\)), the AGN emission even in the unresolved "cores" can be either flat (\(\alpha\geq-0.5\)) or steep. Steep spectrum "cores" are likely to be due to the large beam of the GMRT including optically thin jet or lobe emission on sub-arcsec scales. The presence of additional lobes on arcsec scales has been indicated in the GMRT 325 and 610 MHz observations of the Seyfert galaxies, NGC3516, NGC5506, NGC5548, NGC5695 (Rubinur et al. in prep.).
GMRT observations can probe additional activity episodes as large steep-spectrum diffuse lobes/bubbles or as "relic" lobe emission that is no longer being supplied with particles and fields from the AGN (e.g., [40; 43; 44; 45]). The lobes of the Seyfert galaxy NGC4235 with abruptly changing spectral indices between two sets of lobes represent one such case (see Figure 1; [40]). Both the 610 MHz image and the 325-610 MHz spectral index image of NGC4235 hinted at diffuse steep-spectrum radio emission just beyond the well-defined western lobe and enveloping it. While the average spectral index is \(-0.29\pm 0.19\) in the western lobe and \(-0.56\pm 0.24\) in the eastern lobe, the average spectral index is \(-1.82\pm 0.17\) in this extended region. The average spectral index errors come from a spectral index noise image. The robustness of the steep spectrum emission is discussed in greater detail by Kharb et al. [40]. This surrounding steep-spectrum emission is reminiscent of "relic" radio emission, as observed in the lobes of the radio galaxy 3C388 by Roettiger et al. [46], where a lobe with an average spectral index of \(\sim\)\(-\)0.8 was surrounded by the steep-spectrum emission of spectral index \(\sim\)\(-\)1.5 from a previous AGN activity episode. Based on a simple spectral aging analysis in NGC4235, the relic outer lobe appeared to be at least two times older than the present lobe. This implied that the AGN in NGC4235 was switched "off" for the same time that it has been "on" for the current episode [40].
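To illustrate the spectral-index convention used here (\(S_{\nu}\propto\nu^{\alpha}\)) and the distinction between "steep" lobe and ultra-steep "relic" emission, a short numerical example is given below; the flux densities are invented for demonstration and are not measurements from NGC4235.

```python
# Two-point spectral index between 325 and 610 MHz in the S_nu ~ nu^alpha convention.
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    return np.log(s1 / s2) / np.log(nu1 / nu2)

# hypothetical flux densities (mJy) at 325 and 610 MHz
print(spectral_index(s1=12.0, nu1=325e6, s2=8.4, nu2=610e6))   # ~ -0.6, steep lobe-like
print(spectral_index(s1=12.0, nu1=325e6, s2=3.8, nu2=610e6))   # ~ -1.8, ultra-steep "relic"-like
```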
## 3 Jet Driven Feedback: The Case of NGC2639
Sanders [47] had predicted that Seyfert activity must be episodic on timescales of 10\({}^{4}\)-10\({}^{5}\) years during its statistical lifetime of 3-7 \(\times\) 10\({}^{8}\) years. They had arrived at these numbers based on the typical extent of the radio outflows as well as the NLR in Seyfert galaxies. A Seyfert galaxy must therefore undergo \(\sim\)100 activity episodes during its lifetime. Clear examples
Figure 1: The 325β610 MHz spectral index image from the GMRT in color overlaid with the 610 MHz radio contours for the Seyfert galaxy NGC4235. The average spectral index is \(\sim\)\(-\)0.6 in the lobes and \(\leq\)\(-\)1.8 in the βrelicβ lobe (emission between the red dotted lines). The contour levels are in percentage of the peak intensity and increase by factors of two. The peak intensity and lowest contour levels are 2.3 \(\times\) 10\({}^{-2}\) Jy beam\({}^{-1}\) and \(\pm\)0.3% of the peak intensity, respectively. The contour image is convolved with a circular beam of size 8 arcsec. Image reproduced from Kharb et al. [40].
of episodic jet activity in Seyfert and LINER galaxies have been identified in only a few sources, viz., Mrk6 [48], NGC2992 [49], and NGC2639 [50]. Sebastian et al. [51] had found tentative signatures of multiple jet episodes in a majority (\(\sim\)55%) of their small Seyfert galaxy sample of nine sources. This fraction appeared to be higher than that reported in radio-loud (RL) AGN (\(\sim\)10-15% in radio galaxies [52]) (see also the 3D GRMHD jet simulations of Lalakos et al. [53] that reproduce these low fractions). However, signatures of \(\sim\)100 episodes are almost never observed in Seyferts or LINERs, at least in the radio outflows. This might not reflect their true absence but rather the difficulty of identifying them, owing to the low surface brightness of Seyfert/LINER lobes, their small spatial extents, the lack of collimated jets, and confusion with the radio emission from stellar-related activity (star formation, supernovae, and starburst winds) arising in the host galaxy.
Sebastian et al. [50] had noted the presence of at least three episodes in NGC2639 with two sets of bipolar radio lobes detected with the VLA and oriented nearly perpendicular to each other, similar to what was observed in Mrk6, as well as a parsec-scale core-jet structure detected with the Very Long Baseline Array (VLBA) and oriented at least 30 degrees from the sub-kpc east-west oriented radio lobes. GMRT observations at 325 and 735 MHz showed the presence of an additional pair of radio lobes, oriented nearly 45 degrees from the previously known north-south lobes (see Figure 2; [45]). It is worth pointing out that there is no continuous connecting radio emission between different jet episodes. Rather, the jets and lobes in each episode have clearly defined hotspots or edges distinguishing them as independent events, making a single precessing jet model fitting all of the radio emission improbable.
Ages of the three pairs of lobes were derived using the spectral aging software BRATS, which stands for Broadband Radio Astronomy Tools [54], and they turned out to be, respectively, \(34^{+4}_{-6}\) million years, \(11.8^{+1.7}_{-1.4}\) million years, and \(2.8^{+0.7}_{-0.5}\) million years, with the GMRT lobes being the oldest. Using the "on" and "off" times of these jets or lobes (using the spectral age of a given set of lobes and the spectral age difference between two sets of lobes, respectively), the AGN jet duty cycle in NGC2639 turns out to be \(\sim\)60%. Based on the molecular gas data from the EDGE [55]--Calar Alto Legacy Integral Field Area (CALIFA [56]) survey, Ellison et al. [57] have found that the gas fraction in the central region of NGC2639 is a factor of a few lower than in star-forming regions, suggesting that the AGN has partially depleted the central molecular gas reservoir. Like the CO (1-0) molecular gas image, which shows a hole with a diameter of \(\sim\)6 kpc, the GALEX NUV image also shows a deficiency of star formation in the last 200 million years in the inner \(\sim\)6 kpc region of NGC2639. These results point to star-formation quenching taking place in the central regions of NGC2639.
If the CO (1-0) molecular gas ring is a result of push-back from the jet in NGC2639, the \(PV\) (pressure times volume) amount of work done on the molecular gas by the jet to create a cavity can be estimated. Rao et al. [45] have shown that for the CO gas ring radius of 3 kpc, the volume
Figure 2: The four AGN jet episodes of NGC2639. (Left) 735 MHz GMRT total intensity image. The \(\sim\)9 kpc radio lobes are seen in this image. Contour levels: \((-2,-1,1,2,4,8,16,32,64,128,256)\)\(\times\) 0.6 mJy beam\({}^{-1}\). The beam at the bottom left corner is of size: 5.48 arcsec \(\times\) 3.0 arcsec at PA = 54.6 degrees. (Top) 5.5 GHz VLA total intensity image. Contour levels: \((-2,-1,1,2,4,8,16,32,64,128,256,512)\)\(\times\) 0.03 mJy beam\({}^{-1}\). The \(\sim\)1.5 kpc north-south radio jets are seen here. Beam size: 1.02 arcsec \(\times\) 0.89 arcsec at PA = \(-\)5.8 degrees. (Right) 5 GHz VLA radio image. Contour levels: \((-2,-1,1,2,4,8,16,32,64,128)\)\(\times\) 0.164 mJy beam\({}^{-1}\). The \(\sim\)360 parsec east-west lobes are seen in this image. Beam size: 0.43 arcsec \(\times\) 0.30 arcsec at PA = \(-\)85.4 degrees. (Bottom) 8.3 GHz VLBA image showing a \(\sim\)3 parsec jet at PA = 130 degrees. Contour levels: \((-2,-1,1,2,4,8,16,32,64)\)\(\times\) 0.239 mJy beam\({}^{-1}\). Beam size: 7.7 mas \(\times\) 6.2 mas at PA = \(-\)4.9 degrees. Figure reproduced from Rao et al. [45]. A \(\sim\)6 kpc hole in the molecular gas distribution is observed in the central regions of NGC2639.
of the disk-like cavity is \(1.25\times 10^{66}\) cm\({}^{3}\), and the \(PV\) work done is \(>\)3.44 \(\times\) 10\({}^{54}\) erg. Using the lobe flux densities at 5 GHz, the total energy output of the east-west jet (the time-averaged jet power, estimated e.g. using the relations in [58], integrated over its spectral age of 2.8 million years) is \(6.8\times 10^{56}\) erg. Therefore, only \(\sim\)0.5% of the east-west jet power is sufficient to push back the CO gas in NGC2639. Similarly small fractions are needed from the north-south jets and the north-east-south-west jets. However, the creation of a hole in the molecular gas in the galactic center likely required several jet episodes to occur, given that each jet episode is collimated and therefore highly directional [45].
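The arithmetic of this energetics argument can be reproduced as follows; note that the ISM pressure adopted here is our assumption, back-solved so that \(PV\) matches the quoted \(3.44\times 10^{54}\) erg, since the pressure actually used by Rao et al. [45] is not restated in this section.

```python
# Back-of-the-envelope reproduction of the PV-work argument for NGC2639,
# using the numbers quoted in the text.
volume_cavity = 1.25e66   # cm^3, disk-like cavity carved in the CO gas (from the text)
p_ism = 2.75e-12          # erg cm^-3, assumed ISM pressure chosen to give PV ~ 3.44e54 erg
pv_work = p_ism * volume_cavity

jet_energy = 6.8e56       # erg, east-west jet output over its 2.8 Myr spectral age (from the text)

print(f"PV work  ~ {pv_work:.2e} erg")
print(f"fraction ~ {pv_work / jet_energy:.1%} of the jet energy")   # ~0.5%
```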
NGC2639 or Mrk6 are not unique in showing signatures of radio jets or lobes that have different sky orientations when looked at with multiple spatial resolutions. Another case is the famous Seyfert galaxy, NGC4051. Multi-resolution images of NGC4051 from Jones et al. [59] and Giroletti and Panessa [60] show at least three sets of radio lobes or jets. At a resolution of \(\sim\)2 arcsec, there lies a double-lobed radio structure of extent \(\sim\)830 parsec at a PA of 30 degrees (at its redshift of 0.002336, 1 arcsec corresponds to 61 parsecs). At a resolution of \(\sim\)0.5 arcsec, there is a jet-core-jet structure of extent \(\sim\)90 parsec at a PA of 60 degrees. Finally, on the 100 mas scale as probed by the European VLBI Network (EVN), the parsec-scale core-jet structure shows a faint extension toward the south (See Figure 3). Interestingly, NGC4051 also appears to have a hole with a diameter of \(\sim\)10 parsecs in its molecular gas distribution, as observed from Gemini North (data from Riffel et al. [61] reanalyzed by D. May et al. 1). Overall, NGC2639 and NGC4051 could be candidates for the presence of jet-driven "negative AGN feedback" in RQ AGN.
## 4 Jet + Wind Driven Feedback
### The Palomar-Green Radio-Quiet Quasar Study on Kpc-Scales
We now discuss the case of the Palomar-Green (PG; [62]) sample of RQ quasars, for which extensive multi-resolution radio jet and multi-phase gas outflow data exist in the literature. Silpa et al. [43] carried out a 685 MHz GMRT study of the PG RQ quasar sample and found that the two-frequency radio spectral indices (using the GMRT and VLA) were ultra-steep (\(\alpha\leq-1.0\)) in a few of the sources. Apart from a correlation between the total GMRT 685 MHz luminosity and the Eddington ratio, radio properties of the sample such as radio core sizes and radio spectral indices did not correlate with BH properties such as their masses or Eddington ratios. This suggested either that the radio emission was stellar-related or that it was due to previous jet episodes (i.e., "relic" emission) in them (see Section 3). A combined GMRT-VLA study of the PG RQ sample has shown evidence for the presence of small-scale, bent, and low-powered jets in a couple of sources (see Section 5 ahead), while for the rest, it was difficult to unambiguously
Figure 3: The three radio lobes or jets at different position angles in the Seyfert galaxy NGC4051. The leftmost panel (VLA 1.4 GHz image) shows the galactic emission surrounding the 1 kpc-sized radio AGN. The middle panel shows the \(\sim\)830 parsec lobes at a PA of 30 degrees. The rightmost panel shows the \(\sim\)90 parsec jet-core-jet structure at a PA of 60 degrees. The bottom panel shows the EVN image with a parsec-scale core-jet structure, showing a faint extension toward the south. Figure reproduced from Giroletti and Panessa [60]. A \(\sim\)10 parsec hole in the molecular gas distribution is observed in the central regions of NGC4051.
determine the origin of radio emission (whether via jet or wind or stellar processes; [63]). Multi-frequency, multi-resolution radio polarization observations have detected a stratified radio outflow in PG 0007+106 (a.k.a. IIIZw2 [44]). The stratified radio outflow could either be a "spine + sheath" structure in the jet or a "jet + wind" composite structure (see Figure 4). Each component of the stratified outflow is observed to have a characteristic magnetic (B) field geometry (e.g., [64; 65]). B fields are aligned with the jet direction in the case of a jet or jet "spine" while they are transverse in the case of a wind or jet "sheath". The parallel B fields could represent the poloidal component of a large-scale helical B field, while the transverse B fields could either represent the toroidal component of the helical field or a series of transverse shocks that order B fields by compression. Alternately, they could represent toroidal B fields threading an AGN wind or jet "sheath", which is sampled in the lower-resolution images. The bow-shock-like feature at the termination point of the VLA jet (see Figure 5 of [44]), and the presence of a misaligned "sputtering" lobe in the GMRT image (annotated as ML in Figure 4), are consistent with restarted jet activity in IIIZw2.
The 5 GHz jet kinetic power of a subset of the PG RQ quasars estimated using the empirical relation of Merloni and Heinz [58] is within the range \(10^{42}\)-\(10^{43}\) erg s\({}^{-1}\)[63]. For these jet powers, either stellar-mass loading (e.g., [66; 67]) or the growth of Kelvin-Helmholtz (KH) instabilities triggered by recollimation shocks/jet-medium interactions can effectively decelerate
Figure 4: The GMRT 685 MHz total intensity contour image of IIIZw2 superimposed with red fractional polarization vectors, with 2 arcsec length of the vector corresponding to 6.25% fractional polarization. The inset on the right presents the VLA 5 GHz total intensity contour image of the south-western jetβlobe region, superimposed with red fractional polarization vectors, with 1 arcsec length of the vector corresponding to 25% fractional polarization. The smallest inset on the top right presents the VLA 5 GHz total intensity contour image of the radio core with red polarized intensity vectors, with 1 arcsec length of the vector corresponding to 0.031 mJy beam\({}^{-1}\) polarized intensity. The peak contour surface brightness is \(x\) mJy beam\({}^{-1}\) and the levels are \(y\times(-1,1,2,4,8,16,32,64,128,256,512)\) mJy beam\({}^{-1}\), where \((x\,;y)\) are \((35,0.21)\) and \((124,0.05)\) for the GMRT and VLA images respectively. Figure reproduced from Silpa et al. [44].
and decollimate the jets (e.g., [68; 69]). As the jets propagate through the host galaxy, they would encounter numerous stars, which inject matter into the jets via stellar winds. The mixing of the injected stellar material with the jet plasma causes the jets to decelerate. Using a small sample of RQ quasars, Silpa et al. [32] found that the polarized radio emission and [O III] emission did not always spatially overlap. This was suggested to arise from the depolarization of radio emission by either an irregular Faraday screen of clumpy emission-line gas or by the emission-line gas that had entrained and mixed with the synchrotron plasma in the lobes. While modeling the former scenario, the fluctuation scales of the electron density (or, equivalently the sizes of the emission-line gas clumps or clouds, or filaments) were considered and estimated. This value was also assumed to represent the fluctuation scales of the random B-field component while modeling the latter scenario. Interestingly, the lower limit on the size value (\(\sim\)10\({}^{-5}\) parsec) matches the sizes of red giant stars. KH instabilities also promote entrainment and mixing between the surrounding gas and the jet plasma, resulting in the deceleration and decollimation of the jets. KH instabilities can also cause jet bending (e.g., [70; 71]). Signatures of KH instabilities include knotty polarization structures and the presence of poloidal magnetic fields (e.g., [72]). Both of these signatures are observed in the few RQ quasars that are detected in polarized light. _Overall, therefore, there is evidence for small (arcsec) scale jet-medium interaction taking place in these sources._
Shangguan et al. [73] have found that the molecular gas masses and kinematics of the PG quasars were similar to those of the star-forming galaxies. Additionally, no molecular gas outflows were detected in these sources. Their host galaxies were found to be following the "Kennicutt-Schmidt law" [74], suggesting that no star formation quenching was taking place in them. In the recent work of Molina et al. [75], the molecular gas kinematics of the PG quasar host galaxies also suggest that the "negative AGN feedback" is ineffective in them. In general, there appears to be no immediate significant impact on the global molecular gas reservoirs by jets or outflows in the PG sources. However, the possibility that the AGN might be impacting the ISM locally cannot be ruled out (e.g., [76]). As discussed above, we infer the presence of localized jet-medium interaction from the jet kinetic power argument as well as the polarization data in the PG RQ quasar sample. Localized impact on the gas by AGN or jets could in principle be captured by spatially resolved observations of multiple and higher CO transitions (e.g., [77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87]).
One of the plausible mechanisms for star-formation quenching as proposed in the literature includes galaxy major and minor mergers (along with "AGN feedback" in most cases [88; 89; 90; 91; 92; 93; 94; 95]). The galaxy major merger simulations by Springel et al. [96] show that the presence of accreting BHs can significantly impact the merger dynamics. Galaxy mergers can also cause enhanced star-forming activity [97; 98; 99] just as a "positive AGN feedback" process [100; 101]. On the other hand, Weigel et al. [102] suggest that the major merger quenching cannot fully explain the slow evolution of galaxies from blue to red; alternative quenching mechanisms ("AGN feedback" being a potential candidate) are needed. The host galaxies of a large fraction of the PG RQ quasars show signatures of ongoing galactic mergers [43]. Therefore, the merger scenario can also have an influence on the overall interpretation of AGN feedback.
### The Jet and Wind in Mrk231
We now discuss the case of the quintessential AGN that clearly hosts both a jet and wind component and is routinely considered in "AGN feedback" scenarios, viz., Mrk231. Mrk231 is a Seyfert galaxy that hosts multi-phase multi-scale gas outflows, such as a nuclear ultra-fast outflow (UFO; [103]), a sub-kpc scale HI outflow [104], a kpc-scale molecular CO outflow [34; 103], and a \(\sim\)3 kpc scale outflow seen in the Na I doublet lines [35; 105]. A massive molecular OH outflow has also been detected in Mrk231 [5; 106; 107]. In VLA observations, Mrk231 reveals a one-sided radio outflow to the south, comprising a weakly collimated jet
or jet "spine" embedded inside a broader magnetized wind component [108], resembling the stratified radio outflow in IIIZw2 (see Section 4.1). The composite outflow in Mrk231 is curved, low-powered, and oriented at a small angle to our line of sight. The wind component may comprise both a nuclear starburst and AGN wind, where the former may be the primary contributor close to the core, while the latter may dominate further away. Moving away from the core, the wind component could also be the outer layers of a widened jet like a jet "sheath". The 10-kpc-scale radio structure in Mrk231 is "self-similar" to the radio structure observed on the 10-parsec-scale in the literature (see Figure 1 in [104]), resembling the lobes observed in Mrk6 [48]. The radio structures on the two scales in Mrk231 are not as clearly delineated, however, which may be a result of the low inclination angle. However, the presence of two distinct structures is consistent with episodic jet activity in Mrk231, similar to the case of Mrk6 (see Section 3).
Silpa et al. [108] obtained first-order estimates of the relative contributions of the different components (jet/AGN wind/starburst wind) to the overall budget of the radio emission. While the starburst-driven wind accounted for \(\sim\)10-20% of the total radio emission, both the jet and the AGN wind contributed significantly to the rest of the emission. To estimate the contribution of the starburst-driven wind, two different models were used: one assumed that 10% of the supernova kinetic energy was carried away by the winds [109; 110], and the other assumed 40% [111]. The latter analysis was also carried out assuming two different initial mass functions (IMFs; [112; 113]). The contribution of the starburst-driven wind remained the same for the different assumed models. The contribution of the AGN wind was estimated assuming two different coupling efficiencies (5%; [88]) and (0.5%; [114]). For a 0.5% coupling efficiency, the radio contribution of the jet, estimated using the Leitherer et al. [110] model as well as the Dalla Vecchia and Schaye [111] model with the Chabrier [113] IMF, turned out to be more than that of the wind. Although such a first-order analysis cannot provide a one-to-one correspondence between the dominant driver (jet or AGN wind) and the outflowing gas phase (HI or Na I D or CO or OH), it still indicates that the multi-phase gas outflow in Mrk231 is likely to be driven by both a jet and a wind.
## 5 Signatures of Jet + Wind Feedback on Parsec-Scales
We now turn our attention to much smaller spatial scales than probed by the GMRT and VLA observations. We focus on the KPNO Internal Spectroscopic Survey Red (KISSR; [115]) sample of Seyfert and LINER galaxies [116]. This sample was chosen based on the presence of double-peaked emission lines in their SDSS spectra as well as a radio detection in the VLA FIRST survey [117]. Phase-referenced observations of this sample with the Very Long Baseline Array (VLBA) have revealed the presence of elongated jet-like features, similar to other Very Long Baseline Interferometry (VLBI) studies of RQ AGN in the literature (e.g., [118; 119; 120; 121; 122; 123]). Interestingly, these jets are one-sided, similar to the one-sided parsec-scale jets observed in RL AGN. VLBI imaging also reveals one-sided jets in narrow-line Seyfert 1 galaxies [124; 125; 126]. A \(\sim\)60 parsec long radio source (jet-core-counterjet) was imaged in the Seyfert 2 galaxy Mrk348 using VLBI by Neff and de Bruyn [127]; the counterjet emission was detected more than 40 parsecs away from the radio core. In RL AGN, one-sidedness is typically understood to be a consequence of Doppler-boosting effects [128]. It is not clear if the one-sidedness in Seyfert or LINER jets is a result of Doppler-boosting effects or due to free-free absorption; multi-frequency and spectral index observations could help to disentangle these two scenarios (e.g., [129; 130]). If Doppler-boosting is responsible for the jet one-sidedness in the KISSR sources, for instance, then lower limits to jet speeds would range from 0.003c to 0.75c [116] assuming jet inclinations to be \(\geq\)50 degrees, consistent with their type 2 classification, and the expected torus half-opening angles being \(\sim\)50 degrees (e.g., [131]).
On the other hand, if the missing counterjet emission were a result of free-free absorption, the required electron densities of the ionized gas could be estimated using \(EM=3.05\times 10^{6}\,\tau\,T^{1.35}\,\nu^{2.1}\) and \(n_{e}=\sqrt{EM/l}\)[132], where \(EM\) is the emission measure in pc cm\({}^{-6}\), \(\tau\) is the optical depth at frequency \(\nu\) in GHz, \(T\) is the gas temperature in units of \(10^{4}\) K, \(n_{e}\) is the electron density in cm\({}^{-3}\), and \(l\) is the path length in parsecs. In order to account for the observed jet-to-counterjet surface brightness ratios (\(R_{I}\)) of \(\sim\)20 on parsec scales (as in the case of KISSR434; [133]), the optical depth at 1.5 GHz would need to be at least \(\sim\)1.0 using \(\exp(-\tau)=1/R_{I}\)[134]. For a gas temperature of \(10^{4}\) K and a jet path length of 1 parsec, an \(EM\) of \(\approx\)7.1 \(\times\)\(10^{6}\) pc cm\({}^{-6}\) and \(n_{e}\) of \(\approx\)2700 cm\({}^{-3}\) are required for free-free absorption on parsec scales. Such ionized gas densities can be found in NLR gas clouds. However, their volume filling factor is of the order of \(10^{-4}\) (e.g., [135]), making them unlikely candidates for absorbers of the \(\sim\)100-parsec-scale (counter-)jets observed in a couple of the KISSR sources [116; 133]. Ionized gas in giant HII regions with \(n_{e}\)\(\sim\)100-1000 cm\({}^{-3}\) could in principle also be the candidate medium for free-free absorption [136] but also has a low volume filling factor (\(\geq\)0.2, e.g., [137]).
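A short numerical check of this estimate, using the same adopted values (\(\tau\simeq 1\) at 1.5 GHz, \(T=10^{4}\) K, \(l=1\) pc), is sketched below.

```python
# Numerical check of the free-free absorption estimate quoted above.
import math

tau = 1.0      # adopted optical depth at nu
T4 = 1.0       # gas temperature in units of 1e4 K
nu = 1.5       # observing frequency in GHz
l_pc = 1.0     # assumed path length in parsec

EM = 3.05e6 * tau * T4**1.35 * nu**2.1   # emission measure, pc cm^-6  (~7.1e6)
n_e = math.sqrt(EM / l_pc)               # electron density, cm^-3     (~2700)
print(f"EM ~ {EM:.2e} pc cm^-6, n_e ~ {n_e:.0f} cm^-3")
```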
Multi-epoch VLBI observations are therefore necessary to check for proper motions and obtain accurate jet speeds to verify the Doppler-boosting picture in Seyferts and LINERs. This has recently become possible for eight KISSR sources. Preliminary results suggest the detection of superluminal jet motion in at least one source (Kharb et al. in prep.). With these new observations, the jet detection rate in the KISSR sample becomes \(\sim\)75%. Jet detection rates in larger samples of Seyferts and LINERs range between \(\sim\)30% and \(\sim\)50% (e.g., [123; 138]). The higher jet detection rate in the KISSR sample is consistent with a selection bias that comes from selecting double-peaked emission-line AGN (DPAGN) and jet-NLR interaction being the primary cause of the double-peaked emission lines in them.
Furthermore, it was found that the double peaks of the emission lines were typically separated by velocities of \(\sim\)100-300 km s\({}^{-1}\) for most sources, and the widths of the lines corresponded to velocities of \(\sim\)100-200 km s\({}^{-1}\)[116; 117; 133; 139]. These velocities are much smaller, by factors of several hundred, than the expected jet speeds. This, therefore, suggests that the emission line gas could be pushed in a direction lateral to the jet (e.g., [33; 139]) or could arise in wider, slower-moving winds around the jets (see for example Figure 5). Indeed, nested biconical outflows have been invoked to explain the origin of double-peaked emission lines in low luminosity AGN by Nevin et al. [140; 141]. Importantly, wide wind-like outflows can be efficient agents of "AGN feedback" (see the review by [12]). Indeed, from the MAPPINGS III modeling of emission lines such as H\(\alpha\), H\(\beta\), H\(\gamma\), [S II], [O III], and [O II], it appears that in sources possessing parsec-scale jets, the "shock + precursor" model can explain the observed line ratios, consistent with the idea of jet-NLR interaction. The "shock + precursor" model comes into play when ionizing radiation (i.e., extreme UV and soft X-ray photons) generated by the cooling of hot gas behind a shock front creates a strong radiation field leading to significant photoionization [142]. The spatial scales sampled by the SDSS optical fiber with a diameter of 3 arcsecs, corresponding to \(\sim\)3-6 kpc at the distance of the KISSR sources, are much larger than the \(\sim\)100 parsec-scale VLBA jets. However, as Schmitt et al. [143; 144] have noted, the NLR ranges from a few 100 parsecs to a few kpcs in Seyfert galaxies. The emission-line modeling is consistent with the idea that RQ AGN are energetically capable of influencing their parsec and kpc-scale environments, making them agents of "radio AGN feedback".
## 6 Summary
Radio-quiet AGN make up the vast majority of all AGN. While they lack the 100 kpc radio jets and lobes that are observed in the more spectacular looking and rarer radio-loud AGN, the smaller outflows in RQ AGN have profound though subtle effects on their host galaxies. Multi-frequency polarization-sensitive observations of the PG RQ quasars and other Seyfert galaxies with the VLA and GMRT indicate that the radio outflows are layered or stratified. The multi-component outflows could either be spine + sheath structures or jets with winds. While the "sheath" layers around the "spines" of jets could come about due to jet-medium interaction, they continue to entrain the surrounding gas, create instabilities between layers, and further impact both the jet and the medium itself. Small (arcsec) scale jet-medium interaction is also implied by their polarimetric and jet kinetic power data. The picture of nested biconical outflows also emerges from a completely different study, namely, the VLBI study of a sample of Seyfert and LINER galaxies that exhibit double-peaked emission lines in their SDSS optical spectra. Two separate results point to jet-medium interaction in these sources. First, the high incidence of one-sided radio jets, several of which are \(\sim\)100 parsec in extent, points to a selection bias when choosing a DPAGN sample as the double-peaked emission lines appear to be the result of jet-NLR gas cloud interaction. Second, plasma modeling codes on the optical emission lines from SDSS indicate that the NLR gas clouds are affected through the "shock + precursor" mechanism and that the jets are likely stratified with a faster moving "spine" and a slower moving "sheath" or wind. Finally, multi-frequency arcsec-scale observations detect the signatures of multiple jet episodes in RQ AGN through the presence of additional steep-spectrum radio lobes. Episodic AGN activity in fact appears to be the norm in RQ AGN, signatures of which are seen in the radio spectra as well as morphological features like bow-shocks. In the case of the Seyfert galaxies NGC2639 and NGC4051, and others, these multiple jet episodes appear to excavate holes in the molecular gas in the central regions of their host galaxies, consistent with "negative AGN feedback". However, in many cases, like the PG RQ quasars, there appears to be no currently observable depletion of molecular gas in their host galaxies. "AGN feedback", therefore, appears to be a complex phenomenon in the
Figure 5: Cartoon showing a nested biconical structure in the outflow (that is blue-shifted when approaching and redshifted when receding as denoted by the colors) in order to reproduce the double-peaked emission line AGN spectroscopic observations. The host galaxy (red ellipse) and accretion disk (blue ellipse) as shown here are not to scale. The outer layers of a jet or a wind could be driving the NLR gas clouds.
case of RQ AGN. It cannot be ruled out on small spatial scales as implied by the signatures of jet-medium interaction. However, global impact signatures on the host galaxies are often difficult to see.
This paper has contributions from projects led by both P.K. and S.S. Projects led by S.S. are part of her Ph.D. thesis, currently being supervised by P.K. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
The data underlying this article will be shared on reasonable request to the corresponding author.
We thank the anonymous reviewers for their suggestions that have significantly improved this paper. We thank the staff of the GMRT that made these observations possible. GMRT is run by the National Center for Radio Astrophysics of the Tata Institute of Fundamental Research. P.K. and S.S. acknowledge the support of the Department of Atomic Energy, Government of India, under the project 12-R&D-TFR-5.02-0700. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
The authors declare no conflict of interest. |
2304.13769 | Evaluating Code Metrics in GitHub Repositories Related to Fake News and
Misinformation | The surge of research on fake news and misinformation in the aftermath of the
2016 election has led to a significant increase in publicly available source
code repositories. Our study aims to systematically analyze and evaluate the
most relevant repositories and their Python source code in this area to improve
awareness, quality, and understanding of these resources within the research
community. Additionally, our work aims to measure the quality and complexity
metrics of these repositories and identify their fundamental features to aid
researchers in advancing the fields knowledge in understanding and preventing
the spread of misinformation on social media. As a result, we found that more
popular fake news repositories and associated papers with higher citation
counts tend to have more maintainable code measures, more complex code paths, a
larger number of lines of code, a higher Halstead effort, and fewer comments. | Jason Duran, Mostofa Sakib, Nasir Eisty, Francesca Spezzano | 2023-04-26T18:23:16Z | http://arxiv.org/abs/2304.13769v1 | # Evaluating Code Metrics in GitHub Repositories Related to Fake News and Misinformation
###### Abstract
The surge of research on fake news and misinformation in the aftermath of the 2016 election has led to a significant increase in publicly available source code repositories. Our study aims to systematically analyze and evaluate the most relevant repositories and their Python source code in this area to improve awareness, quality, and understanding of these resources within the research community. Additionally, our work aims to measure the quality and complexity metrics of these repositories and identify their fundamental features to aid researchers in advancing the field's knowledge in understanding and preventing the spread of misinformation on social media. As a result, we found that more popular fake news repositories and associated papers with higher citation counts tend to have more maintainable code measures, more complex code paths, a larger number of lines of code, a higher Halstead effort, and fewer comments. Utilizing these findings to devise efficient research and coding techniques to combat fake news, we can strive towards building a more knowledgeable and well-informed society.
GitHub Repository Mining, Fake News, Misinformation, Code Metrics, Source Code Analysis
## I Introduction
In the current era, software and technology have become ubiquitous and play a pivotal role in the progress and advancement of various domains. Therefore, it's crucial to delve into software quality, applicability, and work processes. To assess the quality of software or toolkits, a direct approach is to access the source code from popular repositories such as Bitbucket, GitHub, or other version control systems and analyze them. This analysis can reveal existing software and tools' interesting, attractive, user-friendly, and reliable features.
Typically, the quality of the software is measured using code quality metrics. Simply put, metrics development involves creating standard numerical values that serve as a benchmark for evaluating quality. As the software industry has grown and become more automated, code quality metrics have become an essential tool for assessing the effectiveness of source code. This approach can provide valuable insights into the quality of work and how it relates to the characteristics of real-world samples.
The study of popular software repositories and their code bases is an established area of research in software engineering. Similarly, a significant amount of research has focused on detecting fake news in virtual media, spanning computer science, and social science. Fake news is intentionally fabricated content that aims to deceive readers, leading to misinformation [8]. The field has received considerable funding from the US and international entities, making it a significant focus of academic research for the foreseeable future. In addition, it is a highly cited research area that has garnered interest from researchers worldwide.
The prevalence of fake news across social, online, and print media has raised doubts about the reliability of news and information among the general public [20]. This issue has garnered significant attention from both the public and academic spheres, especially in this era of abundant information [24]. In today's society, false, misleading, and politically charged fake news stories are gaining more traction and creating distrust and skepticism among audiences, especially in the age of social media dominance. The spread of fake news and its harmful impact on society, particularly after the 2016 US presidential election, is a significant phenomenon [23].
In response to the growing prevalence of fake news, a significant amount of research has focused on detecting fake news and misinformation using artificial intelligence (AI) and machine learning (ML) and researchers worldwide have developed many software, tools, and models. For instance, Monti et al. [22] created a generic fake news detection model based on geometric deep learning. Shu et al. [28] established a tri-relationship between publisher-news relations and user-news interactions to classify fake and real news. Wang et al. [30] devised a fine-grained multimodal fusion network-based model that leverages textual and visual features to detect fake news.
However, little effort has been dedicated to evaluating the quality and functionality of the source code of these tools using popular code metrics. To address this research gap,
we analyze well-cited and famous repositories related to fake news and misinformation. By analyzing citation data, we can gain insights into the importance of ideas, changes in source work, and the impact of research in a particular field [9]. Additionally, meta information such as stars and forks can provide insight into the effects of GitHub repositories associated with fake news. We specifically focus on exploring publicly available repositories on GitHub, as it is the largest freely available source code hosting repository for software developers [10]. Our study aims to determine the relationship between measures of popularity, such as citation and star count, and traditional code metrics.
To summarize, our research intends to bridge the gap between software engineering and fake news, two distinct fields of computer science research, and to advance the field while also presenting fresh research opportunities. Our overarching research objective is to identify code quality and complexity metrics associated with popular fake news research and repositories. This will provide researchers with valuable insights into the aspects of their work and code that are highly valued by others. Additionally, since these repositories are highly regarded and frequently cited in the research community, it motivates us to continue exploring this topic. Therefore, we pose the following research questions:
**RQ1: Is there any correlation between different code metrics and research citations?**
The primary objective of RQ1 is to investigate whether any discernible code metrics are linked to the citations received by a paper for a particular source code repository. We will identify commonly used measures of code quality, complexity, and characteristics and examine their correlation with the citations of related research papers. It should be noted that this analysis does not necessarily indicate causality. Still, it may provide valuable insight into the aspects of shared research and source code that are highly valued.
**RQ2: Is there any correlation between different code metrics and GitHub popularity?**
Similar to RQ1, this research question aims to explore the association between various code metrics and measures of popularity, but this time with respect to repository-specific measures such as GitHub stars and subscribers. It is worth noting that while these measures are related to popularity, per Fig. 1 they are only loosely correlated with citations. Thus, we expect different metrics associated with repository popularity compared to paper citations. We aim to identify these metrics and gain insight into the factors that contribute to the popularity of repositories in the domain of fake news detection.
**RQ3: Is there any correlation between GitHub popularity and associated paper citations regarding code metrics?**
The goal of this research question is to investigate if there are any common code metrics or themes that are linked with the popularity of a repository based on its GitHub stars, subscribers, and paper citations.
## II Background
GitHub has become a popular platform for software quality analysis through source code analysis among software engineering researchers. It offers open-source capabilities, integrated social features, and metadata that can be retrieved through a simple API call [19]. Research on GitHub can be categorized into two sections: quantitative and qualitative. Quantitative studies focus on the development practices followed by developers, while qualitative analysis evaluates the performance and quality of projects by examining developer activities [19].
Kalliamvakou et al. [19] address the challenges and opportunities of using GitHub as a data source for software quality analysis in a comprehensive study. They analyze a dataset of GitHub repositories from 2014 and identify various limitations of using such data, including inactive projects, repositories used solely for disk storage, and lack of pull request usage. Finally, the authors discuss the implications of these limitations and provide recommendations for researchers to ensure the validity and reliability of their findings.
Inspired by social media, GitHub introduced stars to measure a repository's popularity. Borges et al. [10] used star count as a measure of popularity. Meanwhile, Hu et al. [18] utilized stars and forks to develop a star graph revealing the social relationship between users and repositories. Additionally, Zhu et al. [33] found that the use of standard folders, such as documents, testing, and examples, increases the probability of a code repository being forked and thus directly related to popularity. Interactions among the developer community on GitHub, such as following other users, watching or starring other project repositories, can also impact exposure and popularity [13]. Aggarwal et al. [7] explored the correlation between documentation and popularity, concluding that popularity can positively impact the attraction of more developers and the continuous improvement of documentation. Additionally, citation counts are a way to measure the popularity of a paper or repository, reflecting key concepts, approval from scholars, and groundbreaking novel ideas in a scientific domain [29].
According to Sharma [25], software developers have utilized code quality metrics for over five decades. These metrics are typically divided into five categories: size, complexity, coupling, cohesion, and inheritance. Over time, code quality metrics have naturally progressed and can now be assessed using various factors, such as lines of code, code clone, readability, and author rank [15]. Among the most prominent code quality metrics are Cyclomatic complexity [21], Halstead metrics [17], Chidamber and Kemerer (C&K) metrics suite [11], and the MOOD metrics [6].
McCabe [21] developed a method to measure code complexity using graph theory to analyze code paths. This measurement estimates the number of independent code paths a system can take and is considered to closely reflect the perceived complexity of a code by programmers. In brief, the calculation for McCabe code complexity is \(MCC=C+1\), where \(MCC\) represents the McCabe code complexity, and \(C\) denotes the
number of decision points in the code [3].
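As an illustration of this definition, the snippet below counts the decision points of a toy function with the radon library (also used in Section III); the example function is invented for demonstration only.

```python
# Cyclomatic (McCabe) complexity of a toy function: 3 decision points -> MCC = C + 1 = 4.
from radon.complexity import cc_visit

SNIPPET = '''
def label_article(score, source_trusted):
    if score > 0.8:            # decision point 1
        return "real"
    elif score < 0.2:          # decision point 2
        return "fake"
    elif source_trusted:       # decision point 3
        return "probably real"
    return "unverified"
'''

for block in cc_visit(SNIPPET):
    print(block.name, block.complexity)   # -> label_article 4
```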
Halstead metrics consider the operators (such as main, print, int, etc.) and operands (%d, avg, etc.) in code to determine the time required for programming, the number of delivered bugs, the level of difficulty, and the amount of effort needed, among other factors. On the other hand, the Chidamber and Kemerer (C&K) [11] metrics suite comprises six object-oriented metrics, namely Weighted Methods per Class, Depth of Inheritance Tree, Number of Children, Coupling between Objects, Response for a Class, and Lack of Cohesion of Methods. Finally, the MOOD metrics are a collection of metrics that include the method hiding factor, attribute hiding factor, method inheritance factor, attribute inheritance factor, coupling factor, and polymorphism factor [6].
During our research, we aimed to identify any metrics explicitly designed for assessing fake news-related repositories. However, to our knowledge, no such metrics have been developed. Therefore, we have resorted to utilizing some of the code quality metrics mentioned earlier to evaluate our selected repositories.
## III Methodology
In this section, we present our approach to gathering the data and the analysis methods used to examine the relationships between the code quality metrics, GitHub, and citation-related measures of popularity.
### _Data Collection_
Our objective was to locate repositories specifically associated with published, submitted, or presented papers on fake news detection models. To identify these repositories, we initiated a search on GitHub for all repositories containing the keywords "fake news" or "misinformation" created after 1st January 2019, with a minimum of 2 subscribers and larger than 1 MB in size. We arrived at these search criteria after experimenting with various options to balance, including as many relevant repositories as possible while avoiding small test repositories, class assignments, and other less significant repositories that would consume our limited API requests. Following this process, we identified 445 repositories related to "fake news" and 71 related to "misinformation" as potential candidates.
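A hedged sketch of this search, using the PyGithub library listed in Section III-D, is shown below; the exact query string and token handling are illustrative, and since GitHub's search API does not filter on subscribers, that criterion is applied after retrieval.

```python
# Illustrative repository search: keywords, creation date, and size via the
# search query; the subscriber threshold is applied client-side.
from github import Github

gh = Github("YOUR_GITHUB_TOKEN")  # placeholder token

candidates = []
for keyword in ("fake news", "misinformation"):
    query = f'"{keyword}" created:>2019-01-01 size:>1000'   # size qualifier is in KB
    for repo in gh.search_repositories(query=query):
        if repo.subscribers_count >= 2:
            candidates.append(repo.full_name)

print(len(candidates))
```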
We refined our analysis by requesting specific attributes from GitHub and eliminating repositories that did not relate to model-related research projects. Firstly, we excluded repositories with an associated URL containing '.ai', '.io', '.guru', '.br', '.Br', or 'flask'. These URLs typically represented demo applications or advanced class or resume-building projects. Next, we included any repository that had an associated URL linking to a paper or research site and mentioned "paper", "conference", or "proceedings" in either the GitHub description or the 'README.md' file. Finally, we required each repository to contain at least one '.py' file for our analysis. Furthermore, we manually excluded a few repositories that caused third-party libraries to crash while parsing the Python code, mainly those written in Python 2.7 or earlier versions. From this exploration, we ended up with 39 GitHub repositories.
Apart from the aforementioned details, we searched for citations of each repository that had an associated paper. To accomplish this, we utilized two platforms, namely Google Scholar [1] and Semantic Scholar [5]. We manually examined each repository with an associated paper, determined the paper title, and manually searched these websites to gather the number of citations. This data was then used alongside the stars and subscribers as independent variables for the regression modeling.
We manually retrieved the source code repositories for three major works [12, 27, 31] in the fake news domain that were not available on GitHub and added them to our analysis directory. These repositories were maintained on publicly accessible cloud solutions. To compensate for the missing stars and subscribers (followers) counts, we assigned them the average values for the overall sample. We gathered citations for these repositories in the same manner as for the other repositories.
After applying the inclusion and exclusion criteria mentioned earlier, we obtained a total of 42 repositories and projects. We used this dataset to conduct the planned analysis to explore the relationship between code metrics, repositories, and citations.
### _Data Analysis_
We utilized two popular Python libraries (Radon & Lizard) to compute code metrics on Python code, calculating several code quality, complexity, and maintainability metrics typically used in software engineering. Tables I and II display these metrics and their corresponding definitions. We used the Radon [4] and Lizard [2] Python packages to calculate these metrics on a per-file and function level. We then aggregated these metrics by taking the mean of the file-level measures up to the repository level for comparing projects and analyzing the relationship between the metrics and popularity as represented by GitHub stars and subscribers. Additionally, we calculated several associated meta-variables such as the presence of a requirements.txt file, a README.md that exceeds a certain length, and which deep learning libraries (PyTorch, Keras, and/or TensorFlow) the Python code references or uses.
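A minimal sketch of how such per-file metrics might be computed and rolled up to the repository level is shown below. The exact column names and the radon >= 4 Halstead layout (h_visit(...).total) are assumptions; this is not the authors' pipeline.

```python
# Per-file metric extraction with radon and lizard, aggregated with pandas.
from pathlib import Path

import lizard
import pandas as pd
from radon.metrics import h_visit, mi_visit
from radon.raw import analyze

def file_metrics(path):
    source = Path(path).read_text(encoding="utf-8", errors="ignore")
    raw = analyze(source)                      # loc, lloc, sloc, comments, ...
    halstead = h_visit(source).total           # volume, effort, time, bugs, ...
    functions = lizard.analyze_file(str(path)).function_list
    ccn = [f.cyclomatic_complexity for f in functions]
    return {
        "loc": raw.loc,
        "lloc": raw.lloc,
        "comments": raw.comments,
        "mi": mi_visit(source, multi=True),    # maintainability index
        "volume": halstead.volume,
        "effort": halstead.effort,
        "bugs": halstead.bugs,
        "mean_ccn": sum(ccn) / len(ccn) if ccn else 0.0,
    }

def repo_metrics(repo_dir):
    rows = [file_metrics(p) for p in Path(repo_dir).rglob("*.py")]
    return pd.DataFrame(rows).mean()           # mean of file-level measures
```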
Table IV shows summary statistics of the variables for the calculated metrics, covering the repositories on GitHub and the remaining projects. The last row, i.e., the maximum value of each measured metric, is particularly interesting because it reveals some of the largest outliers. This is also visible in the histograms of the model variables in Fig. 2 and the line graphs of all the variables in Fig. 3.
### _Experiment_
Once we gathered the per-repository data in a tabular format, we conducted various exploratory data analyses to address the research questions. We then performed feature selection among the metrics using Recursive Feature Elimination (RFE) methods from the Sklearn library to generate
a parsimonious model from the complete set of potential regressors that would allow us to draw conclusions about the relationship between some of the metrics and measures of popularity (i.e., stars, subscribers, and citations). With 20 possible explanatory variables, many of which were highly collinear with one another (e.g., lines of code, logical lines of code, and average lines of code), we utilized the RFE approach to examine progressively smaller subsets of variables that exhibited the highest correlations with the target variables (i.e., stars and citations).
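The feature-selection step could look roughly like the following sketch; the synthetic stand-in data, the plain linear-regression estimator, and the choice of five retained features are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative Recursive Feature Elimination (RFE) over the candidate regressors.
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Stand-in for the repository-level metric table (20 candidate regressors).
X = pd.DataFrame(rng.normal(size=(42, 20)),
                 columns=[f"metric_{i}" for i in range(20)])
y = rng.poisson(20, size=42)                      # stand-in for citation counts

selector = RFE(estimator=LinearRegression(), n_features_to_select=5)
selector.fit(X, y)
print("Retained regressors:", list(X.columns[selector.support_]))
```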
We also noted that the maintainability index was an approximately normally distributed variable. Ultimately, after settling on the five variables identified by the RFE method (i.e., maintainability index, lines of comments, Halstead volume, effort, and bugs), we employed Statsmodels ols methods to construct a linear model to measure the effects of the predictors. We also incorporated some robust (Huber) models after considering some of the outliers in the data, which enabled us to examine the sensitivity of the models to outliers in certain predictors. The robust linear models yielded more consistent results and reduced the impact of large outliers in the independent and dependent variables. Thus, we present the robust models in Table IV.
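The regression step itself could be sketched as below with statsmodels: an OLS fit via the formula API and a robust fit using Huber's norm. The synthetic data frame and variable names are placeholders for the repository-level table, not the authors' model specification.

```python
# OLS and robust (Huber) regression of citations on the retained metrics.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "gs_citations": rng.poisson(25, 42),          # placeholder citation counts
    "mi": rng.normal(50, 10, 42),
    "comments": rng.poisson(200, 42),
    "volume": rng.normal(5000, 1500, 42),
    "effort": rng.normal(2e5, 5e4, 42),
    "bugs": rng.normal(2, 0.5, 42),
})

ols_fit = smf.ols("gs_citations ~ mi + comments + volume + effort + bugs",
                  data=df).fit()

X = sm.add_constant(df[["mi", "comments", "volume", "effort", "bugs"]])
huber_fit = sm.RLM(df["gs_citations"], X, M=sm.robust.norms.HuberT()).fit()
print(huber_fit.summary())
```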
### _Software Tools Used_
For our development, we used Python 3.8 combined with PyGithub, radon, lizard, pandas, sweetviz, pandas_profiling, statsmodels, sklearn, semanticscholar, scholarly, and seaborn as the main third-party libraries for data analysis, modeling, and visualization.
## IV Results
Here we examine the results of our analysis of the various code metrics, present the results of the regressions between the metrics, citations, and GitHub measures, and our ideas about the underlying reasons for the observed relationships.
Fig. 1: Correlation Heatmap of Calculated Metrics
**RQ1: Is there any correlation between different code metrics and research citations?**
According to Table IV, all the predictors, including mi (maintainability index), comments (number of lines of comments), volume (Halstead volume), effort (Halstead effort), and bugs (Halstead bugs), exhibit a statistically significant association with both Google Scholar and Semantic Scholar citations in the robust model. Specifically, lines of comments, bugs, and code volume demonstrate a negative correlation with both gs and ss citations, while mi and Halstead effort exhibit a positive correlation. Interestingly, these findings align with our intuition: more verbose, buggier, and more heavily commented code is linked to fewer paper citations, whereas more maintainable, high-effort code is associated with more citations.
**RQ2: Is there any correlation between different code metrics and GitHub popularity?**
As mentioned previously, the measures of GitHub stars and subscribers are considerably more erratic and possibly more biased than citations. Consequently, we find the associations between these measures and the various code metrics to be less compelling. According to Table IV, the only statistically significant relationship is with mi (the Maintainability Index), which displays a positive association with both stars and subscribers. While the comments variable is not statistically significant, it exhibits a negative association with the GitHub measures, a pattern that holds across all the measures. The remaining Halstead metrics, however, have coefficients whose signs vary between stars, subscribers, and citations, indicating no persistent association.
**RQ3: Is there any correlation between GitHub popularity and associated paper citations regarding code metrics?**
Table IV and the correlation heat map presented in Fig. 1 both demonstrate a strong, statistically significant, positive correlation between the Maintainability Index and stars, subscribers, as well as both Google Scholar and Semantic Scholar citations. This was the most robust and consistent finding throughout our study, exhibiting significance at the 5% and higher level across all models. Furthermore, all four models showed consistent coefficient signs and occasional significance of the regressors. Interestingly, the most unusual discovery was the persistent negative correlation between the number of comments and code popularity (across both the GitHub measures and citations), which could indicate that high-quality code documents itself, as per the old adage.
Overall, we are satisfied with the general level of intuitiveness of the results and the simplicity of interpretability of the coefficients. While it does not suggest any underlying causality, we believe it is a strong indication of a few intuitive measures that reflect high-quality code and research, which are more likely to be rebuilt upon and reused to advance the science of fake news detection.
## V Threats to Validity
Our focus was solely on repositories that contained published papers. A simple search for fake news on the GitHub platform yielded over 12,000 repositories. However, we narrowed our search to repositories that aligned with a publication in order to identify the best available tools or codes in the fake news domain. Most repositories associated with fake news were either class projects or work without any related publication. As a result, our conclusions may not be fully comprehensive or generalizable, given the limited number of repositories we found. Nevertheless, we have included the most popular repositories and tools associated with fake news, which provide a good understanding of the topic.
We sourced most of the repositories for our analysis from the GitHub platform. Still, other popular code hosting platforms, such as Bitbucket and GitLab, may contain more robust tools in this domain. This study did not consider proprietary industry-level software or tools, particularly those developed by large companies and used in conjunction with search engines. However, since GitHub is the most widely used public code hosting platform, it largely obviated the need to evaluate similar repositories on other code hosting platforms. We found this threat to be minimal. Furthermore, our study focused solely on publicly available and easily accessible free tools. We did not consider paid or large-scale industry-based tools within the scope of this research.
Our study incorporated several research techniques and features. However, conducting a more comprehensive analysis, such as surveying users, could provide further insight into their experience with fake news detection tools. To assess the effectiveness of existing tools, we utilized popularity measures such as stars, followers count, and forks. An expanded analysis with additional features could potentially provide more information on whether popularity is a reliable indicator of the best code, particularly in the context of fake news detection.
## VI Discussion and Conclusion
In recent years, fake news has emerged as a major concern. One approach to combat the spread of fake news is to create and distribute high-quality research associated with identifying and analyzing misinformation. A crucial aspect of this effort is sharing research code repositories. By identifying important characteristics of this code, we can gain valuable insights into improving the research quality.
This paper presents code metrics and code repository analysis of fake news studies to better understand the characteristics of the most popular and highly cited code and research. We also examine the relationship between source code quality and associated measures of repository popularity. Through this analysis, we uncover several significant findings.
Fig. 2: Histograms of Model Variables
Fig. 3: Graph of Values of Repository Metrics
Our analysis shows that a typical research project repository contains approximately 2000 lines of code, a few hundred lines of comments, and a maintainability index of approximately 50, which falls in the middle of the 1 to 100 scale. In addition, a typical source code file is relatively straightforward, with only a few independent code paths, likely contributing to its increased maintainability.
We found that papers associated with repositories exhibit a strongly right-skewed distribution regarding citations, with seminal papers in the area or those published in well-known venues like Nature being more heavily cited. In contrast, measures of popularity such as GitHub stars or subscribers exhibit weaker associations with the code metrics we measure than citations. We also observed that citations from both Google Scholar and Semantic Scholar are highly similar, with Google Scholar having an average of 22% more citations. Furthermore, the code metric measures and popularity measures represent a mixture of normally distributed measures and those with many outliers on both the high and low ends of the distributions, which complicates modeling and needs further attention in cleaning the data set.
Overall, we found that more popular fake news repositories and associated papers with higher citation counts tend to have more maintainable code measures, more complex code paths, a larger number of lines of code, a higher Halstead effort, and fewer comments.
In summary, our analysis of code repositories provides valuable insights into the characteristics and quality of fake news repositories on GitHub. These findings can hopefully inform efforts to identify and track fake news, facilitate the development of research in this area, and ultimately help mitigate the negative impact of this information on individuals and society.
|
2304.04215 | Charge transport modulation by a redox supramolecular spin-filtering
chiral crystal | The chirality induced spin selectivity (CISS) effect is a fascinating
phenomena correlating molecular structure with electron spin-polarisation in
excited state measurements. Experimental procedures to quantify the
spin-filtering magnitude relies generally on averaging data sets, especially
those from magnetic field dependent conductive-AFM. We investigate the
underlying observed disorder in the IV spectra and the origin of spikes
superimposed. We demonstrate and explain that a dynamic, voltage sweep rate
dependent, phenomena can give rise to complex IV curves for chiral crystals of
coronene bisimide. The redox group, able to capture localized charge states,
acts as an impurity state interfering with a continuum, giving rise to Fano
resonances. We introduce a novel mechanism for the dynamic transport which
might also provide insight into the role of spin-polarization. Crucially,
interference between charge localisation and delocalisation during transport
may be important properties into understanding the CISS phenomena. | Michael Verhage, Pantelis Bampoulis, Marco D. Preuss, Ivo Filot, Heiner Friedrich, Rick R. M. Joosten, E. W. Meijer, Kees Flipse | 2023-04-09T11:08:42Z | http://arxiv.org/abs/2304.04215v1 | # Charge transport modulation by a redox supramolecular spin-filtering chiral crystal
###### Abstract
The chirality induced spin selectivity (CISS) effect is a fascinating phenomenon correlating molecular structure with electron spin-polarisation in excited state measurements. Experimental procedures to quantify the spin-filtering magnitude generally rely on averaging data sets, especially those from magnetic field dependent conductive-AFM. We investigate the underlying disorder observed in the IV spectra and the origin of the superimposed spikes. We demonstrate and explain that a dynamic, voltage sweep rate dependent, phenomenon can give rise to complex IV curves for chiral crystals of coronene bisimide. The redox group, able to capture localized charge states, acts as an "impurity" state interfering with a continuum, giving rise to Fano resonances. We introduce a novel mechanism for the dynamic transport which might also provide insight into the role of spin-polarization. Crucially, interference between charge localisation and delocalisation during transport may be an important property for understanding the CISS phenomenon.
## 1 Introduction
Heightened research into the field of chiral supramolecular self-assembly using non-covalent interactions has led to the emergence of non-trivial electron transport phenomena through metal-molecule junctions [1, 2, 3]. Of fundamental and application-driven importance is the intricate interplay between molecular chirality and spin-polarisation (SP) of electrons, as is broadly observed in electron transmission [4], magnetoresistance devices [5, 6, 7, 8] and electrochemistry [9]. From an experimental point of view, correlating molecular structure with the SP property is important for achieving a deeper understanding of the mechanism behind CISS, as it would allow uncovering molecular design rules. Designing specific molecules with active control over the spin-polarisation combined with low electrical resistance for high current densities should hence become possible. Evidence is emerging that besides chirality, the molecular layer thickness [7] and polarisation [2] are properties regulating the CISS effect. Although considerable research [1] has been devoted to demonstrating the link between chirality and spin filtering and to forming theoretical models of the CISS effect, insights into the electronic transfer mechanisms and the modulation of charge through a molecular transport junction are still to be gained. Our stacked crystalline layers are distinct from a tunneling junction [10] in that the layer thickness exceeds that of a single molecule, and hence charge transfer by tunneling is not expected to be the dominant transport mechanism [11, 12, 13, 14, 15].
The aim of this work is to correlate two fundamental properties of a chiral molecule: the presence of two imide redox groups [3] and an electronically conductive continuum formed by the \(\pi\) - \(\pi\) stacked coronene core or by a highly diffuse molecular orbital (HDMO) [16], which introduces significant charge delocalisation and gives rise to the CISS effect via Fano resonances. The combination of these two properties, together with the proposal of a charge modulation model [17], is what we argue is a missing link in the understanding of the CISS effect of such molecules. Based on conduction measurements we suggest a novel mechanism for a microscopic understanding of the measured charge dynamic effects. Charge localisation, in our case driven by the imide groups [18] and side groups, is able to induce localized charge states which act as "impurity" states, similar to Anderson's model [19] **(Fig. 1A)**. To this end, we study chiral coronene bisimide (CBI-GCH), which demonstrated self-assembly into P- or M-type second-order hierarchical helices as previously reported by Kulkarni _et al._[20] and yielded a very high SP degree of up to 50% at room temperature [3]. Such significant SP at room temperature would make SP-based devices a possibility and is highly desirable, for example, in catalytic applications [21, 22, 23]. The charge transport of such a redox molecular junction is described by Migliore and Nitzan [17] within transition rate process models. Indicative of the fast increase of the bias voltage are what we assign as Fano-like peaks (**Fig. 1B**) and (**Fig. 1C**). Fano resonances emerge naturally from the coupling of a discrete state with an electronic continuum [10]. Similar coupled systems relying on quantum interference have been created by coupling molecular magnets to a continuum introduced by carbon nanotubes, yielding spin polarisation based on Fano resonances [24, 25].
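For reference, the canonical Fano lineshape (a textbook expression, not reproduced from this work) captures how such a discrete-continuum coupling shapes the transmission:

\[
\sigma(\epsilon)\;\propto\;\frac{(q+\epsilon)^{2}}{1+\epsilon^{2}},
\qquad
\epsilon=\frac{2\,(E-E_{\mathrm{res}})}{\Gamma},
\]

where \(q\) is the asymmetry parameter, \(E_{\mathrm{res}}\) the energy of the discrete (impurity-like) level and \(\Gamma\) its width set by the coupling to the continuum; \(q\to\infty\) recovers a Lorentzian peak while \(q\to 0\) gives a symmetric anti-resonance (dip).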
One robust experimental approach to identifying the transmission of the preferential spin of a chiral molecule is derived from averaging many IV spectra and observing a change in the current-voltage behaviour, e.g. resistance, between the two magnetic field states. A consequence of this approach is that, by approximation, a relatively "smooth" exponential IV curve is observed. Yet, even after averaging many IV spectra, the dynamics in the averaged IV curves is not suppressed [26, 9, 27]. In single IV spectra, dynamic charge transfer is expressed in "spikes" [17]: a rapid rise and fall of the current over a small change in electric field magnitude [3, 28, 29, 30, 27]. We study CBI-GCH with conduction-AFM (C-AFM) and discover charge transport variability expressed in non-monotonous IV curves and a reduction in electrical resistance induced by sequential IV sweeping. The transport characteristics are very dependent on lab-accessible parameters such as the voltage sweep rate, the normal force of the C-AFM probe and the local morphology of the chiral aggregate. Finally, we discuss molecular properties that we predict can be actively tuned to enhance the CISS magnitude and enhance current densities.
## 2 Results and Discussion
### Structural characterisation of CBI-GCH supramolecular crystals
In our aim of studying the dynamics of CBI-GCH's redox-induced charge modulation [17] and its consequential correlation to the CISS effect, we explored the self-assembled, hierarchical ordering [31] of CBI-GCH crystals. These crystals show polymorphism [31] and yield high-quality crystals **(Fig. 1E, F)**. The conductive insights gained from well-ordered crystals enable a better understanding of the behaviour of the ductile supramolecular polymeric fibers [3], and hence we regard the former as an enticing model system to study the CISS effect.
The CBI-GCH discotics are self-assembled in methylcyclohexane (MCH) into helical aggregates driven by non-covalent interactions [3]. Our CD (**Supp1**) and DFT geometry optimization **(Fig. 1D)** of this aggregate support helical self-assembly with a mean interdiscotic distance of 3.5 Å and a planar rotation of 10.5\({}^{\circ}\)**(Fig. 1a)**. Remarkably, the polymers self-assemble into stacked sheets **(Fig. 1B)** on a relatively short time scale of weeks when dissolved in MCH. For structural and conductive studies, these crystals were drop-cast (10 - 20 \(\mathrm{\SIUnitSymbolMicro L}\), 20 \(\mathrm{mmol}\)) onto several substrates of freshly cleaved mica and HOPG and a Si wafer. Across these substrates the crystal density varies widely, driven by isodesmic growth [20] and drying-induced concentration gradients. Elongated structures were observed with AFM **(Fig. 1F)**. We note that the crystals show clustering behaviour (**SuppS2**), which likely points to long-range interactions, such as dipole-driven electrostatic interactions [31]. From AFM imaging, crystals with a mean length of several hundred nanometers were most commonly observed, but crystalline sheets up to 15 \(\mathrm{\SIUnitSymbolMicro m}\) wide (**SuppS3**) were also noted. Besides the lateral dimensions, the remarkable quality of crystallisation is supported by the extremely smooth surfaces of the sheets, with a roughness below 0.2 nm. Such smoothness rivals that of typical self-assembled monolayers, yet in our case the crystals can be stacks of hundreds of molecular sheets.
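As a rough back-of-the-envelope consistency check of our own (these numbers are not quoted in the text), the reported interdiscotic distance and planar rotation imply roughly

\[
N \approx \frac{360^{\circ}}{10.5^{\circ}} \approx 34\ \text{molecules per full turn},
\qquad
p \approx 34 \times 0.35\ \mathrm{nm} \approx 12\ \mathrm{nm}
\]

for the helical pitch of the 1D aggregate.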
We examined the nanoscale ordering of several crystals with PeakForce tapping AFM and observe that the sheets consist of parallel, ordered supramolecular polymers (**Fig. 1D**), with a peak-to-peak spacing of \(S=2.5\pm 0.2\) nm. Because the diameter of CBI-GCH between the two peripheral benzene groups is around 2.03 nm, this suggests the alkyl chains are not fully extended and are therefore potentially back-folded or interpenetrating amorphously in the sheets. We did observe significant adhesion between the AFM tip and the crystal surface, likely indicating that the alkyl groups
Figure 1: **Charge modulation within CBI-GCH crystals.** (**A**) Crystalline CBI-GCH depicted with imide redox groups with LUMO-0 (blue oval) and HOMO-0 (red semi-circle) located at the side groups, trapping charge and modulating the main transport channel [17]. The coupling between the localized charges and a continuum across the coronene core results in Fano resonances. (**B, C**) Fano resonances observed in IV spectra. (**D**) CBI-GCH helical fibers with DFT optimized geometry. (**E**) Crystallisation of CBI-GCH in MCH with DFT optimized geometry. (**F**) AFM topography taken at the terminus of a crystal showing the stacked platelets. (**G**) High resolution AFM image indicating parallel fibers in the direction of the black dotted lines. The black circles indicate the termini of single supramolecular polymers. (**H**) Selected area diffraction (SAED) of a CBI-GCH crystal. (**I**) CD spectra of crystals deposited on ITO and ITO/Au substrates.
were sticking out of the surface and adhering to the AFM tip (**SuppS4**). The supramolecular polymers are found to be of finite length, as highlighted by the black circles (**Fig. 1G**) showing segmented supramolecular polymers. This observation supports the notion that supramolecular polymers with a distribution in length crystallise in solution following isodesmic growth [20]. This degree of close-packed crystallisation may be extremely beneficial for device applications, e.g. magnetoresistance devices, compared to fibers, which suffer from pin-holes [3].
To gain further insight into the CBI-GCH crystalline ordering, we turned to selected area electron diffraction (SAED) with a transmission electron microscope (TEM). **Fig. 1H** shows a SAED pattern on a crystal (**SuppS5**). The sharp contrast of the diffraction peaks highlights the isotropic ordering within the crystals. A fiber-to-fiber spacing \(S\) of 2.49 nm is obtained. This matches the value of 2.5 nm obtained with the high-resolution AFM very well. The intramolecular spacing is found to be 4.7 Å. To investigate whether the helical structure of the polymers was still preserved when crystallised, we simulated several crystalline ordering phases and computed their frequency spectra with a Fast Fourier Transform (FFT) to compare with the SAED pattern. Because of the symmetrically shifted diffraction pattern, we simulated a herringbone structure as a likely internal structure candidate (**SuppS5**). Coronene is also known to crystallise in this phase [32]. The herringbone phase seems to best reproduce the SAED pattern, and its FFT is given in (**SuppS5**). This ordering differs from the helical phase of the 1D polymers, highlighting the polymorphic behaviour of CBI-GCH between an H-aggregate and a J-aggregate. Such behaviour is perhaps surprising and shows the strong dynamics of these supramolecular coronene based molecules. DFT calculations corroborate the polymorphic aggregation, with the J-aggregate found to have the same energy minimum as the helix (**Fig. 1E**). The monomers are not rotated but slipped in-plane, and the interdiscotic distance is found to be 4.5 Å, which matches that obtained by SAED rather well. Future Molecular Dynamics calculations could possibly shed further light on the phase transition dynamics [33]. Finally, we measured CD spectra of the crystals deposited on both ITO/Au and bare ITO substrates (**Fig. 1I**) and identified a chiral signal. Although we cannot exclude the presence of 1D helical fibers mixing with the signal, the CD signal is starkly different from those obtained from 1D fibers in solution [20]. Hence, the ordering of CBI-GCH into crystals is likely to preserve the chirality, dictated by the point chirality embedded in the alkyl chains [3, 20].
### Electrical transport
To shed light on the charge transport characteristics we turn to C-AFM (**Fig. 2A**). The C-AFM tip acts as a movable electrode and the current is measured perpendicular to the surface. Crystals were first imaged with AFM, where care was taken to minimize lateral-force-induced damage (**SuppS6**). The crystals were drop-cast on HOPG and the local crystalline morphology and thickness (**Fig. 2B**) were registered before commencing IV spectroscopy. An example of a 20 nm crystal where IV spectroscopy was performed is highlighted with the yellow arrow in **Fig. 2B**. Subsequently, the tip was positioned on a location in soft repulsive contact, at the snap-to-contact point, with a nominal normal force of 12 nN. In **SuppS7** the force calibration is given. Furthermore, we chose to use diamond tips due to their extreme mechanical stability (**SuppS8**). The dynamic memory effect [17] and transport variation can be investigated by sequentially sweeping the bias between positive and negative voltages (**Fig. 2A**) at the same aggregate location. For all IV spectra the ramp rate was set beforehand and defined the time taken for each voltage sweep.
In **Fig. 2C**, 20 sequential IV sweeps taken at negative bias voltages are superimposed, which show unexpected dynamic resistance variation. The first few IV curves (dark blue) indicate the crystal layer is a strong insulator, with little current flowing even at biases as high as 8 V. Gradually, by continuous sequential bias sweeping, the electrical gap is reduced. For the last 5 curves (yellow curves), the gap stabilises around \(\pm 2\) V. We note that the tip force was kept constant during measurement, and no lateral drift was observed. The surprising characteristic of this dynamic gap reduction is its reproducibility. Across more than 1000 IV spectra, the dynamic change in gap is steadily observed on many crystals of variable thickness and composition. Hence, we establish this behaviour as an intrinsic property of this crystalline aggregate. The large variability and spread in current also present a problem for averaging the sequential IV spectra. In **Fig. 2D**, violin plots of the current distribution are shown for several bias voltages. The non-normal distribution of the current is evident, which limits single-location, repeated-bias-sweep averaging. For large bias, i.e. above 6 V, saturation of the current by the current amplifier leads to a skewed distribution.
Next, we increased the normal force \(F_{n}\) applied by the tip to the crystals step-wise by 5 nN to reduce the layer thickness, as schematically illustrated in **Fig. 2A**. The bias was swept 50 times before increasing \(F_{n}\) further. By reducing the layer thickness the electrical resistance is reduced as transport becomes more efficient. We confirm tip indentation by observing, on many occasions, tip
Figure 2: **C-AFM IV spectroscopy.** (**A**) C-AFM experimental setup, highlighting the normal force \(F_{n}\) of the tip used to control indentation of the tip into the crystals, with the reduction of layer thickness indicated by the black arrows. Sequential IV sweeps are schematically illustrated as swept between positive and negative bias voltage. (**B**) Topographic AFM image showing the location of spectroscopy, taken at the yellow arrow. The black area indicates an area after large-force tip indentation. (**C**) 20 sequential IV sweeps at the same location, with constant \(F_{n}\), indicating a gradual reduction of the electronic gap. (**D**) Plotting the IV curves of **C** in a violin plot shows a non-normal distribution, especially for larger bias voltages. (**E**) Gradually increasing \(F_{n}\) for a 40 nm thick crystal in 5 nN steps (white dashed lines), a gradual decrease of the gap can be observed; also for constant \(F_{n}\) (between two white dashed lines) the gap gradually decreases. The colour plot shows 650 IV curves taken. (**F**)
holes in the aggregate layer; an example is highlighted with the black circle (**Fig. 2B**). In this experiment (**Fig. 2E**) the reduction of the gap is continuously observed with increasing \(F_{n}\). Hence, the transport is not modulated by tunneling, as is also not to be expected for a layer thickness of \(20\,\mathrm{nm}\), since we would otherwise have observed an exponential reduction in gap resistance. We also attribute the gradual gap decrease to the highly ordered nature of the CBI-GCH molecules in the crystalline phase, where the increased \(F_{n}\) is not likely to significantly distort the lateral ordering, which may be the case for the 1D fibers.
An important expression, and perhaps overlooked property, of the dynamic conductive behaviour of chiral molecular aggregates can be found in the peculiar rapid increases and decreases in current magnitude, which we refer to as spikes or Fano resonances. In **Fig. 2F** examples of such peak structures are given. The bias sweep rate was varied between \(3\,\mathrm{Hz}\) and \(21\,\mathrm{Hz}\). These spikes are superimposed on the current transport and show a steep slope, indicative of highly efficient charge transport, within a small bias range of several tens of mV. The spikes are reproducible throughout all IV spectra (over 1000 curves have been taken across many positions on the aggregates). Hence, when the spikes are artificially removed by averaging many IV spectra across several locations, as is predominantly done in the CISS literature [3, 27, 28, 29, 30], a vital element of the charge transport is likely lost. For significant bias voltages exceeding several volts (note the electric field is large, on the order of \(10^{8}\,\mathrm{V}\,\mathrm{m}^{-1}\)) (**SuppS9**), a rapid increase of the current is observed. This regime always coincides with the emergence of peaks superimposed on the current transport, which highlights that the current fluctuations are not merely dependent on the bias potential but on the occurrence of significant charge flow [14]. The occurrence of peaks or Fano resonances suggests that the CBI-GCH molecules switch their conductive state _collectively_, as hundreds of molecules are simultaneously probed during C-AFM.
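The quoted field scale follows directly from the bias and the layer thickness; as an illustrative estimate of our own for the roughly 20 nm thick crystals discussed here,

\[
E=\frac{V}{d}\approx\frac{5\ \mathrm{V}}{20\ \mathrm{nm}}=2.5\times10^{8}\ \mathrm{V\,m^{-1}}.
\]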
Dielectric breakdown can likely be excluded as the cause of the rapid increases in current, or spikes, because even after several sweeps the spikes in the current do not saturate. Even a current decrease can be observed at larger voltages after several consecutive current spikes. For very thin layers, i.e. monolayers, and high voltages in excess of several volts, dielectric breakdown cannot be ruled out, but it is only likely for thin crystalline layers and very high electric fields. Also, slightly increasing the tip force \(F_{n}\), effectively reducing the junction thickness, does not seem to lead to an electrical short, which cannot be explained by dielectric breakdown. Note that the AFM tip force variation is small because of the active feedback operating during the acquisition of the IV spectra.
We consider the dynamic change in resistance by charge capture in light of the presence of the imide redox sites [34] of CBI-GCH. The theoretical work of Migliore and Nitzan [17] describes a model for a molecular junction containing redox sites. We conjecture that the imide groups on CBI-GCH function as mediators of the charge transferred through the delocalized orbitals across the core of the molecule, in this case coronene. This enables the manifestation of an interplay between charge transfer channels with different timescales [17]. The presence of (noise) spikes for this molecule, and by extension more generally in the C-AFM CISS literature [3, 27, 28, 29, 30], can be regarded as the expression of stochastic hysteresis between the slow and fast transport channels. The expression of multi-stability of the transported current magnitude is a transient phenomenon, guided by a complex interference of localized and delocalized charges. As the timescale of the slow channel has a subtle influence on the fast channel, the system is continuously switching between the two transport channels. An expression of this behaviour should be found in the dependence on the voltage sweep rate applied to the metal-molecule-metal junction [17]. The occupation state of the redox center, which influences the transmission through the charge channel, can hence be probed.
We ramped the bias with different sweep times per IV curve. A 20 nm thin crystal on HOPG was probed. **Fig. 2G** and **2H** show 20 superimposed IV curves taken at 1.4 Hz and 15.6 Hz, respectively. The former corresponds to a slow sweep of the IV curve, the latter to a fast sweep. Between the two, we observe behaviour that is fundamentally different, although taken at the exact same spot and tip force on the molecular crystal. In **Fig. 2G**, we observe no hysteresis and almost smooth, monotonic IV curves. Only a few spikes are observed. In this situation the sweep rate is slow enough that the transport current can adiabatically follow the bias potential, and the memory effect induced by the redox site is almost nullified [17]. In contrast, by sweeping fast (**Fig. 2H**), a significant hysteresis is observed and the spikes are present, indicating the redox site localises carriers within the model of Migliore and Nitzan [17]. The observed peaks or Fano resonances are equidistantly spaced, indicating the resonant behaviour of carrier capture and release. We likely only observe this periodic behaviour (**Fig. 1B, C** and **Fig. 2G**) for thin crystalline layers, i.e. thinner than 20 nm. For thicker crystals the transport disorder introduced by the different localization rates introduces significant decoupling of the peak spacing and hence a more disordered peak spacing (**Fig. 2F**).
Figure 3: **Spin filtering by contact potential shift under application of a magnetic field.** (**A**, **B**) Topographic (left column) and contact potential difference (CPD, right column) images measured with no magnetic field and with a downward/upward magnetic field. A relative shift in CPD is evident only for the upward-pointing magnetic field. (**C**) Statistical distribution of the CPD in **B** as a function of the magnetic field. (**D**) Plotting all CPD versus height data of **B** shows a relatively large spread and an approximately linear increase as a function of height.
### Probing spin-polarisation of CBI-GCH crystals
Having established the structure and transport characteristics of CBI-GCH crystals, we turned to probing the possible spin-polarisation of the crystalline aggregates. We measured the change in contact potential difference (CPD) with electrostatic force microscopy (EFM), following work from Naaman _et al._[35]. The authors successfully demonstrated spin-dependent electronic resistance variation at metal-chiral molecule interfaces by employing (a)chiral self-assembled organic monolayers (SAMs)[35]. The benefit of this method is that no tip damage is incurred, owing to the non-contact nature of EFM. A fundamental relation between dipole orientation and chirality-induced spin filtering was also found by Eckshtain-Levi _et al._[36]. In this work, we performed similar EFM measurements and deposited the CBI-GCH crystals on gold (60 nm) coated mica (**Fig. 3A**). Note that the large SOC of gold is likely a source of SP[37]. The CPD is measured as a function of no magnetic field (No M) and of the magnetic field pointing "up" (M\({}_{\mathrm{up}}\)) and "down" (M\({}_{\mathrm{down}}\)). Because we have to manually invert the permanent magnet placed below the sample, identifying the exact same topographic area is difficult. However, the experimental data are taken from the same area of the molecular aggregate within 5 \(\mathrm{\SIUnitSymbolMicro m}\) positional accuracy.
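For reference, the quantity extracted in such an EFM/KPFM-type experiment is the contact potential difference set by the tip and sample work functions (standard textbook relation, with the sign convention depending on whether the bias is applied to the tip or the sample):

\[
V_{\mathrm{CPD}}=\frac{\phi_{\mathrm{tip}}-\phi_{\mathrm{sample}}}{e},
\]

so a magnetic-field-dependent shift in \(V_{\mathrm{CPD}}\) reflects a spin-dependent change in the effective surface potential of the chiral layer.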
The crystals show a significant asymmetric CPD shift of several tens of mV, depending on the relative orientation of the external magnetic field (**Fig. 3B**). This confirms that the crystals are spin-polarisers, akin to earlier reports for 1D supramolecular polymers[3] and other EFM reports[35]. Because the crystals are stacked and the molecular aggregate layer thickness thus varies, a distribution in the magnetic-field-dependent CPD is observed. Hence, we analysed the CPD data of **Fig. 3B** to obtain the statistical distribution (**Fig. 3C**). The statistical distribution shows a relative shift of 20 % in peak position for the upward-oriented magnetic field with respect to no magnetic field. We must note that the spread in the distribution is relatively large, and overlap of the CPD between all magnetic states exists. This could hint that the ordering of the crystals within the aggregates is important for superposition or reduction of the total polarisation. Also, for increasingly thicker films, the CPD shifts approximately linearly (**Fig. 3D**). This indicates an internal polarisation/dipole, which we relate to the chiral nature of the molecular aggregate.
Figure 4: **Schematic model indicating transport modulation driven by the imide redox sites of CBI-GCH.** (**A**) Localized states are close and connected to a continuum. The molecule possesses a dipole whose direction and polarisation depend on the handedness and on adsorption on a metallic substrate. A ferromagnetic (FM) layer may be present to inject spin-polarised charge. (**B**) Ramping the bias can lead to charge injection and trapping at the redox sites. (**C**) Spins can couple via exchange interaction with neighbouring trapped charges on the parallel discotics, and can be aligned in an external B-field. (**D**) Further ramping of the bias eventually crosses the Fermi level into the LUMO and leads to coupling with the continuum. (**E**) Charge displacement from the redox sites leads to spin-order loss, and an induction current is generated through the continuum. The magnitude of the induction current is related to the sum of the external bias and the internal polarisation vector. (**F**) Further ramping of the bias leads to the downward slope of the Fano resonance and blocking of current flow. (**G**) Opposite spin localisation. (**H**) The delocalisation of the localised spins into the continuum again leads to an inductive current, but with reduced magnitude due to the polarisation vector subtracting from the total electric field across the molecule.
#### Model
We turn our attention to a novel mechanism to correlate the origins of the charge dynamics of CBI-GCH [17] with its reported high degree of spin polarisation [3]. Migliore and Nitzan [17] describe the charge transfer phenomena within transition rate process models. Here we suggest a novel mechanism for a microscopic understanding of the measured charge dynamic effects (**Fig. 4**). Indicative expressions of the model are what we assign as Fano-like [10] peak structures. A localised state and a continuum state, coupled via the delocalised core of the molecule (**Fig. 4A**), govern the charge transfer. The continuum could be represented by \(\pi\) - \(\pi\) hybridization of the stacked coronene cores. Another possibility is the novel mechanism of highly diffuse molecular orbitals (HDMO). We note that for coronene, HDMO orbitals have been predicted and experimentally verified [38], as they can arise in the excited state [39]. The HDMO share similarities with the super-atomic molecular orbitals (SAMO) observed for curved aromatic molecules [39] and fullerenes [40]. In our case the HDMO requires a dipole, which is provided by the chirality induced by the helix and by screening from the metallic substrate. A ferromagnetic (FM) layer may be used to inject SP carriers, but is not a requirement _per se_.
For CBI-GCH in the ground state, the LUMO-0 is located near the redox imide (**Fig. 1A**). The redox sites can be half-filled, as on-site Coulomb repulsion between the redox sites of the stacked discotics will prevent double occupation and form electron states very close to the LUMO level. For the thin crystal structures embedded between two metal contacts, a fast increase of the bias voltage (**Fig. 4B**, _State I_) above the bandgap will fill the localised redox states. The closeness of the redox sites (**Fig. 1A**) can align the spins in a particular direction due to exchange interaction. In addition, a magnetic field can also be applied to align the spins of the localised charge (**Fig. 4C**). Further increase of the bias voltage will allow the continuum-like states to be filled and coupled to the localised states to form a Fano resonance [41] (**Fig. 4D**, _State II_). The Fano-like resonance can also be described by the Anderson impurity model [19]. Within this model the probability of double occupancy of the localized state is strongly reduced due to the on-site Coulomb repulsion, and the other term in the Hamiltonian represents the hybridization between the redox pendant state and the continuum.
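For concreteness, the single-impurity Anderson Hamiltonian alluded to here reads, in its standard textbook form (not reproduced from this work),

\[
H=\sum_{k\sigma}\varepsilon_{k}\,c^{\dagger}_{k\sigma}c_{k\sigma}
+\sum_{\sigma}\varepsilon_{d}\,d^{\dagger}_{\sigma}d_{\sigma}
+U\,n_{d\uparrow}n_{d\downarrow}
+\sum_{k\sigma}\bigl(V_{k}\,c^{\dagger}_{k\sigma}d_{\sigma}+\mathrm{h.c.}\bigr),
\]

where the \(U\) term captures the on-site Coulomb repulsion that suppresses double occupancy of the redox (impurity-like) level and the \(V_{k}\) term the hybridization with the continuum.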
The continuum will boost the current density so much that a significant spin-orbit coupling may be induced, which, in the presence of the redox sites, can introduce a spin-dependent current through the stack of the CBI-GCH molecules for a short time before the localised charge becomes part of the continuum (**Fig. 4E**, _State III_) [42, 43]. This would implicitly mean that the metal electrodes in the transport set-up need not necessarily be magnetic for observing the CISS effect in CBI-GCH molecular systems, supporting the observed CPD shift on gold/mica substrates. Hereby, the classical spin-orbit coupling of the continuum current contribution, with an additional part from the orbital current, followed by a current dip induced by the spin-order loss (**Fig. 4F**, _State IV_), mimics the Fano shape behaviour. The low conductivity is finally restored and new charges can be trapped at the redox sites (**Fig. 4F**, _State IV_). For the opposite spin, a similar mechanism occurs (**Fig. 4G**, _State II_) with localisation on the redox groups. However, the magnitude of the inductive current into the continuum (**Fig. 4H**, _State IV_) is lower due to the polarisation vector leading to a subtractive current density. Interpreting the charge transport in this way provides a spin-induced contribution to the total current, depending on the spin orientation in relation to the main current direction, hence supporting the spin-polarisation-induced current density difference as measured in the CISS experiments of [3]. In **SuppS10** the spin polarisation model of the enantiomers is discussed.
## 4 Conclusions \(\&\) Outlook
The success attained here in unravelling the reason for the spiky IV spectra measured with a conductive AFM opens a way to understand the fascinating CISS effect. The clue is the combination of a continuum, perhaps an HDMO state, and a state strongly localized at a pendant redox group, which provides the asymmetry in the conduction of spin-polarized charge. Similar systems have been created by coupling paramagnetic molecules to carbon nanotubes, yielding spin polarisation [25]. Localisation of charge and the coupling between the localised charges generate a spin-polarised molecular state. Electronic coupling to a continuum allows for delocalisation via a Fano resonance. We argue that spin polarization is induced by delocalisation of the trapped charge on the redox site into the continuum, generating a large "inductive" current that depends asymmetrically on the polarisation vector driven by the molecular handedness. In future work, electrochemical gating may be used to actively control the redox state of the diimide [18] and thus possibly control the degree of spin-polarisation.
This new view on the role of chirality and spin polarization offers novel insights to define design rules for chiral supramolecular systems and applications in magnetoresistance with a high degree of spin polarisation at room temperature. A strong continuum may be created by actively introducing a super-atomic molecular orbital (SAMO) close to the LUMO level.[16, 39, 40] Such orbitals benefit from their highly delocalised nature beyond the dimensions of the molecule. In this way, free-electron-like conductivity can be introduced for slightly curved aromatic molecules when adsorbed on metal surfaces. By including charge localisation states close to the continuum, i.e. via redox pendant sites, an ideal spin-polarised Fano system may be created based on chiral molecules.
## 3 Materials and Methods
### Substrates
Synthesis of CBI-GCH is described in ref. [20]. Freshly cleaved HOPG substrates were used for the deposition of the crystals for IV spectroscopy. For EFM, the crystals were drop-cast on gold (60 nm) evaporated mica sheets. The mica sheets were freshly cleaved prior to coating. Drop-casting was performed by elevating the sample temperature to 40 \({}^{\circ}\)C to increase the evaporation rate of MCH and thus enhance the local crystalline concentration. Several drop-casts were made consecutively. ITO-coated glass substrates were evaporated with 10 nm of gold and CBI-GCH was drop-cast onto them with the sample held at 40 \({}^{\circ}\)C.
### CD measurements
Circular Dichroism (CD) measurements were performed using a JASCO J-815 CD spectrometer with the following settings; sensitivity: Standard, D.I.T: 0.5 s, bandwidth: 1 nm, scanning speed: 100 nm min\({}^{-1}\), data pitch: 1 nm. Bulk measurements were performed on 1 cm x 1 cm x 0.1 cm (l x w x t) quartz slides covered with ITO and 10 nm of gold. Solid state measurements were performed using a solid-state holder with a circular opening of 12 mm\({}^{2}\) (d=4 mm). The presented CD spectra are the average of two spectra measured from the front and the back to limit the contribution of linear dichroism and linear birefringence to the emergent CD.
### TEM
For TEM sample preparation a 200-mesh copper grid covered with a Quantifoil R 2/2 holey carbon film and a 2 nm continuous carbon layer (Quantifoil Micro Tools, GmbH) was surface plasma treated for 5 s using a Cressington 208 carbon coater. Subsequently, 20 \(\upmu\)L of the aqueous sample was drop cast onto the carbon film of the copper grid, followed by drying in air. CryoTEM imaging was carried out on the cryoTITAN (Thermo Fisher, previously FEI), equipped with a field emission gun (FEG) and a 2k \(\times\) 2k Gatan US1000 CCD camera. The microscope was operated at 300 kV acceleration voltage in bright-field TEM mode at a nominal magnification of 5400 \(\times\) with a 1 s image acquisition time. Selected Area Electron Diffraction (SAED) patterns were acquired by operating the TEM at LN2 temperature, in diffraction mode using a camera length of 1.15 m, a selected area aperture of 40 \(\upmu\)m and a 0.25 s exposure time.
### Conduction-AFM
C-AFM was performed with a diamond tip (Apex Sharp) from Adama Innovations, with a nominal tip radius of 10 nm and a spring constant of 2.6 N m\({}^{-1}\). Identification of drop-cast clusters was performed with the AFM optical microscope. The cantilever was gently brought into soft contact with the molecular cluster, and then retracted to break tip-sample contact. We define soft contact as the point of snap-into-contact of the force-distance curve. The voltage of the piezo tube was continuously recorded and used for force conversion. The tip was placed in soft contact mode to scan the local morphology. This ensured a locally pristine morphology to perform IV spectroscopy. Afterward, a larger area was imaged to identify the local area probed during spectroscopy. At each force set-point the tip was allowed to stabilize. Little drift was observed, as imaging the area after IV spectroscopy highlighted the same molecular topographic features. The bias was applied to the sample. A DC offset of up to 10 pA was registered as leakage current from the op-amp and subtracted from the IV spectra. The number of data points and bias sweep rate were registered for each IV curve.
### PeakForce AFM
A Bruker Edge AFM was used with PeakForce Tapping feedback. AFM silicon nitride PeakForce-HIRS-F-B tips (Bruker) with a spring constant of 0.12 N m\({}^{-1}\) and a nominal tip radius of 2 nm were used. Scan parameters were optimized using Bruker ScanAsyst(tm). Before each measurement, the drift was nullified by scanning for an hour prior to acquiring high-resolution images at the nanoscale. All data were plane fitted to subtract the background offset using Gwyddion Software ([http://gwyddion.net](http://gwyddion.net)).
### Electrostatic force microscopy
A Dimension Veeco Multimode running a NanoScope III controller was used for EFM. Pt-coated Si tips were used with a nominal tip radius of around 20 nm. An alternating bias voltage of 6 V was applied to the tip at the eigenfrequency of the cantilever. The tip was retracted by 5 nm to measure the CPD in lift mode. In the work of Ghosh _et al.[35]_ the measurement was performed without actively grounding the metallic film. We grounded the substrate via the mica bottom, similar to the approach in reference [35]. The magnetic field was applied via an external permanent magnet placed directly below the sample (\(B=\)500 mT), as measured with a Gauss probe. All data were plane fitted to subtract the background offset using Gwyddion Software ([http://gwyddion.net](http://gwyddion.net)).
### DFT calculations
Density functional theory simulations were performed using VASP. The PBE exchange-correlation functional was used in conjunction with the projector augmented wave approach. All structures were optimized to their local minima using the conjugate gradient algorithm as implemented in VASP. The cut-off energy for the plane-wave basis set is 400 eV. The Brillouin zone was sampled using a 1x1x1 Monkhorst-Pack grid (Gamma-point only). Electron smearing was employed using Methfessel-Paxton smearing with a smearing width (\(\sigma\)) of 0.0005 eV. To avoid the spurious interaction of neighboring super cells, the unit cell embedding the oligomers was constructed such that a vacuum layer of at least 10 Å is present in each Cartesian direction. It was explicitly verified that the electron density and potential approach zero at the edges of the unit cell.
## Acknowledgement
We thank Omur Goknicar for assistance with the AFM experiments and Riccardo Ollearo for fabrication of the ITO/Au substrates. We thank Mark de Jong for support with the Bruker PeakForce AFM. We thank Chidambar Kulkarni for the synthesis of CBI-GCH supramolecular polymers.
## Supplementary
### S1 - AFM image of micrometer scale crystal
### S2 - Crystal clustering
Figure 5: **Structure and CD of CBI-GCH**
Figure 6: **Optical and AFM image of clustered CBI-GCH crystals drop-casted on Sio2/Si wafer**
### S4 - PeakForce spectroscopy
Force-distance spectroscopy was performed on a CBI-GCH crystal deposited on a SiO2/Si wafer. The deflection curve shows the snap-to-contact induced by the attractive forces (light green). When the tip is retracted, a significant adhesion force is noted across more than 100 nm of tip retraction. Clearly, a component of the CBI-GCH crystals is sticking to the tip, preventing snap-from-contact. Only at very large distances, in excess of 120 nm, is the pulling force released and the tip oscillates (ringing). We interpret this behaviour as alkyl chains sticking to the AFM tip after contact and adhering during retraction.
Figure 7: **AFM topography of micrometer scale crystals deposited on SiO2/Si wafer**
### S5 - TEM
### S6 - Damage to aggregates
We observed that with tapping AFM, contact AFM and even PeakForce AFM, damage was easily incurred on the supramolecular structures. Figure 10 shows topographic images of such damage. Figure 10a shows indentation areas of the diamond tip, gently pressed into the molecular cluster with increasing normal force. The triangular shape of the tip can be observed. Figure 10b shows PeakForce tapping images. We used a Bruker Edge with ultra-sharp tips with a radius of \(2\,\mathrm{n}\mathrm{m}\). PeakForce tapping reduces lateral forces by retracting the tip at each pixel before moving down towards the next pixel during
Figure 8: **PeakForce AFM force-distance spectroscopy**
Figure 9: **TEM imaging** (**a**) Real space image of a crystal deposited on a lacey carbon TEM grid. The black circle indicates the selected area diffraction (SAED) location. (**b**) SAED showing a high quality diffraction pattern. (**c**) Simulated FFT of a herringbone structure
imaging. The yellow arrow shows the complete disappearance of parts of the crystal, simply by imaging the local area again. Hence, the extremely soft nature of these materials makes application of scanning probe microscopy notoriously difficult. Figure 10c shows an Amplitude Modulation Tapping Mode image. By gently increasing the normal force, the tip was able to cut through several layers of the crystal, highlighted with the blue lines. The stacked nature is evident from the cross-section taken at the black line.
### S7 - F-z calibration of C-AFM tip force
The force applied by the C-AFM tip was determined following the standard procedure [44]. An F-z curve was taken by placing the tip on the surface, with no molecules present. The tip was approached and retracted and the voltage of the tip deflection was registered, see Figure 11. The voltage offset was nullified. The force constant was taken from the manufacturer specification (AdamaProbes). Figure 11b shows the linear relationship between the applied tip force and the registered voltage.
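The conversion implied here is the usual one (our own summary of the standard procedure [44], with the deflection sensitivity as an assumed calibration input):

\[
F_{n}=k\cdot s\cdot\Delta V_{\mathrm{defl}},
\]

where \(k=2.6\ \mathrm{N\,m^{-1}}\) is the cantilever spring constant, \(s\) (in nm/V) the deflection sensitivity obtained from the slope of the contact part of the F-z curve, and \(\Delta V_{\mathrm{defl}}\) the measured deflection voltage.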
Figure 10: **AFM induced damage to CBI-GCH aggregates.** (**a**) Normal force induced tip indentation. (**b**) PeakForce induced damage to the crystal evident by sequential imaging of the same area. (**c**) CBI-GCH crystal showing local damage incurred by Amplitude Modulation Tapping Mode between the blue lines, by gently increasing the normal force. The black lines shows a cross-section line highlighting the stacked nature of the crystals.
### S8 - C-AFM tip state evaluation
Metal-coated Si tips, such as those coated with PtIr, often show continuous degradation over tens of IV spectra due to mechanical removal or deformation of the metallic film [45]. At bias voltages exceeding several volts, the electric field can easily reach as high as \(10^{8}\,\mathrm{V}\,\mathrm{m}^{-1}\), which can introduce non-reversible tip state alteration and thus continuously change the contact resistance. It is hence important to compare the conductivity of the tip before the experiment and especially after, to exclude artefacts. To reduce the effect of tip degradation, doped single-crystal diamond tips (Adama Innovations) have been used throughout this work, which show no degradation even after prolonged AFM scanning [46]. Furthermore, they enable Ohmic contact and withstand high bias, up to 10 V in the setup used, without notable degradation.
We observe an intrinsic conductive state of the Adama Innovations diamond tip with an Ohmic contact resistance of around \(2.3\,\mathrm{M}\Omega\), see Figure 12a, measured by pressing the diamond tip directly on gold. The IV curves were swept 20 times, with no noticeable spikes or anomalies in the spectroscopy. Furthermore, we observe no change in tip conductivity during experiments. We note a small current offset below \(100\,\mathrm{pA}\), because of leakage current from the current-to-voltage amplifier.
However, adsorption of molecules on the tip is difficult to rule out. For example, it was observed that after repeating several IV spectroscopy sequences on the molecular bundles and subsequently placing the tip on the gold surface, no direct Ohmic behavior could be established. A clear example of this is given in Figure 12b: the first IV curves (green) show non-linear IV behavior, even though the tip is directly contacted on gold with no large molecular aggregates present. It could be argued
Figure 11: **F-z curve conversion of force.** (**a**) F-z curve with approach and retract curves. (**b**) Converted F-z curve used to derive the tip force in nN.
that previously taken IV data on molecular aggregates have resulted in the transfer of molecules to the tip, which act as a tunneling barrier when contacting gold. Subsequent sequences of spectroscopy show gradual degradation of the tunnel gap (lighter green), hinting at sequential desorption of molecules, likely due to very large electric fields of \(10^{8}-10^{9}\) V/m. A sudden change to Ohmic behavior was established after several IV spectroscopy cycles. This behavior shows that the intrinsic Ohmic conductivity of the tip was indeed unaltered, but the ill-controllable adhesion of molecules is problematic and we conjecture it can be a source of wrongly reported absolute SP degrees in the literature.
### S9 - Resonant peaks
### S10 - Spin-polarisation model
Effectively, the current rise and fall through the Fano resonance can be considered as an induction current "flow" through the continuum, depending on the "flow" direction of the (spin-polarised) current, schematically illustrated in **Fig. 14**. For a chiral molecule, the dipole is dependent on the handedness (**Fig. 14A, B**). The total electric field across the molecule is the sum of the dipole/polarisation and the externally applied electric field. Adsorption of the molecules on a metallic surface will induce screening and present an overall net dipole across the molecular layer,
Figure 12: **IV spectroscopy of a molecule-coated tip on a gold substrate.** (**a**) Ohmic contact resistance of the diamond tip on gold. (**b**) Observation of a gradual change in IV behaviour (green) upon sequential bias voltage ramping, until Ohmic resistance (red) is achieved. We conjecture tip contamination with molecular clusters.
Figure 14: **Schematic model of the relation between chirality (dipole) and preferential spin-polarisation.** (**A**, **B**) A chiral molecule with opposite handedness determines the relative orientation of the dipole. (**C**, **D**) Adsorption of the molecule on a metallic substrate introduces screening and the formation of a polarisation vector, depending on the molecule handedness. (**E**, **F**) Charge localisation on the redox sites, with aligned spins, coupled to the continuum. Depending on the orientation of the localised spin with respect to the polarisation, an induction current is generated upon delocalisation of the trapped charge. The inductive current is either additive or subtractive, driven by the polarisation vector. (**G**) The sum of the inductive current on the rise and fall slopes of the Fano resonance determines the slope of the current rise. A different slope is expected for a preferential spin induced inductive current from the localised state coupling to the continuum.
Figure 13: **IV spectroscopy of a \(35\,\mathrm{nm}\) thick CBI-GCH crystal.** Black arrows highlight peaks. The tip force is set to \(12\,\mathrm{nN}\).
which differs for the two enantiomers (**Fig. 14C, D**). Delocalisation of the electrons and their aligned spins on the redox sites will lead to an induction current upon spin-order loss (**Fig. 14E, F**). The magnitude of the inductive current is related to the asymmetry of the Fano resonance, with the polarisation vector either additive or subtractive to the inductive current direction. Consecutive Fano resonances (**Fig. 14G**) will push the current density either up or down, and this is observed in experiments as a change in resistance of the molecular aggregate. The presence and the role of a Fano resonant state, a localised electron interfering with a continuum of electrons, is crucial for boosting the spin-induced charge transport in the CBI-GCH system and possibly more generally for redox-group-containing chiral supramolecular polymers.
|
2309.02698 | Quantile and pseudo-Huber Tensor Decomposition | This paper studies the computational and statistical aspects of quantile and
pseudo-Huber tensor decomposition. The integrated investigation of
computational and statistical issues of robust tensor decomposition poses
challenges due to the non-smooth loss functions. We propose a projected
sub-gradient descent algorithm for tensor decomposition, equipped with either
the pseudo-Huber loss or the quantile loss. In the presence of both
heavy-tailed noise and Huber's contamination error, we demonstrate that our
algorithm exhibits a so-called phenomenon of two-phase convergence with a
carefully chosen step size schedule. The algorithm converges linearly and
delivers an estimator that is statistically optimal with respect to both the
heavy-tailed noise and arbitrary corruptions. Interestingly, our results
achieve the first minimax optimal rates under Huber's contamination model for
noisy tensor decomposition. Compared with existing literature, quantile tensor
decomposition removes the requirement of specifying a sparsity level in
advance, making it more flexible for practical use. We also demonstrate the
effectiveness of our algorithms in the presence of missing values. Our methods
are subsequently applied to the food balance dataset and the international
trade flow dataset, both of which yield intriguing findings. | Yinan Shen, Dong Xia | 2023-09-06T04:21:50Z | http://arxiv.org/abs/2309.02698v1 | # Quantile and pseudo-Huber Tensor Decomposition
###### Abstract
This paper studies the computational and statistical aspects of quantile and pseudo-Huber tensor decomposition. The integrated investigation of computational and statistical issues of robust tensor decomposition poses challenges due to the non-smooth loss functions. We propose a projected sub-gradient descent algorithm for tensor decomposition, equipped with either the pseudo-Huber loss or the quantile loss. In the presence of both heavy-tailed noise and Huber's contamination error, we demonstrate that our algorithm exhibits a so-called phenomenon of two-phase convergence with a carefully chosen step size schedule. The algorithm converges linearly and delivers an estimator that is statistically optimal with respect to both the heavy-tailed noise and arbitrary corruptions. Interestingly, our results achieve the first minimax optimal rates under Huber's contamination model for noisy tensor decomposition. Compared with existing literature, quantile tensor decomposition removes the requirement of specifying a sparsity level in advance, making it more flexible for practical use. We also demonstrate the effectiveness of our algorithms in the presence of missing values. Our methods are subsequently applied to the food balance dataset and the international trade flow dataset, both of which yield intriguing findings.
## 1 Introduction
Data in the form of multi-dimensional arrays, commonly referred to as tensors, have become increasingly prevalent in the era of big data. For instance, the monthly international trade flow (Cai et al., 2022) of commodities among countries is representable by a \(47(\text{countries})\times 47(\text{countries})\times 97(\text{ commodities})\times 12(\text{months})\) fourth-order tensor; the food balance data1 describing the detailed report on the food supply of countries consist of several third-order tensors; the comprehensive climate dataset (CCDS, Chen et al. (2020)) - a collection of climate records of North America can be represented as a
\(125(\text{locations})\times 16(\text{variables})\times 156(\text{time points})\) third-order tensor. Tensor decomposition aims to find a low-rank approximation of tensorial data, which is a powerful tool of extracting hidden signal of low-dimensional structure. A tensor is considered low-rank if it can be expressed as the sum of a few rank-one tensors. A formal definition can be found in Section 2. Tensor decomposition has a variety of applications, including tensor denoising and dimension reduction (Lu et al., 2016; Zhang and Xia, 2018), community detection in hypergraph networks (Ke et al., 2019), node embedding in multi-layer networks (Jing et al., 2021; Cai et al., 2022b), imputing missing data through tensor completion (Zhang, 2019; Cai et al., 2019; Xia et al., 2021), clustering (Sun and Li, 2019; Wang and Li, 2020), and link prediction in general higher-order networks (Lyu et al., 2023), among others.
While a tensor can be viewed as a natural extension of a matrix into a multi-dimensional space, finding a "good" low-rank approximation of a tensor is fundamentally more challenging than finding the best low-rank approximation of a matrix. For any given matrix, its optimal low-rank approximation can be obtained through a singular value decomposition (SVD, Golub and Van Loan (2013)), a process facilitated by highly efficient algorithms. In stark contrast, our understanding of the best low-rank approximation of a tensor is relatively limited (Kolda and Bader, 2009). Furthermore, computing the optimal low-rank approximation of a tensor is generally an NP-hard problem (Hillar and Lim, 2013). Therefore, computational feasibility becomes a crucial factor when we design statistical methods for tensor data analysis, even including the convex ones. To date, a variety of polynomial-time algorithms have been developed to find a good low-rank approximation of a tensor in Euclidean distance, such as the Frobenius norm. These algorithms can be locally or even globally optimal under certain statistical models, provided they are well-initialized. For example, De Lathauwer et al. (2000) introduced a higher-order singular value decomposition (HOSVD) method for tensor low-rank approximation which solely relies on multiple SVDs of rectangular matrices. They also found that an iterative refinement algorithm, known as Higher-Order Orthogonal Iterations (HOOI), can often enhance the performance in tensor low-rank approximation when applied after HOSVD. The sub-Gaussian tensor PCA model (also referred to as tensor SVD, as defined in Section 2) is a useful tool for studying the theoretical performance of tensor low-rank approximation algorithms. Liu et al. (2022), Xia and Zhou (2019), Zhang and Xia (2018) and Xia et al. (2021) examined HOSVD and HOOI under sub-Gaussian noise, showing that while HOSVD is generally sub-optimal, HOOI achieves minimax optimality. A Burer-Monteiro type gradient descent algorithm, proposed by Han et al. (2022), also achieves a minimax optimal rate under sub-Gaussian noise for tensor decomposition. Cai et al. (2019) studied a vanilla gradient descent algorithm and derived sharp error rates not only in Frobenius norm but also in sup-norm. A Riemannian gradient descent algorithm was also shown to be minimax optimal under sub-Gaussian
noise by Cai et al. (2022b). More recently, Lyu et al. (2023) investigated the Grassmannian gradient descent algorithm and demonstrated its minimax optimality under sub-Gaussian noise.
The technological revolution of recent decades has enabled the collection of vast amounts of information across a wide range of domains. The inherent heterogeneity of these domains can introduce outliers and heavy-tailed noise (Crovella et al., 1998; Rachev, 2003; Roberts et al., 2015; Sun et al., 2020) into tensorial datasets. Existing tensor decomposition algorithms typically seek a tensor low-rank approximation in the Frobenius norm, utilizing squared error as the loss function. However, the square loss is sensitive to outliers and heavy-tailed noise, which can render these algorithms unreliable in many real-world applications. For example, when analyzing international trade flow data, a central objective is to study the economic ties between countries and their respective positions in the global supply chain. This structured and interconnected nature of global industries can often be encapsulated by a handful of multi-way principal components. However, outliers may occur if two countries have a substantial amount of trade flow simply due to geographical proximity or because one country is a primary supplier of a particular natural resource. Although such outliers are relatively rare in tensorial data, they can significantly skew the results of tensor low-rank approximation since they do not accurately reflect the countries' positions in the global supply chain. Figure 1 highlights the advantage of using absolute loss in handling outliers. The figure focuses on the trading flow among approximately 50 countries, specifically for the product 'Petroleum oils and oils obtained from bituminous minerals; crude', from 2018 to 2022. The top two sub-figures represent the node embedding of countries. Red triangles represent (net) importers and blue circles represent (net) exporters. A country is considered a (net) importer if it imports more than it exports, as is the case with the U.S.A. Countries such as Saudi Arabia, Canada, and the Russian Federation, which export significant amounts, dominate the principal components in tensor decomposition using square loss. Meanwhile, all other countries cluster together, as shown in the top-left sub-figure. The top-right figure represents the node embedding from tensor decomposition using absolute loss. This is less sensitive to outlier entries caused by those three countries, leading to a more dispersed but better clustered embedding. The bottom two sub-figures display the embedding results of months, i.e., the third dimension of the tensor data. Intuitively, we would expect similar trading patterns for months within the same year. This is indeed observed in the bottom-right sub-figure, which is produced by absolute-loss tensor decomposition. In contrast, clusters are much less clear based on node embedding from the square-loss tensor decomposition, as shown in the bottom-left sub-figure. It's important to note that the trade amount in the two months 202209 and 202210 is significantly smaller, likely due to incomplete data, causing outlier slices in the tensor data. The bottom-right sub-figure illustrates that absolute loss is insensitive to these outlier points.
Figure 1: International trade flow data: node embedding of countries and months from estimated principal components by tensor decomposition. Left sub-figures: square-loss tensor decomposition; right sub-figures: absolute-loss tensor decomposition.
The development of statistical methods that are robust to outliers and heavy-tailed noise is garnering increasing significance in today's data-centric world. A variety of these robust methods have been proposed, including the median of means (Minsker, 2015; Lecue and Lerasle, 2020; Lugosi and Mendelson, 2019; Degersin, 2020), Catoni's method (Catoni, 2016; Minsker, 2018), and approaches involving trimming or truncation (Fan et al., 2016; Oliveira and Orenstein, 2019; Lugosi and Mendelson, 2021). These methods have proven useful for robust linear regression, mean, and covariance estimation. The issue of robustness against outliers has frequently been examined in theory (Depersin and Lecue, 2022; Dalalyan and Minasyan, 2022; Shen et al., 2023; Chinot et al., 2020; Thompson, 2020; Minsker et al., 2022), often resorting to Huber's contamination model (Huber, 1964). This model posits that a fraction \(\alpha\in(0,1)\) of the total samples are corrupted in an arbitrary manner. According to the findings of Chen et al. (2016, 2018), the minimax optimal error rate for several problems is directly proportional to \(\alpha\) under Huber's model. Robust methods for matrix data analysis have also been extensively studied in the literature. The seminal work Candes et al. (2011) examines matrix decomposition in the presence of sparse outliers, a problem known as robust PCA. Several studies Candes et al. (2011); Chandrasekaran et al. (2011); Hsu et al. (2011); Netrapalli et al. (2014); Yi et al. (2016) have demonstrated the possibility of precisely recovering a low-rank matrix corrupted by sparse outliers under specific identifiability conditions. Further, Agarwal et al. (2012) and Klopp et al. (2017) explored the least squares estimator, employing a combination of nuclear norm and \(\ell_{1}\)-norm penalties imposing no assumptions over locations of the support, with additional sub-Gaussian noise. Their derived error rates, proportional to \(\alpha^{1/2}\), do not disappear even in the absence of the sub-Gaussian noise. This rate is optimal under arbitrary corruption but sub-optimal under Huber's contamination model where the optimal dependence on the corruption ratio is \(\alpha\). A similar sub-optimal rate was exhibited by the non-convex method introduced by Cai et al. (2022) and the convex approach based on sorted-Huber loss proposed by Thompson (2020), both with regard to the proportion of corruption. A different perspective was offered by Chen et al. (2021), who presented an alternating minimization algorithm that could attain an optimal error rate under strict conditions: uniformly random location of the outliers, random signs of the outliers, and sub-Gaussian noise. Heavy-tailed noise, a common source of outliers, can be treated as a combination of bounded noise and sparse corruption. This approach is generally sub-optimal, as noted by Cai et al. (2022). Fortunately, heavy-tailed noise can usually be handled by robust loss functions including quantile loss, Huber loss, and the absolute loss. For instance, Elsener and van de Geer (2018); Alquier et al. (2019); Chinot et al. (2020) showed that statistically optimal low-rank matrix estimators against heavy-tailed noise can be attained by utilizing those robust loss functions. However, all of these methods are based on convex relaxations and the computational aspect of the proposed estimators have not
been thoroughly examined. It is important to bear in mind that the optimization process can be quite challenging due to the non-smooth nature of the aforementioned robust loss functions, even when the objective function is convex.
The integrated investigation of the computational and statistical aspects of robust low-rank methods is a somewhat under-explored area. Both Charisopoulos et al. (2021) and Tong et al. (2021) examined the sub-gradient descent algorithm for matrix decomposition, employing robust loss functions. They demonstrated that the algorithm could achieve linear convergence with a schedule of decaying step sizes. However, the error rates derived from their research are generally sub-optimal, even under Gaussian noise conditions. In their respective works, Cai et al. (2022b) and Dong et al. (2022) adopted the square loss and introduced a sparse tensor to accommodate potential outliers resulting from heavy-tailed noise. Although this method ensures rapid computation, it is generally sub-optimal under standard heavy-tailed noise assumptions. The study by Shen et al. (2023) revealed that the sub-gradient descent algorithm could be both computationally efficient and statistically optimal for low-rank linear regression under heavy-tailed noise. They observed an intriguing phenomenon termed as "two-phase convergence". However, it is important to note that the more technically demanding robust tensor decomposition differs significantly from low-rank linear regression, rendering the results of Shen et al. (2023) non-transferable. Auddy and Yuan (2022) proposed a one-step power iteration algorithm with Catoni-type initialization for rank-one tensor decomposition under heavy-tailed noise. This method, which only necessitates a finite second moment condition, achieves a near-optimal error rate up to logarithmic factors. The bound remains valid with a probability lower bounded by \(1-\Omega(\log^{-1}d)\) for a tensor of size \(d\times d\cdots\times d\). However, a strong signal strength condition is also vital for this method. Huber matrix completion was studied in Wang and Fan (2022) through the lens of leave-one-out analysis. Due to technical constraints, their analysis framework is not applicable to tensor decomposition, and a significantly large truncate threshold is necessitated by Wang and Fan (2022). How the methods proposed by Auddy and Yuan (2022) and Wang and Fan (2022) behave in the presence of arbitrary outliers remains unclear. Robust tensor decomposition in the presence of missing values presents even greater challenges. Shrinkage-based approaches for the matrix case have been studied by Minsker (2018) and Fan et al. (2016). While their rates are optimal with respect to the dimension and sample size under a minimal second-order moment noise condition, their derived rates are not proportional to the noise level. Wang and Fan (2022) extended the leave-one-out analysis to the vanilla sub-gradient descent algorithm for matrix completion under heavy-tailed noise. However, their entry-wise error rate is still sub-optimal, and it remains unclear whether their method is applicable to tensors and with arbitrary corruptions. We believe that this sub-optimality is due to technical reasons. We demonstrate this by showing that a simple sample splitting trick can yield
statistical optimality for both Frobenius-norm and entry-wise error rates, even in the presence of arbitrary corruptions.
In this paper, we develop computationally fast and statistically optimal methods for tensor decomposition, robust to both heavy-tailed noise and sparse arbitrary corruptions. Our contributions are summarized as follows.
1. We propose a tensor decomposition framework that employs quantile loss and pseudo-Huber loss. Existing works in robust tensor decomposition often fall short in terms of algorithmic development, computational guarantees, and statistical optimality. To address this, we introduce a computationally efficient algorithm grounded in Riemannian (sub-)gradient descent. We simultaneously explore computational convergence and statistical performance, demonstrating that our proposed algorithm converges linearly and achieves statistical optimality in handling both heavy-tailed noise and arbitrary corruptions. Unlike previous works (Cai et al., 2022; Dong et al., 2022), our method does not necessitate the specification of a sparsity level in advance. A phenomenon of two-phase convergence is also observed in the proposed algorithms for robust tensor decomposition. We apply our methods to the food balance dataset and international trade flow dataset, both of which yield intriguing findings.
2. Our approach offers several theoretical benefits. We demonstrate that quantile and pseudo-Huber tensor decomposition can achieve statistical optimality under both dense noise and arbitrary corruptions, regardless of whether the noise is sub-Gaussian or heavy-tailed. Existing works often treat sparse corruptions using heavy-tailed distributions, as seen in Cai et al. (2022); Fan et al. (2016); Auddy and Yuan (2022); Wang and Fan (2022). We examine the robustness to sparse corruptions under Huber's contamination model. Even in the presence of both heavy-tailed noise and Huber's contamination, our approach can still deliver a statistically optimal estimator. We are the first to derive the minimax optimal rate of matrix/tensor decomposition under Huber's contamination model. Previously, methods by Agarwal et al. (2012); Klopp et al. (2017); Cai et al. (2022) achieved an error rate proportional to \(\alpha^{1/2}\), where \(\alpha\) is the proportion of contamination under Huber's model. We demonstrate that quantile tensor decomposition achieves an error rate proportional to \(\alpha\), which is minimax optimal under Huber's contamination model. The left sub-figure in Figure 1(a) showcases the achieved error rate by absolute-loss tensor decomposition under Huber's contamination model. It examines both cases of dense Gaussian noise and Student's t noise. The plot reveals a linear pattern between the achieved error and the corruption rate.
3. Robust tensor decomposition poses greater technical challenges than high-dimensional linear regression (Shen et al., 2023). Our key technical contribution lies in demonstrating the
so-called two-phase regularity properties of the absolute loss and pseudo-Huber loss. Particularly noteworthy is the second-phase regularity condition where the size of the projected sub-gradient (namely, the Riemannian sub-gradient of the loss) diminishes as the estimate approaches the true model parameter. We also prove the first-phase regularity condition that was initially conjectured in Charisopoulos et al. (2021). Robust tensor decomposition becomes even more complex in the presence of missing values, where the powerful leave-one-out framework still yields sub-optimal results. We posit that the sub-optimality is caused by technical difficulty, and demonstrate that a simple sample splitting trick can yield a statistically optimal error rate under missing values and in the presence of arbitrary outliers.
## 2 Tensor Decomposition and Robust PCA
We shall write tensors in bold calligraphy font, such as \(\mathbf{\mathcal{C}},\mathbf{\mathcal{M}},\mathbf{\mathcal{T}}\), and write matrices in upper-case bold face, such as \(\mathbf{U},\mathbf{V},\mathbf{W}\). Lower-case bold face letters such as \(\mathbf{u},\mathbf{v},\mathbf{w}\) denote vectors. An \(m\)-th order tensor \(\mathbf{\mathcal{T}}\in\mathbb{R}^{d_{1}\times\cdots\times d_{m}}\) is an \(m\)-dimensional array and \(d_{j}\) is the size in the \(j\)-th dimension. Denote the mode-\(j\) matricization of \(\mathbf{\mathcal{T}}\) as \(\mathfrak{M}_{j}(\mathbf{\mathcal{T}})\in\mathbb{R}^{d_{j}\times d_{j}^{-}}\), where \(d_{j}^{-}:=\prod_{l\neq j}d_{l}\). The mode-\(j\) marginal multiplication between a tensor \(\mathbf{\mathcal{T}}\) and a matrix \(\mathbf{U}^{\top}\in\mathbb{R}^{r_{j}\times d_{j}}\) results in an \(m\)-th order tensor of size \(d_{1}\times\cdots\times d_{j-1}\times r_{j}\times d_{j+1}\times\cdots\times d_{m}\), whose elements are \((\mathbf{\mathcal{T}}\times_{j}\mathbf{U}^{\top})_{i_{1}\cdots i_{j-1}li_{j+1}\cdots i_{m}}:=\)
\(\sum_{i_{j}=1}^{d_{j}}[\mathbf{\mathcal{T}}]_{i_{1}\cdots i_{j-1}i_{j}i_{j+1}\cdots i_{ m}}\mathbf{U}_{i_{j}l}.\) A simple and useful fact is \(\mathfrak{M}_{j}\left(\mathbf{\mathcal{T}}\times_{j}\mathbf{U}^{\top}\right)= \mathbf{U}^{\top}\mathfrak{M}_{j}(\mathbf{\mathcal{T}})\). Unlike matrices, there are multiple definitions of tensor ranks. Throughout this paper, tensor ranks are referred to as the Tucker ranks (Tucker, 1966). The \(m\)-th order tensor \(\mathbf{\mathcal{T}}\) is said to have Tucker rank \(\mathbf{r}:=(r_{1},r_{2},\cdots,r_{m})\) if its mode-\(j\) matricization has rank \(r_{j}\), i.e., \(r_{j}=\text{rank}(\mathfrak{M}_{j}(\mathbf{\mathcal{T}}))\). As a result, \(\mathbf{\mathcal{T}}\) admits the so-called Tucker decomposition \(\mathbf{\mathcal{T}}=\mathbf{\mathcal{C}}\cdot[\![\mathbf{U}_{1},\cdots,\mathbf{U}_{ m}]\!]:=\mathbf{\mathcal{C}}\times_{1}\mathbf{U}_{1}\times_{2}\cdots\times_{m} \mathbf{U}_{m}\) where the core tensor \(\mathbf{\mathcal{C}}\) is of size \(r_{1}\times\cdots\times r_{m}\) and \(\mathbf{U}_{j}\in\mathbb{R}^{d_{j}\times r_{j}}\) has orthonormal columns. Tucker decomposition is conceptually similar to the matrix SVD except that the core tensor is generally not diagonal. Interested readers are suggested to refer to Kolda and Bader (2009); De Silva and Lim (2008); De Lathauwer et al. (2000) for more details about Tucker ranks and Tucker decomposition. Tucker decomposition is well-defined and can be fast computed by HOSVD. For notational convenience, we denote \(d^{*}:=d_{1}\cdots d_{m}\), \(d_{k}^{-}:=d^{*}/d_{k}\), \(r^{*}:=r_{1}\cdots r_{m}\), \(r_{k}^{-}:=r^{*}/r_{k}\) for any \(k\in[m]\). Denote \(\mathbf{r}:=(r_{1},\cdots,r_{m})^{\top}\) and \(\mathbb{M}_{\mathbf{r}}:=\{\mathbf{\mathcal{T}}\in\mathbb{R}^{d_{1}\times\cdots \times d_{m}}:\ \text{rank}(\mathfrak{M}_{k}(\mathbf{\mathcal{T}}))\leq r_{k}\}\) the set of tensors with Tucker rank bounded by \(\mathbf{r}\).
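For concreteness, a minimal numpy sketch of the mode-\(k\) matricization and the mode-\(k\) marginal multiplication is given below; the ordering of the columns of the unfolding is one particular convention and is immaterial for the identities used in this paper.

```python
import numpy as np

def unfold(T, k):
    """Mode-k matricization: rows indexed by the k-th mode, columns by the remaining modes."""
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def mode_multiply(T, U, k):
    """Mode-k product T x_k U^T for U of shape (d_k, r_k); replaces d_k by r_k in the shape."""
    Tk = unfold(T, k)                       # (d_k, d_k^-)
    out = U.T @ Tk                          # (r_k, d_k^-), i.e. M_k(T x_k U^T) = U^T M_k(T)
    new_shape = (U.shape[1],) + tuple(int(s) for s in np.delete(T.shape, k))
    return np.moveaxis(out.reshape(new_shape), 0, k)

T = np.random.randn(4, 5, 6)
U = np.random.randn(5, 2)
print(mode_multiply(T, U, 1).shape)         # (4, 2, 6)
```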
Noisy tensor decomposition is concerned with reconstructing a low-rank tensor from noisy observation. Consider an \(m\)-th order tensor \(\mathbf{\mathcal{A}}\) of size \(d_{1}\times\cdots\times d_{m}\). This could be representative of various types of data, such as international trade flow among countries (Cai et al., 2022; Lyu and Xia, 2023) or a higher-order network (Ke et al., 2019; Jing et al., 2021), among others. The fundamental premise of tensor decomposition is the existence of a low-rank "signal" tensor \(\mathbf{\mathcal{T}}^{*}\) embedded within \(\mathbf{\mathcal{A}}\). Here, \(\mathbf{r}\) represents the Tucker ranks of \(\mathbf{\mathcal{T}}^{*}\), satisfying that \(r_{k}\ll d_{k}\) for all \(k\in[m]\). Throughout this paper, we assume additive noise, leading to a linear model. For more context on tensor decomposition in generalized linear models, please refer to Han et al. (2022); Lyu and Xia (2023); Lyu et al. (2023). With the assumption of additive noise, tensor decomposition strives to find a low-rank approximation for the tensorial data \(\mathbf{\mathcal{A}}\). If the additive noise is sub-Gaussian, the associated model is often referred to as sub-Gaussian tensor PCA (Cai et al., 2022) and the signal tensor can be estimated by the least squares estimator
\[\widehat{\mathbf{\mathcal{T}}}^{\text{\tiny LS}}:=\operatorname*{arg\,min}_{\mathbf{ \mathcal{T}}\in\mathbb{M}_{\mathbf{r}}}\,\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{A}}\| _{\text{F}}^{2}:=\sum_{\omega\in[d_{1}]\times\cdots\times[d_{m}]}\big{(}[\mathbf{ \mathcal{T}}]_{\omega}-[\mathbf{\mathcal{A}}]_{\omega}\big{)}^{2}. \tag{1}\]
The optimization problem involved in (1) is generally NP-hard. Computationally efficient algorithms have been developed to find locally optimal solutions which are statistically optimal under strong signal-to-noise ratio (SNR) conditions. See, e.g., Zhang and Xia (2018); Liu et al. (2022); Cai et al. (2022).
This paper focuses on tensor decomposition in the existence of heavy-tailed noise and arbitrary corruptions/outliers. More specifically, we study the robust tensor PCA model in that the observed tensor data, denoted as \(\mathbf{\mathcal{Y}}\), consists of three underlying parts:
\[\mathbf{\mathcal{Y}}=\mathbf{\mathcal{T}}^{*}+\mathbf{\Xi}+\mathbf{\mathcal{S}}. \tag{2}\]
The signal tensor, represented as \(\mathbf{\mathcal{T}}^{*}\), holds a Tucker rank of \(\mathbf{r}\). The dense noise tensor, \(\mathbf{\Xi}\), potentially contains entries with heavy tails, and \(\mathbf{\mathcal{S}}\) is a sparse tensor that captures arbitrary corruptions or outliers. It's important to note that heavy-tailed noise can result in outliers, and the additional sparse tensor \(\mathbf{\mathcal{S}}\) accommodates Huber's contamination model. It is possible that \(\mathbf{\mathcal{T}}^{*}\) and \(\mathbf{\mathcal{S}}\) may be indistinguishable if \(\mathbf{\mathcal{T}}^{*}\) itself also exhibits sparsity. For identifiability, the incoherent condition introduced by Candes et al. (2011) is often necessary. The set of \(\mu\)-incoherent rank-\(\mathbf{r}\) tensors is denoted by \(\mathbb{M}_{\mathbf{r},\mu}:=\{\mathbf{\mathcal{T}}\in\mathbb{M}_{\mathbf{r}}:\mu( \mathbf{\mathcal{T}})\leq\mu\}\).
**Definition 1**.: _A tensor \(\mathbf{\mathcal{T}}=\mathbf{\mathcal{C}}\cdot\llbracket\mathbf{U}_{1},\ldots,\mathbf{U }_{m}\rrbracket\) with Tucker rank \(\mathbf{r}=(r_{1},\ldots,r_{m})\) is said \(\mu\)-incoherent iff \(\mu(\mathbf{\mathcal{T}}):=\max_{k=1,\ldots,m}\|\mathbf{U}_{k}\|_{2,\infty}^{2} \cdot d_{k}/r_{k}\leq\mu\), or equivalently \(\|\mathbf{U}_{k}\|_{2,\infty}\leq(\mu r_{k}/d_{k})^{1/2}\) for each \(k=1,\ldots,m\)._
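A small numpy sketch of how the incoherence parameter in Definition 1 can be evaluated from the Tucker factors (random orthonormal factors are used purely for illustration):

```python
import numpy as np

def incoherence(factors):
    """mu(T) = max_k ||U_k||_{2,inf}^2 * d_k / r_k for orthonormal factors U_k of shape (d_k, r_k)."""
    mus = []
    for U in factors:
        d_k, r_k = U.shape
        row_norm_sq = np.max(np.sum(U ** 2, axis=1))   # ||U_k||_{2,inf}^2
        mus.append(row_norm_sq * d_k / r_k)
    return max(mus)

# Example with random orthonormal factors (QR of Gaussian matrices).
factors = [np.linalg.qr(np.random.randn(d, 3))[0] for d in (40, 50, 60)]
print(incoherence(factors))
```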
Heavy-tailed noise and outliers can be handled by robust loss functions. In the following sections, we focus on two specific robust loss functions:
1. _Pseudo-Huber loss:_ \(\rho_{H_{p},\delta}(x):=(x^{2}+\delta^{2})^{1/2}\) _for any_ \(x\in\mathbb{R}\) _where_ \(\delta>0\) _is a tuning parameter;_
2. _Quantile loss:_ \(\rho_{Q,\delta}(x):=\delta x\mathbbm{1}(x\geq 0)+(\delta-1)x\mathbbm{1}(x<0)\) _for any_ \(x\in\mathbb{R}\) _with_ \(\delta:=\mathbb{P}(\xi\leq 0)\)_. Without loss of generality, only the case_ \(\delta=1/2\)_, i.e, absolute loss_ \(\rho(x)=|x|\)_, will be specifically studied._
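For reference, a minimal numpy sketch of the two loss functions above and their (sub-)derivatives, which are the only ingredients needed by the algorithms below (the value assigned to the sub-gradient at the kink of the quantile loss is an arbitrary but valid choice):

```python
import numpy as np

def pseudo_huber(x, delta):
    """rho_{Hp,delta}(x) = sqrt(x^2 + delta^2); smooth, with derivative bounded by 1 in magnitude."""
    return np.sqrt(x ** 2 + delta ** 2)

def pseudo_huber_grad(x, delta):
    return x / np.sqrt(x ** 2 + delta ** 2)

def quantile_loss(x, delta=0.5):
    """rho_{Q,delta}(x) = delta*x for x >= 0 and (delta-1)*x for x < 0; delta=0.5 gives |x|/2."""
    return np.where(x >= 0, delta * x, (delta - 1.0) * x)

def quantile_subgrad(x, delta=0.5):
    return np.where(x >= 0, delta, delta - 1.0)   # a valid sub-gradient; the value at x = 0 is arbitrary
```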
A robust low-rank estimator for \(\mathbf{\mathcal{T}}^{*}\) can be achieved through tensor decomposition combined with robust loss functions. More specifically, we define
\[\widehat{\mathbf{\mathcal{T}}}:=\operatorname*{arg\,min}_{\mathbf{\mathcal{T}}\in \mathbb{M}_{\mathbf{r},\mu^{*}}}f(\mathbf{\mathcal{T}})\quad\text{ where }f(\mathbf{\mathcal{T}}):=\sum_{\omega\in[d_{1}]\times \cdots\times[d_{m}]}\rho\big{(}[\mathbf{\mathcal{T}}]_{\omega}-[\mathbf{\mathcal{Y}}] _{\omega}\big{)}. \tag{3}\]
Here, \(\rho(\cdot)\) can represent either the pseudo-Huber or quantile loss and \(\mu^{*}\) denotes incoherence parameter of \(\mathbf{\mathcal{T}}^{*}\). The optimization program involved in equation (3) presents a greater challenge than that in equation (1) due to the often non-smooth nature of robust loss functions. Our aim is to develop a fast converging algorithm capable of finding a local minimizer for equation (3), which is also statistically optimal w.r.t. the heavy-tailed noise and arbitrary corruptions with high probability.
## 3 Pseudo-Huber Tensor Decomposition
In this section, we study tensor decomposition using the pseudo-Huber loss and demonstrate its robustness to heavy-tailed noise. More specifically, suppose the observed tensor \(\mathbf{\mathcal{Y}}=\mathbf{\mathcal{T}}^{*}+\mathbf{\Xi}\) where \(\mathbf{\Xi}\) is a noise tensor whose entries are i.i.d. centered random variables. Denote
\(\rho_{H_{p},\delta}(x):=(x^{2}+\delta^{2})^{1/2}\) the pseudo-Huber loss with a tuning parameter \(\delta>0\). The pseudo-Huber loss is a smooth approximation of the absolute loss and the Huber loss. We estimate \(\mathbf{\mathcal{T}}^{*}\) by solving the following non-convex program:
\[\widehat{\mathbf{\mathcal{T}}}=\operatorname*{arg\,min}_{\mathbf{\mathcal{T}}\in \mathbb{M}_{\mathbf{r},\mu}}\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\right\|_{ \mathrm{H}_{\mathrm{p}}}:=\sum_{\omega\in[d_{1}]\times\cdots\times[d_{m}]} \rho_{H_{p},\delta}\big{(}[\mathbf{\mathcal{T}}]_{\omega}-[\mathbf{\mathcal{Y}}]_{ \omega}\big{)}. \tag{4}\]
Here \(\mu\) is some constant larger than \(\mu^{*}=\mu(\mathbf{\mathcal{T}}^{*})\), i.e., the incoherence parameter of the ground truth. Note that Cambier and Absil (2016) have empirically demonstrated the benefit of the pseudo-Huber loss in matrix completion. We prove that the pseudo-Huber loss is indeed robust to heavy-tailed noise and can deliver a statistically optimal estimator under mild conditions.
### Projected gradient descent
Finding the global minimizer of program (4) is generally NP-hard. We only intend to find a local minimizer which enjoys statistical optimality. The objective function in (4) is convex, but the feasible set is non-convex. Meanwhile, the set of fixed-rank tensors forms a Riemannian manifold. We apply the projected gradient descent algorithm (Chen and Wainwright, 2015) to solve program (4). The vanilla gradient is usually full-rank, rendering the projection step computationally intensive. For computational benefit, we utilize the Riemannian gradient, which is also low-rank. This corresponds to the Riemannian gradient descent algorithm extensively studied in the recent decade. See, e.g., Vandereycken (2013); Cambier and Absil (2016); Wei et al. (2016); Cai et al. (2022b); Shen et al. (2022) and references therein. The details are in Algorithm 1. The algorithm consists of two main steps. First, at the current iterate \(\mathbf{\mathcal{T}}_{l}\), Algorithm 1 moves along the Riemannian gradient, which is the projection of the vanilla gradient onto the tangent space, denoted as \(\mathbb{T}_{l}\), of \(\mathbb{M}_{\mathbf{r}}\) at \(\mathbf{\mathcal{T}}_{l}\). The second step retracts the updated estimate back to the feasible set \(\mathbb{M}_{\mathbf{r}}\). Although the retraction step seems to require the computation of the HOSVD (De Lathauwer et al., 2000) of a \(d_{1}\times\cdots\times d_{m}\) tensor, which would be rather computationally costly, in fact it can be reduced to the HOSVD of a \(2r_{1}\times\cdots\times 2r_{m}\) tensor. For more details of the computational implementation, please refer to Cai et al. (2020, 2022); Shen et al. (2022); Luo and Zhang (2022). Note that Algorithm 1 requires no further steps to ensure incoherence. Instead, we shall prove that the iterates output by Algorithm 1 maintain the incoherence property if equipped with a good initialization.
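To make these two steps concrete, the following numpy sketch implements one update, using a sequentially truncated HOSVD as the retraction; for brevity it steps along the vanilla (entrywise) gradient of the pseudo-Huber objective and lets the retraction absorb the low-rank projection, so it only illustrates the flavor of the update and is not a faithful implementation of Algorithm 1.

```python
import numpy as np

def unfold(T, k):
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def fold(M, k, shape):
    full = (shape[k],) + tuple(int(s) for s in np.delete(shape, k))
    return np.moveaxis(M.reshape(full), 0, k)

def hosvd(T, ranks):
    """Project each unfolding onto its top-r_k left singular subspace (sequentially truncated HOSVD)."""
    out = T.copy()
    for k, r in enumerate(ranks):
        U = np.linalg.svd(unfold(out, k), full_matrices=False)[0][:, :r]
        out = fold(U @ (U.T @ unfold(out, k)), k, out.shape)
    return out

def pseudo_huber_step(T, Y, ranks, eta, delta):
    """One gradient step on sum_w rho_{Hp,delta}([T]_w - [Y]_w), followed by an HOSVD retraction."""
    R = T - Y
    G = R / np.sqrt(R ** 2 + delta ** 2)   # entrywise gradient of the pseudo-Huber objective
    return hosvd(T - eta * G, ranks)
```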
### Algorithm convergence and statistical optimality
Let \(\xi\) be a heavy-tailed random variable denote the entrywise error, i.e., the entries of \(\mathbf{\Xi}\) are i.i.d. and have the same distribution as \(\xi\). Denote \(h_{\xi}(\cdot)\) and \(H_{\xi}(\cdot)\) the density and distribution of \(\xi\), respectively. Pseudo-Huber tensor decomposition requires the following condition of the noise.
**Assumption 1** (Noise condition I).: _There exists an \(\varepsilon>0\) such that \(\gamma:=\left(\mathbb{E}|\xi|^{2+\varepsilon}\right)^{1/(2+\varepsilon)}<+\infty\). The density function \(h_{\xi}(\cdot)\) is zero symmetric2 in that \(h_{\xi}(x)=h_{\xi}(-x)\). There exists \(b_{0}>0\) such that \(h_{\xi}(x)\geq b_{0}^{-1}\) for all \(|x|\leq C_{m,\mu^{*},r^{*}}(6\gamma+\delta),\) where \(C_{m,\mu^{*},r^{*}}:=72(5m+1)^{2}3^{m}\mu^{*m}r^{*}\) and \(\delta\) is the pseudo-Huber loss parameter._
Footnote 2: The zero-symmetric condition can be slightly relaxed to \(\frac{d}{dt}\mathbb{E}\big{(}(t-\xi)^{2}+\delta^{2}\big{)}^{1/2}\big{|}_{t=0}=0\), which is equivalent to \(\int_{-\infty}^{+\infty}s(s^{2}+\delta^{2})^{-1/2}h_{\xi}(s)\,ds=0\).
Basically, Assumption 1 requires a finite \(2+\varepsilon\) moment bound of noise. The lower bound condition of noise density has appeared in existing literature such as Elsener and van de Geer (2018); Alquier et al. (2019); Chinot et al. (2020); Wang et al. (2020); Shen et al. (2023). Note that \(b_{0}\) is only related to the random noise \(\xi\) together with pseudo-Huber parameter \(\delta\). Assumption 1 also implies a lower bound \(b_{0}\geq C_{m,\mu^{*},r^{*}}(6\gamma+\delta)\). By choosing a parameter \(\delta=O(\gamma)\), the relationship \(b_{0}\asymp\mathbb{E}|\xi|\) holds for Gaussian noise, Student's t noise, and zero symmetric Pareto noise, etc.
The convergence dynamic of Algorithm 1 and statistical performance are decided by the schedule of step sizes. They are related to regularity properties of the objective function. Interestingly, the following lemma shows that the pseudo-Huber loss exhibits two-phase regularity properties depending on the closeness between \(\mathbf{\mathcal{T}}\) and the ground truth. Define \(\mathsf{DoF}_{m}:=r_{1}r_{2}\cdots r_{m}+\sum_{j=1}^{m}d_{j}r_{j}\), reflecting the model complexity. Here the sup-norm \(\|\mathbf{\mathcal{A}}\|_{\infty}:=\max_{\omega\in[d_{1}]\times\cdots\times[d_{m} ]}\big{|}[\mathbf{\mathcal{A}}]_{\omega}\big{|}\) and the \((2,\infty)\)-norm of a \(d_{1}\times p_{1}\) matrix is defined by \(\|\mathbf{A}\|_{2,\infty}:=\max_{i\in[d_{1}]}\|\mathbf{e}_{i}^{\top}\mathbf{ A}\|\) where \(\|\cdot\|\) denotes the vector \(\ell_{2}\)-norm and \(\mathbf{e}_{i}\) denotes the \(i\)-th standard basis vector.
**Lemma 1** (Two-phase regularity properties of pseudo-Huber loss).: _Suppose the noise \(\mathbf{\Xi}\) has i.i.d. entries satisfying Assumption 1. There exist absolute constants \(c,c_{1},c_{2}>0\) such that with probability exceeding \(1-c\sum_{k=1}^{m}d_{k}(d_{k}^{-})^{-1-\min\{1,\varepsilon\}}-\exp\left(-\mathsf{DoF}_{m}/2\right)\), the following facts hold._
1. _For all_ \(\mathbf{\mathcal{T}}\in\mathbb{R}^{d_{1}\times\cdots\times d_{m}}\) _and any gradient_ \(\mathbf{\mathcal{G}}\in\partial\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\right\|_{\mathrm{H}_{\mathrm{p}}}\)_, we have_ \(\left\|\mathcal{P}_{\mathbb{T}}(\mathbf{\mathcal{G}})\right\|_{\mathrm{F}}\leq d^{*1/2}\)_. Here_ \(\mathbb{T}\) _denotes the tangent space of_ \(\mathbb{M}_{\mathbf{r}}\) _at the point_ \(\mathbf{\mathcal{T}}\)_. Furthermore, if_ \(\mathbf{\mathcal{T}}\) _is_ \(\mu\)_-incoherent, then for each_ \(k\in[m]\) _and_ \(j\in[d_{k}]\)_,_ \[\left\|\mathfrak{M}_{k}\left(\mathcal{P}_{\mathbb{T}}(\mathbf{\mathcal{G}})\right)\right\|_{2,\infty}\leq\big{(}3\mu r_{k}\cdot d_{k}^{-}\big{)}^{1/2},\] \[\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}})_{j,\cdot}\right\|_{\mathrm{H}_{\mathrm{p}}}-\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}^{*}-\mathbf{\mathcal{Y}})_{j,\cdot}\right\|_{\mathrm{H}_{\mathrm{p}}}\geq\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*})_{j,\cdot}\right\|_{\infty}^{-1}\cdot\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*})_{j,\cdot}\right\|_{\mathrm{F}}^{2}-6d_{k}^{-}\gamma-d_{k}^{-}\delta.\]
2. _For all_ \(\mathbf{\mathcal{T}}\in\mathbb{M}_{\mathbf{r}}\) _and any gradient_ \(\mathbf{\mathcal{G}}\in\partial\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\right\|_{\mathrm{H}_{\mathrm{p}}}\) _with_ \(\mathbf{\mathcal{T}}\) _satisfying_ \(\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{\infty}\leq C_{m,\mu^{*},r^{*}}(6\gamma+\delta)\) _and_ \(\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}}\geq c_{1}b_{0}\sqrt{\mathsf{DoF}_{m}}\)_, we have_ \[\left\|\mathcal{P}_{\mathbb{T}}(\mathbf{\mathcal{G}})\right\|_{\mathrm{F}}\leq c_{2}\delta^{-1}\sqrt{m+1}\cdot\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}},\quad\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\right\|_{\mathrm{H}_{\mathrm{p}}}-\left\|\mathbf{\mathcal{T}}^{*}-\mathbf{\mathcal{Y}}\right\|_{\mathrm{H}_{\mathrm{p}}}\geq(4b_{0})^{-1}\cdot\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}}^{2}.\]
Lemma 1 admits a sharper characterization of the lower bound on the objective function and the upper bound on the Riemannian gradient when \(\mathbf{\mathcal{T}}\) is closer to the ground truth \(\mathbf{\mathcal{T}}^{*}\). The loose bound in (1) is derived directly by a triangle inequality, while the bound in (2) relies on techniques from empirical processes (Boucheron et al., 2013; Ledoux and Talagrand, 1991; Van Der Vaart et al., 1996). The lower bound for a Lipschitz objective function such as \(\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\right\|_{\mathrm{H}_{\mathrm{p}}}-\left\|\mathbf{\mathcal{T}}^{*}-\mathbf{\mathcal{Y}}\right\|_{\mathrm{H}_{\mathrm{p}}}\) is often referred to as the sharpness condition or margin condition in the literature (Elsener and van de Geer, 2018; Charisopoulos et al., 2021). Chinot et al. (2020) generalizes such lower bounds with a _local Bernstein condition_. The upper bound of the Riemannian gradient plays a critical role in the convergence dynamic of Algorithm 1. Note that a trivial upper bound of \(\rho^{\prime}_{H_{p},\delta}(x)\) is one and thus the upper bound of \(\|\mathcal{P}_{\mathbb{T}}(\mathbf{\mathcal{G}})\|_{\mathrm{F}}\) in (1) is just a trivial bound. However, the bound in (2) shows that the Riemannian gradient actually shrinks as \(\mathbf{\mathcal{T}}\) approaches the ground truth. This behavior has been visualized in Figure 1(b). The polynomial probability term \(d_{k}(d_{k}^{-})^{-1-\min\{1,\varepsilon\}}\) appears from bounding the slice sum of the absolute values of the random noise, while the negligible exponential probability term is a by-product of applying empirical process techniques. In the special case \(d_{k}\equiv d\), the probability guarantee of Lemma 1 becomes \(1-\Omega\big{(}md^{-\min\{1,\varepsilon\}-(m-2)}+\exp(-\mathsf{DoF}_{m})\big{)}\). The one-step power iteration method in Auddy and Yuan (2022) only guarantees a log polynomial probability \(1-\Omega(\log^{-1}d)\). Two-phase regularity properties of Lipschitz loss functions have been discovered in robust high-dimensional linear regression (Shen et al., 2022, 2023). We emphasize that establishing the two-phase regularity property for tensor decomposition is much more challenging.
Towards that end, we need to precisely connect the sup-norm error \(\|\boldsymbol{\mathcal{T}}-\boldsymbol{\mathcal{T}}^{*}\|_{\infty}\) and the Frobenius-norm error \(\|\boldsymbol{\mathcal{T}}-\boldsymbol{\mathcal{T}}^{*}\|_{\mathrm{F}}\). Characterizing sup-norm error rate in matrix/tensor decomposition is technically challenging.
Two-phase regularity property from Lemma 1 leads to a two-phase convergence dynamic of Algorithm 1. Basically, phase-one convergence happens when \(\boldsymbol{\mathcal{T}}_{l}\) is far from \(\boldsymbol{\mathcal{T}}^{*}\) in that \(\|\boldsymbol{\mathcal{T}}_{l}-\boldsymbol{\mathcal{T}}^{*}\|_{\mathrm{F}}= \Omega_{m,\mu^{*},r^{*}}\big{(}(\gamma+\delta)\cdot d^{*1/2}\big{)}\). Algorithm 1 then enters phase-two convergence when \(\boldsymbol{\mathcal{T}}_{l}\) gets closer to \(\boldsymbol{\mathcal{T}}^{*}\). The precise convergence dynamic is presented in the following theorem. Note that \(\underline{\lambda}^{*}:=\min_{k\in[m]}\big{\{}\sigma_{r_{k}}\big{(}\mathfrak{ M}_{k}(\boldsymbol{\mathcal{T}}^{*})\big{)}\big{\}}\) is referred to as the signal strength, where \(\sigma_{k}(\cdot)\) denotes the \(k\)-th largest singular value of a matrix.
**Theorem 1**.: _Suppose the noise \(\boldsymbol{\Xi}\) has i.i.d. entries satisfying Assumption 1 and the pseudo-Huber parameter \(\delta\leq\gamma(\log d^{*})^{-1/2}\). There exist absolute constants \(D_{0},c,c^{\prime},c_{1},c_{2}>0\) such that if the initialization satisfies \(d^{*1/2}\left\|\boldsymbol{\mathcal{T}}_{0}-\boldsymbol{\mathcal{T}}^{*} \right\|_{\infty}\leq D_{0}\leq c\underline{\lambda}^{*}\delta^{2}(b_{0}^{2}m ^{4}\mu^{*m}r^{*})^{-1}\) and initial stepsize \(\eta_{0}\in D_{0}\cdot(5m+1)^{-2}(\mu^{*m}r^{*}d^{*})^{-1/2}\cdot[0.125,\ 0.375]\), then, with probability at least \(1-c^{\prime}\sum_{k=1}^{m}d_{k}(d_{k}^{-})^{-1-\min\{1,\varepsilon\}}-\exp{(- \mathsf{D}\mathsf{o}\mathsf{F}_{m}/2)}-c_{2}(d^{*})^{-7}\), Algorithm 1 exhibits the following dynamics:_
1. _in phase one, namely for the_ \(l\)_-th iteration satisfying_ \((1-c_{m,\mu^{*},r^{*}}/32)^{l}\,D_{0}\geq 2c_{m,\mu^{*},r^{*}}^{-1/2}d^{*1/2}(6 \gamma+\delta)\)_, by choosing a stepsize_ \(\eta_{l}=(1-c_{m,\mu^{*},r^{*}}/32)^{l}\,\eta_{0}\) _where_ \(c_{m,\mu^{*},r^{*}}:=(5m+1)^{-2}(3^{m}\mu^{*m}r^{*})^{-1}\)_, we have_ \[\left\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*} \right\|_{\mathrm{F}}\leq(1-c_{m,\mu^{*},r^{*}}/32)^{l+1}\,D_{0},\] \[\left\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*} \right\|_{\infty}\leq\frac{1}{\sqrt{c_{m,\mu^{*},r^{*}}d^{*}}}\cdot(1-c_{m, \mu^{*},r^{*}}/32)^{l+1}\,D_{0};\]
2. _in phase two, namely for the_ \(l\)_-th iteration satisfying_ \(\mathsf{D}\mathsf{o}\mathsf{F}_{m}^{1/2}\cdot b_{0}\leq\left\|\boldsymbol{ \mathcal{T}}_{l}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}\leq 2c_{m,\mu^{*},r^{*}}^{-1/2}d ^{*1/2}(6\gamma+\delta)\)_, by choosing a constant stepsize_ \(\eta_{l}=\eta\) _such that_ \(8c_{1}^{2}(m+1)\eta b_{0}\delta^{-2}\in[1,3]\)_, we have_ \[\left\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\right\|_{ \mathrm{F}}\leq\left(1-\frac{(\delta/b_{0})^{2}}{32c_{1}^{2}(m+1)}\right) \left\|\boldsymbol{\mathcal{T}}_{l}-\boldsymbol{\mathcal{T}}^{*}\right\|_{ \mathrm{F}}.\]
_Therefore, after at most \(\tilde{l}=O\big{(}\log(\underline{\lambda}^{*}/\sqrt{\mu^{m}r^{*}d^{*}}\gamma)+\log(\gamma/b_{0})+\log(d^{*}/\mathsf{DoF}_{m})\big{)}\) iterations, Algorithm 1 outputs an estimator achieving the error rate \(\|\boldsymbol{\mathcal{T}}_{\tilde{l}}-\boldsymbol{\mathcal{T}}^{*}\|_{\mathrm{F}}=O\big{(}\mathsf{DoF}_{m}^{1/2}\cdot b_{0}\big{)}\), which holds with the same aforementioned probability._
Theorem 1 shows that, in both phases, Algorithm 1 enjoys fast linear convergence. Due to technical reasons, the initialization condition is imposed w.r.t. the sup-norm, which immediately implies the Frobenius-norm bound via the simple fact \(\left\|\boldsymbol{\mathcal{A}}\right\|_{\mathrm{F}}\leq d^{*1/2}\left\|\boldsymbol{\mathcal{A}}\right\|_{\infty}\) for any tensor \(\boldsymbol{\mathcal{A}}\) of size \(d_{1}\times\cdots\times d_{m}\). By Theorem 1, the phase-one convergence terminates after at most \(l_{1}=O(\log(\underline{\lambda}^{*}/\sqrt{\mu^{m}r^{*}d^{*}}\gamma))\) iterations and Algorithm 1 reaches an estimate with the Frobenius-norm error rate
\(2c_{m,\mu^{*},r^{*}}^{-1/2}d^{*1/2}(6\gamma+\delta)\) and sup-norm error rate \(\|\boldsymbol{\mathcal{T}}_{l_{1}}-\boldsymbol{\mathcal{T}}^{*}\|_{\infty}\leq 2c_{m,\mu^{*},r^{*}}^{-1}(6\gamma+\delta)\). Geometrically decaying stepsizes are required during phase-one iterations, which is typical in non-smooth optimization (Charisopoulos et al., 2021; Tong et al., 2021; Shen et al., 2023). After \(l_{1}\) iterations, Algorithm 1 enters the second phase and a constant step size suffices to ensure linear convergence. The phase-two convergence terminates after at most \(l_{2}=O(\log(\gamma/b_{0})+\log(d^{*}/\mathsf{DoF}_{m}))\) iterations and Algorithm 1 outputs an estimator with error rate \(\|\boldsymbol{\mathcal{T}}_{l_{1}+l_{2}}-\boldsymbol{\mathcal{T}}^{*}\|_{\mathrm{F}}=O_{p}\big{(}\mathsf{DoF}_{m}^{1/2}\cdot b_{0}\big{)}\). In total, Algorithm 1 converges within a logarithmic-order number of iterations. Note that \(b_{0}\) is of the same scale as \(\mathbb{E}|\xi|\) for many examples such as Gaussian, Student's t, and zero-symmetric Pareto noise, etc. The error rate \(\mathsf{DoF}_{m}^{1/2}\cdot b_{0}\) is minimax optimal (Zhang and Xia, 2018) in terms of the model complexity.
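A tiny sketch of the step size schedule suggested by Theorem 1: geometrically decaying steps in phase one and a constant step in phase two (the decay rate, the switching iteration, and the constants below are placeholders to be tuned in practice).

```python
def stepsize(l, eta0, decay, l_switch, eta_const):
    """eta_l = (1 - decay)^l * eta_0 during phase one, a constant step afterwards."""
    return eta0 * (1.0 - decay) ** l if l < l_switch else eta_const

# Example: decay rate c/32 with c treated as a known constant depending on (m, mu*, r*).
etas = [stepsize(l, eta0=0.1, decay=1.0 / 32, l_switch=50, eta_const=1e-3) for l in range(80)]
```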
We note that our analysis can derive sharp upper bounds for the sup-norm error rate during phase-one convergence. However, the analysis framework cannot work for phase-two convergence even by the leave-one-out technique (Chen et al., 2021, 2021, 2022). This is due to technical issues of treating the derivatives of the pseudo-Huber loss function. The challenge is also observed by the recent work Wang and Fan (2022) on robust matrix completion using the Huber loss. The Huber parameter set by Wang and Fan (2022) is of the order \(\|\boldsymbol{\mathcal{T}}^{*}\|_{\infty}+\gamma d^{1/2}\), while the pseudo-Huber parameter in our algorithm should be of the order \(\gamma\). Our Theorem 1 and Wang and Fan (2022) both yield sub-optimal sup-norm error rates. We believe the sub-optimality is due to a technical issue, because Section 6 will show that a sample splitting trick can produce a nearly optimal sup-norm error rate.
## 4 Quantile Tensor Decomposition
This section addresses the more general setting of robust tensor decomposition that allows both heavy-tailed noise and arbitrary corruptions. More specifically, suppose the observed tensor \(\boldsymbol{\mathcal{Y}}=\boldsymbol{\mathcal{T}}^{*}+\boldsymbol{\Xi}+ \boldsymbol{\mathcal{S}}\) where the noise tensor \(\boldsymbol{\Xi}\) may have heavy tails and the sparse tensor \(\boldsymbol{\mathcal{S}}\) can be arbitrary corruptions. We shall assume that \(\boldsymbol{\mathcal{S}}\) is \(\alpha\)-fraction sparse meaning that \(\boldsymbol{\mathcal{S}}\) has at most \(\alpha\) fraction non-zero entries in each slice. Here \(\alpha\in(0,1)\) is understood as the corruption rate in Huber's contamination model. Basically, for each \(k\in[m]\) and \(j\in[d_{k}]\), one has \(\|\mathbf{e}_{j}^{\top}\mathfrak{M}_{k}(\boldsymbol{\mathcal{S}})\|_{0}\leq \alpha d_{k}^{-}\) where \(\mathbf{e}_{j}\) is the \(j\)-th canonical basis vector whose dimension may vary at different appearances. The \(\alpha\)-fraction sparsity model is also called deterministic sparsity model and has appeared in Hsu et al. (2011); Chandrasekaran et al. (2011); Netrapalli et al. (2014); Chen and Wainwright (2015); Cai et al. (2022). This \(\alpha\)-fraction sparsity model is less stringent than the one considered in Dong et al. (2022) that imposes sparsity assumption on each fibers of \(\boldsymbol{\mathcal{S}}\) and is more general than the random support model studied in existing literature (Candes et al., 2011; Lu et al., 2016; Chen et al., 2021). In contrast, Agarwal et al. (2012); Klopp et al. (2017) impose no assumption
over locations of the support but their derived minimax optimal error rates are not proportional to noise level meaning that the low-rank matrix cannot be exactly recovered even if the noise part \(\mathbf{\Xi}\) is absent. Moreover, the foregoing works mostly focused on the matrix case and it is unclear whether their methods are still applicable for tensors, especially in consideration of the computational aspects of tensor-related problems.
Our approach is based on quantile tensor decomposition, replacing the square loss by quantile loss. Without loss of generality, we only present the method and theory for absolute loss, a special case of quantile loss. Let \(\rho(x)=|x|\) be the absolute loss and we estimate \(\mathbf{\mathcal{T}}^{*}\) by solving the following non-convex program:
\[\widehat{\mathbf{\mathcal{T}}}=\operatorname*{arg\,min}_{\mathbf{\mathcal{T}}\in \mathbb{M}_{\mathrm{F},\mu}}\ \|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\|_{1}:=\sum_{\omega\in[d_{1}]\times \cdots\times[d_{m}]}\big{|}[\mathbf{\mathcal{T}}]_{\omega}-[\mathbf{\mathcal{Y}}]_{ \omega}\big{|}. \tag{5}\]
The absolute loss has been proved statistically robust for high-dimensional linear regression (Elsener and van de Geer, 2018; Moon and Zhou, 2022; Shen et al., 2023). Its theoretical analysis for tensor decomposition is more challenging because we must simultaneously investigate the computational and statistical aspects of the minimizers of (5).
### Projected sub-gradient descent with trimming
Our algorithm for finding local minimizers of (5) is essentially the same as the Riemannian-type Algorithm 1 except that now the sub-gradient is employed because the absolute loss is non-smooth. The algorithm is thus called Riemannian sub-gradient descent, previously studied in Charisopoulos et al. (2021); Shen et al. (2023) for low-rank regression. Here the algorithm is more involved because one needs to ensure the incoherence property. Unlike the pseudo-Huber loss used in Algorithm 1, the absolute loss is non-differentiable so that even the leave-one-out technique cannot help prove the incoherence condition during the phase-two iterations. To enforce incoherence and control the sup-norm error rate, an additional trimming and truncation step is utilized.
For a given tensor \(\mathbf{\mathcal{B}}\) and a truncation threshold \(\tau_{1}\), define the operator \(\mathrm{Trun}_{\tau_{1},\mathbf{\mathcal{B}}}(\cdot):\mathbb{R}^{d_{1}\times\cdots \times d_{m}}\to\mathbb{R}^{d_{1}\times\cdots\times d_{m}}\) as
\[[\mathrm{Trun}_{\tau_{1},\mathbf{\mathcal{B}}}(\mathbf{\mathcal{T}})]_{\omega}:=[\mathbf{ \mathcal{T}}]_{\omega}+\mathrm{sign}([\mathbf{\mathcal{T}}-\mathbf{\mathcal{B}}]_{ \omega})\cdot\min\big{\{}0,\tau_{1}-|[\mathbf{\mathcal{T}}-\mathbf{\mathcal{B}}]_{ \omega}|\big{\}}. \tag{6}\]
The trimming operator (Cai et al., 2022b,c) is defined similarly. For any \(\tau_{2}>0\), define
\[\big{[}\mathrm{Trim}_{\tau_{2}}(\mathbf{\mathcal{T}})\big{]}_{\omega}:=[\mathbf{ \mathcal{T}}]_{\omega}+\mathrm{sign}\left([\mathbf{\mathcal{T}}]_{\omega}\right) \cdot\min\Big{\{}0,(\tau_{2}/d^{*})^{1/2}\left\|\mathbf{\mathcal{T}}\right\|_{ \mathrm{F}}-|[\mathbf{\mathcal{T}}]_{\omega}|\Big{\}}. \tag{7}\]
The truncation operation ensures a uniform upper bound of \(\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\|_{\infty}\) during phase-two iterations. The parameter \(\tau_{1}\) is chosen such that \(\tau_{1}=\Omega\big{(}\|\mathbf{\mathcal{T}}_{l_{1}}-\mathbf{\mathcal{T}}^{*}\|_{ \infty}\big{)}\) w.h.p. where \(\mathbf{\mathcal{T}}_{l_{1}}\) is the output
after phase-one iterations. The trimming operator aims to maintain the incoherence property and the parameter \(\tau_{2}\) can be set at the level \(\mu^{*m}r^{*}\). The detailed implementations can be found in Algorithm 2. Practical guidelines to the selection of \(\tau_{1}\) and \(\tau_{2}\) shall be discussed in Section 5. Compared to existing algorithms in the literature (Chen et al., 2021; Dong et al., 2022; Cai et al., 2022b), our approach does not require any robustness parameters such as the sparsity level.
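Both operators reduce to entrywise clipping; the following minimal numpy sketch records this equivalent form, which is convenient for implementation.

```python
import numpy as np

def truncate(T, B, tau1):
    """Eq. (6): clip each entry of T to lie within tau1 of the reference tensor B."""
    return B + np.clip(T - B, -tau1, tau1)

def trim(T, tau2):
    """Eq. (7): clip each entry of T to magnitude at most sqrt(tau2 / d*) * ||T||_F."""
    bound = np.sqrt(tau2 / T.size) * np.linalg.norm(T)
    return np.clip(T, -bound, bound)
```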
```
Input: observations \(\boldsymbol{\mathcal{Y}}\), max iterations \(l_{\max}\), step sizes \(\{\eta_{l}\}_{l=0}^{l_{\max}}\), parameters \(\tau_{1},\tau_{2}\).
Initialization: \(\boldsymbol{\mathcal{T}}_{0}\in\mathbb{M}_{\mathbf{r}}\)
for \(l=0,\ldots,l_{\max}\) do
    Choose a vanilla sub-gradient: \(\boldsymbol{\mathcal{G}}_{l}\in\partial\|\boldsymbol{\mathcal{T}}_{l}-\boldsymbol{\mathcal{Y}}\|_{1}\)
    Compute the Riemannian sub-gradient: \(\widetilde{\boldsymbol{\mathcal{G}}}_{l}=\mathcal{P}_{\mathbb{T}_{l}}(\boldsymbol{\mathcal{G}}_{l})\)
    Retraction to \(\mathbb{M}_{\mathbf{r}}\):
        \(\boldsymbol{\mathcal{T}}_{l+1}=\mathrm{HOSVD}_{\mathbf{r}}(\boldsymbol{\mathcal{T}}_{l}-\eta_{l}\widetilde{\boldsymbol{\mathcal{G}}}_{l})\)    if in phase one
        \(\boldsymbol{\mathcal{T}}_{l+1}=\mathrm{HOSVD}_{\mathbf{r}}\big(\mathrm{Trim}_{\tau_{2}}(\mathrm{Trun}_{\tau_{1},\boldsymbol{\mathcal{T}}_{l_{1}}}(\boldsymbol{\mathcal{T}}_{l}-\eta\widetilde{\boldsymbol{\mathcal{G}}}_{l}))\big)\)    if in phase two
    where \(\boldsymbol{\mathcal{T}}_{l_{1}}\) is the phase-one output and \(\mathrm{Trun}_{\tau_{1},\boldsymbol{\mathcal{T}}_{l_{1}}}(\cdot)\), \(\mathrm{Trim}_{\tau_{2}}(\cdot)\) are defined in (6) and (7), respectively.
end for
Output: \(\widehat{\boldsymbol{\mathcal{T}}}=\boldsymbol{\mathcal{T}}_{l_{\max}}\)
```
**Algorithm 2** Riemannian Sub-gradient Descent with Trimming
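For illustration, a compact numpy sketch of the main loop under simplifying assumptions: the vanilla sign sub-gradient is used in place of the Riemannian sub-gradient, a sequentially truncated HOSVD serves as the retraction, and the step sizes, \(\tau_{1}\), \(\tau_{2}\) and the phase-switch iteration are taken as given. It mirrors the structure of Algorithm 2 but is not a faithful reproduction of it.

```python
import numpy as np

def unfold(T, k):
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def fold(M, k, shape):
    return np.moveaxis(M.reshape((shape[k],) + tuple(int(s) for s in np.delete(shape, k))), 0, k)

def hosvd(T, ranks):
    """Sequentially truncated HOSVD used as a simple retraction onto low-Tucker-rank tensors."""
    out = T.copy()
    for k, r in enumerate(ranks):
        U = np.linalg.svd(unfold(out, k), full_matrices=False)[0][:, :r]
        out = fold(U @ (U.T @ unfold(out, k)), k, out.shape)
    return out

def robust_decomposition(Y, ranks, T0, etas, l_switch, tau1, tau2):
    """Sub-gradient descent for min ||T - Y||_1 over low-rank T, with phase-two truncation and trimming.
    Assumes l_switch >= 1 so that a phase-one output is available before phase two starts."""
    T, T_phase_one = T0.copy(), None
    for l, eta in enumerate(etas):
        Z = T - eta * np.sign(T - Y)                 # step along a sub-gradient of the absolute loss
        if l >= l_switch:                            # phase two: truncate towards the phase-one output, then trim
            Z = T_phase_one + np.clip(Z - T_phase_one, -tau1, tau1)
            bound = np.sqrt(tau2 / Z.size) * np.linalg.norm(Z)
            Z = np.clip(Z, -bound, bound)
        T = hosvd(Z, ranks)
        if l == l_switch - 1:                        # remember the phase-one output for the truncation step
            T_phase_one = T.copy()
    return T
```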
### Algorithm convergence and error bound
Assume that the noise tensor \(\boldsymbol{\Xi}\) has i.i.d. entries whose density and distribution functions are denoted as \(h_{\xi}(\cdot)\) and \(H_{\xi}(\cdot)\), respectively. It turns out that the absolute loss requires a slightly different condition on the noise, detailed in the following assumption. Here the tensor condition number \(\kappa\) is defined as \(\kappa:=\kappa(\boldsymbol{\mathcal{T}}^{*}):=\underline{\lambda}^{*-1}\overline{\lambda}^{*}\) where \(\overline{\lambda}^{*}:=\max_{k=1,\ldots,m}\left\{\sigma_{1}\left(\mathfrak{M}_{k}(\boldsymbol{\mathcal{T}}^{*})\right)\right\}\).
**Assumption 2** (Noise condition II).: _There exists an \(\varepsilon>0\) such that \(\gamma:=\big{(}\mathbb{E}|\xi|^{2+\varepsilon}\big{)}^{1/(2+\varepsilon)}<+\infty\) and the noise term has median zero \(H_{\xi}(0)=\frac{1}{2}\). Also, there exist \(b_{0},b_{1}>0\) such that3_
Footnote 3: The lower bound can be slightly relaxed to \(|H_{\xi}(x)-H_{\xi}(0)|\geq|x|/b_{0}\) for all \(|x|\leq C_{m,\mu^{*},r^{*},\kappa}\gamma\).
\[h_{\xi}(x)\geq b_{0}^{-1},\quad\text{ for all }|x|\leq C_{m,\mu^{*},r^{*}, \kappa}\gamma;\] \[h_{\xi}(x)\leq b_{1}^{-1},\quad\text{ for all }x\in\mathbb{R},\]
_where \(C_{m,\mu^{*},r^{*},\kappa}:=(5m+1)^{2}6^{m}\kappa^{m}\mu^{*m(m+1)/2}(r^{*})^{ (m+1)/2}\)._
A simple fact of Assumption 2 is \(b_{1}\leq b_{0}\) and \(b_{0}\geq C_{m,\mu^{*},r^{*},\kappa}\gamma\). Compared with the noise condition in Assumption 1, an additional upper bound of the noise density is imposed but the symmetry requirement is waived. See Alquier et al. (2019); Elsener and van de Geer (2018); Shen et al.
(2023) for comparable noise assumptions for treating various types of loss functions. The constant \(C_{m,\mu^{*},r^{*},\kappa}\) does not depend on the tensor dimensions. If \(m,\mu^{*},r^{*},\kappa\) are regarded as constants, we have \(b_{0}\asymp b_{1}\asymp\gamma\asymp\mathbb{E}|\xi|\) for Gaussian, Student's t, and zero-symmetric Pareto distributions, etc.
The absolute loss also exhibits a two-phase regularity property even in the presence of additional sparse corruptions. These properties play an essential role in characterizing the convergence dynamics of Algorithm 2. Here \(\mu\) is any positive constant.
**Lemma 2** (Two-phase regularity properties of absolute loss).: _Suppose \(\mathbf{\Xi}\) contains i.i.d. entries satisfying Assumption 2 and \(\mathbf{\mathcal{S}}\) is \(\alpha\)-fraction sparse with its non-zero entries being arbitrary values. Then there exist absolute constants \(c,c_{1},c_{2}>0\) such that with probability exceeding \(1-c\sum_{k=1}^{m}d_{k}(d_{k}^{-})^{-1-\min\{1,\varepsilon\}}-\exp\left(-\textsf{ DoF}_{m}/2\right)\), the following facts hold._
1. _For all_ \(\mathbf{\mathcal{T}}\in\mathbb{R}^{d_{1}\times\cdots\times d_{m}}\) _and any sub-gradient_ \(\mathbf{\mathcal{G}}\in\partial\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\|_{1}\)_, we have_ \[\left\|\mathcal{P}_{\mathbb{T}}(\mathbf{\mathcal{G}})\right\|_{\mathrm{F}}\leq d ^{*1/2},\] _Furthermore, for each_ \(k\in[m]\) _and_ \(j\in[d_{k}]\)_, if_ \(\mathbf{\mathcal{T}}\in\mathbb{M}_{\mathbf{r},\mu}\)_, then_ \[\left\|\mathfrak{M}_{k}\left(\mathcal{P}_{\mathbb{T}}(\mathbf{ \mathcal{G}})\right)\right\|_{2,\infty}\leq(3\mu r_{k}\cdot d_{k}^{-})^{1/2},\] \[\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}})_{j, \cdot}\right\|_{1}-\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}^{*}-\mathbf{\mathcal{ Y}})_{j,\cdot}\right\|_{1}\] \[\qquad\qquad\geq\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}-\mathbf{ \mathcal{T}}^{*})_{j,\cdot}\right\|_{\infty}^{-1}\left(\left\|\mathfrak{M}_{k} (\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*})_{j,\cdot}\right\|_{\mathrm{F}}^{2}-2 \alpha d_{k}^{-}\left\|\mathfrak{M}_{k}(\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*} )_{j,\cdot}\right\|_{\infty}^{2}\right)-6d_{k}^{-}\gamma.\]
2. _For all_ \(\mathbf{\mathcal{T}}\in\mathbb{M}_{\mathbf{r},\mu}\) _and any sub-gradient_ \(\mathbf{\mathcal{G}}\in\partial\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\|_{1}\) _with_ \(\mathbf{\mathcal{T}}\) _satisfying_ \(\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{\infty}\leq C_{m,\mu^{*},r^{*},\kappa}\gamma\) _and_ \(\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}}\geq c_{1}b_{ 0}\cdot\max\left\{\textsf{DoF}_{m}^{1/2},\ \alpha\big{(}(m+1)(\mu^{*}\vee\mu)^{m}r^{*}d^{*}\big{)}^{1/2}\right\}\)_, we have_ \[\left\|\mathcal{P}_{\mathbb{T}}(\mathbf{\mathcal{G}})\right\|_{\mathrm{F}}\leq c_{2}(m+1)^{1/2 }\cdot b_{1}^{-1}\cdot\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{ \mathrm{F}},\quad\left\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{Y}}\right\|_{1}-\left\| \mathbf{\mathcal{T}}^{*}-\mathbf{\mathcal{Y}}\right\|_{1}\geq(2b_{0})^{-1}\cdot\left\| \mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}}^{2}.\]
Compared with Lemma 1, the second phase property (2) in Lemma 2 only holds in the restricted subset of \(\mu\)-incoherent tensors. This additional restriction comes from dealing with the presence of arbitrary sparse outliers. We note that the probability can be improved to \(1-\Omega\big{(}\sum_{k=1}^{m}d_{k}\exp(-d_{k})+\exp(-\textsf{DoF}_{m}/2)\big{)}\) if the random noise \(\xi\) has sub-Gaussian tails.
**Theorem 2**.: _Suppose \(\mathbf{\Xi}\) contains i.i.d. entries satisfying Assumption 2 and \(\mathbf{\mathcal{S}}\) is \(\alpha\)-fraction sparse with its non-zero entries being arbitrary values. Let \(c_{m,\mu^{*},r^{*}}:=(5m+1)^{-2}(3^{m}\mu^{*m}r^{*})^{-1}\) and set \(\tau_{1}\in c_{m,\mu^{*},r^{*}}^{-1}\cdot[12,24]\) and \(\tau_{2}\in\mu^{*m}r^{*}\cdot[1,\ 2]\). There exist absolute constants \(D_{0},c,c^{\prime},c_{1},c_{2}>0\) such that if the initialization satisfies \(\|\mathbf{\mathcal{T}}_{0}-\mathbf{\mathcal{T}}^{*}\|_{\infty}\leq D_{0}/d^{*1/2}\leq c (b_{1}/b_{0})^{2}(m^{4}3^{m}\mu^{*m}r^{*})^{-1}\underline{\lambda}^{*}/d^{*1/2}\), initial stepsize satisfies \(\eta_{0}\in D_{0}\cdot(5m+1)^{-2}(3^{m}\mu^{m}r^{*}d^{*})^{-1/2}\cdot[0.125,\ 0.375]\) and corruption rate is bounded with \(\alpha\leq\big{(}12(5m+1)^{2}3^{m}\mu^{*m}r^{*}\big{)}^{-1}\), then with probability at least \(1-c^{\prime}\sum_{k=1}^{m}d_{k}(d_{k}^{-})^{-1-\min\{1,\varepsilon\}}-\exp \left(-\textsf{DoF}_{m}/2\right)\), Algorithm 2 exhibits the following dynamics:_
1. _in phase one, namely for the_ \(l\)_-th iteration satisfying_ \(\left(1-c_{m,\mu^{*},r^{*}}/32\right)^{l}D_{0}\geq 12c_{m,\mu^{*},r^{*}}^{-1/2}d ^{*1/2}\gamma\)_, by choosing a stepsize_ \(\eta_{l}=\left(1-c_{m,\mu^{*},r^{*}}/32\right)^{l}\eta_{0}\)_, we have_ \[\left\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\right\|_{ \mathrm{F}}\leq\left(1-c_{m,\mu^{*},r^{*}}/32\right)^{l+1}D_{0},\] \[\left\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\right\|_{ \infty}\leq\frac{1}{\sqrt{c_{m,\mu^{*},r^{*}}d^{*}}}\cdot\left(1-c_{m,\mu^{*}, r^{*}}/32\right)^{l+1}D_{0};\]
2. _in phase two, namely for the_ \(l\)_-th iteration satisfying_ \(c_{1}b_{0}\cdot\max\left\{\mathsf{DoF}_{m}^{1/2},\alpha\big{(}(m+1)\mu^{*m}r^{*}d^{*}\big{)}^{1/2}\right\}\leq\left\|\boldsymbol{\mathcal{T}}_{l}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}\leq 12c_{m,\mu^{*},r^{*}}^{-1/2}d^{*1/2}\gamma\)_, by choosing a constant step size_ \(\eta_{l}=\eta\in b_{0}^{2}\big{(}c_{1}^{2}b_{1}(m+1)\big{)}^{-1}[1,\ 3]\)_, we have_ \[\left\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}\leq\left(1-\frac{(b_{1}^{2}/b_{0}^{2})}{32c_{1}^{2}(m+1)}\right)\left\|\boldsymbol{\mathcal{T}}_{l}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}.\]
_Therefore, after at most \(\tilde{l}=O\big{(}\log(\underline{\lambda}^{*}/\sqrt{d^{*}}\gamma)+\log(\gamma/b_{0})+\min\{\log(d^{*}/\mathsf{DoF}_{m}),\log(1/\alpha)\}\big{)}\) iterations, Algorithm 2 outputs an estimator achieving the error rate \(\left\|\boldsymbol{\mathcal{T}}_{\tilde{l}}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}^{2}=O\big{(}b_{0}^{2}\cdot\left(\mathsf{DoF}_{m}+\alpha^{2}d^{*}\right)\big{)}\) if treating \(\mu^{*},m\) as constants, holding with the aforementioned probability._
Basically, Algorithm 2 enjoys a two-phase linear convergence with the scheduled step sizes. The phase-one convergence terminates after \(l_{1}=O(\log(\underline{\lambda}^{*}/\sqrt{d^{*}}\gamma))\) iterations and the output satisfies \(\left\|\boldsymbol{\mathcal{T}}_{l_{1}}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}\leq 12\big{(}c_{m,\mu^{*},r^{*}}^{-1}d^{*}\big{)}^{1/2}\gamma\) and \(\left\|\boldsymbol{\mathcal{T}}_{l_{1}}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\infty}\leq 12c_{m,\mu^{*},r^{*}}^{-1}\gamma\). The phase-two convergence lasts for at most \(l_{2}=O(\log(\gamma/b_{0})+\min\{\log(d^{*}/\mathsf{DoF}_{m}),\log(1/\alpha)\})\) iterations and the algorithm finally outputs an estimator with error rate \(\left\|\boldsymbol{\mathcal{T}}_{l_{1}+l_{2}}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}^{2}=O_{p}\big{(}b_{0}^{2}\cdot\left(\mathsf{DoF}_{m}+\alpha^{2}d^{*}\right)\big{)}\) where \(\mu^{*},m,r^{*}\) are regarded as constants. The first term \(b_{0}^{2}\cdot\mathsf{DoF}_{m}\) is sharp in terms of the model complexity. The model complexity \(\mathsf{DoF}_{m}\) dominates \(\alpha^{2}d^{*}\) if the corruption rate \(\alpha=O\big{(}(\mathsf{DoF}_{m}/d^{*})^{1/2}\big{)}\), improving on the prior work of Cai et al. (2022b). Note that if the random noise \(\boldsymbol{\Xi}\) is absent so that \(\gamma=0\), Theorem 2 implies that Algorithm 2 exactly recovers the ground truth \(\boldsymbol{\mathcal{T}}^{*}\) after phase-one iterations, enjoying both Frobenius norm and sup norm convergence guarantees. This cannot be achieved by the convex approaches studied in Agarwal et al. (2012) and Klopp et al. (2017).
Optimality w.r.t. corruption rate The support size of \(\boldsymbol{\mathcal{S}}\) is at most \(\alpha d^{*}\), implying that the associated model complexity is \(O(\alpha d^{*})\). Thus a seemingly natural guess for the optimal error rate would be \(O_{p}(b_{0}^{2}\cdot\alpha d^{*})\). This is indeed what has appeared in the existing literature. See, e.g., Agarwal et al. (2012); Klopp et al. (2017); Cai et al. (2022b) and references therein. Intriguingly, Theorem 2 shows that Algorithm 2 achieves an error rate with a faster dependence on the corruption rate, namely \(O_{p}(b_{0}^{2}\cdot\alpha^{2}d^{*})\). This rate turns out to be minimax optimal, with a matching lower bound established in the next section. The improvement comes from the benefit of the absolute loss, compared with the square loss used in the foregoing works. Denote by \(\tilde{\Omega}\) the support of \(\boldsymbol{\mathcal{S}}\); an upper bound for \(\|[\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}]_{\tilde{\Omega}}\|_{\mathrm{F}}\) is often needed for incoherent matrices/tensors \(\mathbf{\mathcal{T}}\) and \(\mathbf{\mathcal{T}}^{*}\). Cai et al. (2022b) bounds this term by \(\|[\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}]_{\tilde{\Omega}}\|_{\mathrm{F}}=O\big{(}\alpha^{1/2}\cdot\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\|_{\mathrm{F}}\big{)}\). An additional factor \(\alpha^{1/2}\) appears when using the absolute loss, since \(\|[\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}]_{\tilde{\Omega}}\|_{1}=O\big{(}\alpha d^{*1/2}\cdot\|\mathbf{\mathcal{T}}-\mathbf{\mathcal{T}}^{*}\|_{\mathrm{F}}\big{)}\).
### Minimax lower bound
We now establish the minimax lower bounds of robust tensor decomposition in the existence of both dense noise and sparse corruptions. For simplicity, we assume the dense noise tensor \(\mathbf{\Xi}\) comprises of i.i.d. Gaussian entries and the support of \(\mathbf{\mathcal{S}}\) is randomly sampled with probability \(\alpha\), following the typical scheme used in Candes et al. (2011); Yi et al. (2016); Chen et al. (2021b). The proof of Theorem 3 borrows the idea used in studying Huber's contamination model (Chen et al., 2018).
**Theorem 3**.: _Suppose the entries of \(\mathbf{\Xi}\) are i.i.d. with distribution \(N(0,\sigma^{2})\). Let \(\alpha\in(0,1)\) and suppose the entries of \(\mathbf{\mathcal{S}}\) follow the distribution \([\mathbf{\mathcal{S}}]_{\omega}\sim(1-\alpha)\delta_{0}+\alpha Q_{\omega}\), where \(Q_{\omega}\) is an arbitrary distribution and \(\delta_{0}\) is the point mass at zero, for all \(\omega\in[d_{1}]\times\cdots\times[d_{m}]\). Then there exist absolute constants \(c,C>0\) such that_
\[\inf_{\widehat{\mathbf{\mathcal{T}}}}\sup_{\mathbf{\mathcal{T}}^{*}\in\mathbb{M}_{ \mathrm{r},\mu^{*}}}\sup_{\{Q_{\omega}\}}\mathbb{P}\left(\left\|\widehat{\mathbf{ \mathcal{T}}}-\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}}^{2}\geq\sigma^{2}\max \big{\{}\textsf{DoF}_{m},\;C\alpha^{2}d^{*}/(\mu^{*m}r^{*})\big{\}}\right)\geq c,\]
_where \(\widehat{\mathbf{\mathcal{T}}}\) is any estimator of \(\mathbf{\mathcal{T}}^{*}\) based on an observation \(\mathbf{\mathcal{Y}}=\mathbf{\mathcal{T}}^{*}+\mathbf{\Xi}+\mathbf{\mathcal{S}}\)._
## 5 Algorithmic Parameter Selection and Initialization
Algorithmic parameter selection The initial stepsize and the two-phase stepsizes can be selected similarly to Shen et al. (2023). We only need to discuss the selection of the truncation parameters \(\tau_{1},\tau_{2}\) in the second phase of Algorithm 2. It is important to note that \(\tau_{1},\tau_{2}\) are determined by the incoherence \(\mu^{*}\) and the noise level \(\gamma\). We can estimate \(\mu^{*}\) and \(\gamma\) based on the phase-one output \(\mathbf{\mathcal{T}}_{l_{1}}\). In fact, according to the proof of Theorem 2, we have \(\mu^{*}/2\leq\mu(\mathbf{\mathcal{T}}_{l_{1}})\leq 2\mu^{*}\). This allows us to obtain a satisfactory estimate of the oracle \(\mu^{*}\). As for \(\gamma\), we have \(\left\|\mathbf{\mathcal{T}}_{l_{1}}-\mathbf{\mathcal{T}}^{*}\right\|_{\infty}\asymp\gamma\) with high probability. Thus, the median \(\mathrm{med}(|\mathbf{\mathcal{T}}_{l_{1}}-\mathbf{\mathcal{Y}}|)\) gives a rough estimate of the noise scale \(\gamma\). Moreover, in simulations the sequence \(\{\mathbf{\mathcal{T}}_{l}\}_{l\geq 1}\) maintains incoherence automatically, so in practice the truncation and trimming steps are not needed. A proof that the \(\ell_{1}\) loss implicitly maintains incoherence is left for future study.
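For illustration, the following NumPy sketch (with illustrative function names) computes the incoherence of a low-rank tensor from its mode-wise singular subspaces, together with the median-of-residuals proxy for \(\gamma\) described above.

```python
import numpy as np

def incoherence(T, ranks):
    # mu(T) = max_k (d_k / r_k) * max_j ||e_j^T U_k||^2, where U_k spans the top-r_k
    # left singular subspace of the mode-k matricization.
    mus = []
    for k, r in enumerate(ranks):
        M = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        mus.append(T.shape[k] / r * np.max(np.sum(U**2, axis=1)))
    return max(mus)

def noise_scale_estimate(Y, T_phase1):
    # med(|T_l1 - Y|) as a rough proxy for the noise scale gamma.
    return np.median(np.abs(T_phase1 - Y))
```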
Initialization We now present an initialization method that works under both dense noise and sparse arbitrary corruptions. See model (2). Note that Auddy and Yuan (2022) proposed an
initialization method based on Catoni's estimator (Minsker, 2018), which only considers the case of heavy-tailed noise. The robust low-rank matrix works of Wang and Fan (2022) and Cai et al. (2022b) use truncation for initialization, providing guarantees for the heavy-tailed noise case and for sub-Gaussian noise plus sparse corruptions, respectively. Dong et al. (2022) provides initialization guarantees for the noiseless case. Our initialization approach is inspired by the truncation method (Fan et al., 2016). We begin by truncating the observed tensor \(\boldsymbol{\mathcal{Y}}\) at a threshold selected at the level \(\tau\asymp\big{(}\|\boldsymbol{\mathcal{T}}^{*}\|_{\infty}+d^{*1/8}\,\|\xi\|_{4}\big{)}\). Here we write \(\|\xi\|_{4}:=(\mathbb{E}\xi^{4})^{1/4}\) for short. The truncation step yields
\[[\hat{\boldsymbol{\mathcal{Y}}}]_{\omega}:=[\boldsymbol{\mathcal{Y}}]_{\omega} \cdot 1_{\{|[\boldsymbol{\mathcal{Y}}]_{\omega}|\leq\tau\}}+\tau\cdot\mathrm{ sign}\,([\boldsymbol{\mathcal{Y}}]_{\omega})\cdot 1_{\{|[\boldsymbol{\mathcal{Y}}]_{\omega}|>\tau\}}, \quad\forall\omega\in[d_{1}]\times\cdots\times[d_{m}].\]
Finally, we apply spectral initialization and obtain \(\boldsymbol{\mathcal{T}}_{0}:=\mathrm{HOSVD}_{\mathbf{r}}(\hat{\boldsymbol{ \mathcal{Y}}})\).
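A minimal sketch of this initialization is given below. The entry-wise truncation is the sign-preserving cap just defined, and the spectral step reuses the same sequentially truncated HOSVD projection as in the earlier sketch; the threshold \(\tau\) is assumed to be supplied by the user.

```python
import numpy as np

def robust_init(Y, ranks, tau):
    # Entry-wise truncation of the observations at level tau, followed by a
    # spectral (HOSVD-type) projection onto Tucker rank r.
    out = np.clip(Y, -tau, tau)
    for k, r in enumerate(ranks):
        M = np.moveaxis(out, k, 0).reshape(out.shape[k], -1)
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        M = U @ (U.T @ M)
        out = np.moveaxis(M.reshape((out.shape[k],) + tuple(np.delete(out.shape, k))), 0, k)
    return out

# In the theory, tau is of the order ||T*||_inf + d*^{1/8} * ||xi||_4;
# in practice both quantities would themselves be estimated from the data.
```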
**Theorem 4**.: _Suppose the noise tensor \(\boldsymbol{\Xi}\) has i.i.d. entries with a finite \((4+\varepsilon)\) moment for any \(\varepsilon>0\) and \(\boldsymbol{\mathcal{S}}\) has independent entries with \([\boldsymbol{\mathcal{S}}]_{\omega}\sim(1-\alpha)\delta_{0}+\alpha Q_{\omega}\) where \(Q_{\omega}\) is an arbitrary distribution. There exist \(c_{0},c,c_{1},C,C_{1},C_{2},C_{3}>0\) such that if \(d^{*}\geq\mu^{*m}r^{*}\kappa\bar{d}\log\bar{d}\), truncation level \(\tau\in(\|\boldsymbol{\mathcal{T}}^{*}\|_{\infty}+d^{*1/8}\|\xi\|_{4})\cdot[C_ {1},C_{2}]\), signal strength \(\lambda^{*}/\|\xi\|_{4}\geq C_{3}m\kappa\sqrt{r^{*}}\max\{(\bar{d}\log\bar{d}) ^{1/2},\;d^{*1/4}(\log\bar{d})^{1/4}\}\), and corruption rate \(\alpha\leq c_{1}\min\{(\underline{\lambda}^{*}/\|\xi\|_{4})/d^{*5/8},1/(\mu^{* m}r^{*})\}/(m\kappa^{2}\sqrt{r^{*}})\), then with probability at least \(1-cd^{*-\varepsilon/4}-\sum_{k=1}^{m}d_{k}^{-}\exp(-\alpha d_{k})\), we have_
\[\|\boldsymbol{\mathcal{T}}_{0}-\boldsymbol{\mathcal{T}}^{*}\|_{ \mathrm{F}} \leq C_{3}m\kappa\sqrt{r^{*}}\left(\left(\|\xi\|_{4}+\|\boldsymbol {\mathcal{T}}^{*}\|_{\infty}\right)\cdot\left(\sqrt{\bar{d}\log\bar{d}}+4d^{* 1/4}(\log\bar{d})^{1/4}\right)+2\alpha\tau\sqrt{d^{*}}\right),\] \[\|\boldsymbol{\mathcal{T}}_{0}-\boldsymbol{\mathcal{T}}^{*}\|_{ \infty} \leq C_{3}m^{2}\kappa^{2}\sqrt{r^{*}}\sqrt{\frac{\mu^{*m}r^{*}}{d ^{*}}}\left(\left(\|\xi\|_{4}+\|\boldsymbol{\mathcal{T}}^{*}\|_{\infty}\right) \cdot\left(\sqrt{\bar{d}\log\bar{d}}+4d^{*1/4}(\log\bar{d})^{1/4}\right)+2 \alpha\tau\sqrt{d^{*}}\right).\]
For ease of exposition, suppose that \(m,\mu^{*},r^{*},\kappa=O(1)\). Theorem 4 shows that \(\boldsymbol{\mathcal{T}}_{0}\) satisfies the initialization condition required in Theorem 2 if the signal strength satisfies \(\underline{\lambda}^{*}/\|\xi\|_{4}=\Omega\big{(}\max\{\sqrt{\bar{d}\log\bar{d}},(d^{*}\log\bar{d})^{1/4}\}\big{)}\) and the corruption rate is bounded as \(\alpha=O\big{(}\min\{(\underline{\lambda}^{*}/\|\xi\|_{4})/d^{*5/8},\ 1/(\mu^{*m}r^{*})\}\big{)}\). The signal-to-noise ratio requirement is near optimal, up to an extra \(\log^{1/2}\bar{d}\) factor (Zhang and Xia, 2018). The corruption rate requirement is weaker than that of Cai et al. (2022b). An initialization guarantee for Theorem 1 can be attained in a similar fashion.
## 6 Missing Values, Sample Splitting and Optimality
While Theorems 1 and 2 demonstrate that both pseudo-Huber tensor decomposition and quantile tensor decomposition can yield estimators that are minimax optimal in Frobenius norm, the derived entry-wise error rates are generally sub-optimal. This remains the case even though powerful techniques like leave-one-out have been utilized. This sub-optimality, which is due to the non-smoothness of loss functions, has also been observed in Wang and Fan (2022). However, we believe
that this sub-optimality is a result of technical difficulty and can be addressed using a simple sample splitting trick. We hope that the positive insights from this section can inspire future research to tackle this technically unresolved problem.
For technical simplicity, we focus on the sampling with replacement model, commonly used in matrix and tensor completion literature (Cai and Zhou, 2016; Elsener and van de Geer, 2018; Xia et al., 2021; Cai et al., 2022c). Let \(\{(Y_{i},\mathbf{\mathcal{X}}_{i})\}_{i=1}^{N}\) be independent observations where \(\mathbf{\mathcal{X}}_{i}\) is uniformly sampled from the set \(\mathbf{\mathcal{X}}:=\{\mathbf{e}_{\omega}:\omega\in[d_{1}]\times\cdots\times[d_ {m}]\}\). Here the tensor \(\mathbf{e}_{\omega}\) has value \(1\) on its entry \(\omega\) and \(0\)'s everywhere else. The response \(Y_{i}\) satisfies the trace-regression model
\[Y_{i}=\langle\mathbf{\mathcal{X}}_{i},\mathbf{\mathcal{T}}^{*}\rangle+\xi_{i}+s_{i},\]
where \(\xi_{i}\)'s are i.i.d. (potentially) heavy-tailed noise and \(s_{i}\sim(1-\alpha)\delta_{0}+\alpha Q_{\omega_{i}}\) represents a potentially arbitrary corruption. Here \(Q_{\omega_{i}}\) denotes an arbitrary distribution and \(\alpha\in[0,1)\) is the corruption rate, following Huber's contamination model (Chen et al., 2016, 2018). We split the data into \(M+1\) non-overlapping sub-samples and, without loss of generality, assume \(N=(M+1)n\) for some integer \(n\). Here \(M+1\) denotes the total number of iterations of our algorithm. Denote the \(M+1\) sub-samples as \(\mathcal{D}_{l}=\cup_{i=1}^{n}\{(Y_{i}^{(l)},\mathbf{\mathcal{X}}_{i}^{(l)})\}\) and \(\cup_{l=0}^{M}\mathcal{D}_{l}=\{(Y_{i},\mathbf{\mathcal{X}}_{i})\}_{i=1}^{N}\). We still apply the Riemannian sub-gradient descent algorithm to minimize the absolute loss, but at the \(l\)-th iteration, the algorithm is only implemented on the \(l\)-th sub-sample data. The sample splitting ensures the independence across iterations. The detailed implementation can be found in Algorithm 3.
```
Input: observations \(\{\mathcal{D}_{l}\}_{l=0}^{M}\), max iterations \(M+1\), step sizes \(\{\eta_{l}\}_{l=0}^{M}\).
Initialization: \(\mathbf{\mathcal{T}}_{0}\in\mathbb{M}_{\mathbf{r}}\) is based on \(\mathcal{D}_{0}\)
for \(l=0,\ldots,M-1\) do
    Choose a vanilla sub-gradient: \(\mathbf{\mathcal{G}}_{l}\in\partial\sum_{i=1}^{n}|Y_{i}^{(l+1)}-\langle\mathbf{\mathcal{X}}_{i}^{(l+1)},\mathbf{\mathcal{T}}_{l}\rangle|\)
    Compute Riemannian sub-gradient: \(\widetilde{\mathbf{\mathcal{G}}}_{l}=\mathcal{P}_{\mathbb{T}_{l}}(\mathbf{\mathcal{G}}_{l})\)
    Retraction to \(\mathbb{M}_{\mathbf{r}}\): \(\mathbf{\mathcal{T}}_{l+1}=\text{HOSVD}_{\mathbf{r}}(\mathbf{\mathcal{T}}_{l}-\eta_{l}\widetilde{\mathbf{\mathcal{G}}}_{l})\)
end for
Output: \(\widehat{\mathbf{\mathcal{T}}}=\mathbf{\mathcal{T}}_{M}\)
```
**Algorithm 3** Riemannian Sub-gradient Descent with Sample Splitting
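For illustration, one iteration of Algorithm 3 only needs the sub-gradient of the sub-sample loss, which is a sparse tensor supported on the observed cells. The sketch below uses illustrative names and, as in the earlier sketches, relies on an HOSVD-type retraction (shown only in the usage comment); it is not the exact implementation.

```python
import numpy as np

def l1_subgradient_from_sample(T, idx, y):
    """Sub-gradient of sum_i |y_i - T[idx_i]| with respect to the full tensor T.

    idx: (n, m) integer array of observed multi-indices; y: (n,) responses.
    The result carries sign(T[idx_i] - y_i) at each observed cell.
    """
    G = np.zeros_like(T)
    cells = tuple(idx.T)                          # one index array per mode
    np.add.at(G, cells, np.sign(T[cells] - y))    # accumulate signs at observed cells
    return G

# One iteration of the sample-split scheme, with st_hosvd as in the earlier sketch:
#   G_l = l1_subgradient_from_sample(T_l, idx_subsample[l + 1], y_subsample[l + 1])
#   T_next = st_hosvd(T_l - eta_l * G_l, ranks)
```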
**Assumption 3** (Noise condition III).: _There exists an \(\varepsilon>0\) such that \(\gamma:=\left(\mathbb{E}|\xi|^{1+\varepsilon}\right)^{1/(1+\varepsilon)}<+\infty\) and the noise term has median zero \(H_{\xi}(0)=\frac{1}{2}\). Also, there exist \(b_{0},b_{1}>0\) such that the noise
density satisfies 4_
Footnote 4: The lower bound can be slightly relaxed to \(|H_{\xi}(x)-H_{\xi}(0)|\geq|x|/b_{0}\) for all \(|x|\leq C_{m,\mu^{*},r^{*}}\gamma\).
\[h_{\xi}(x)\geq b_{0}^{-1},\quad\text{ for all }|x|\leq C_{m,\mu^{*},r^{*} }\gamma;\] \[h_{\xi}(x)\leq b_{1}^{-1},\quad\text{ for all }x\in\mathbb{R},\]
_where \(C_{m,\mu^{*},r^{*}}:=(5m+1)^{2}6^{m}\mu^{*m}r^{*}\)._
Compared with Assumptions 1 and 2, here we only require a finite \((1+\varepsilon)\)-th moment. The following theorem establishes the convergence dynamics of Algorithm 3. Recall that \(\bar{d}\) denotes \(\max_{j\in[m]}d_{j}\).
**Theorem 5**.: _Suppose Assumption 3 holds. There exist positive constants \(D_{0},\{c_{m,\mu^{*},r^{*}}^{(i)}\}_{i=1}^{5},\{C_{m,\mu^{*},r^{*}}^{(j)}\}_{j=1}^ {5}\) depending only on \(m,\mu^{*},r^{*}\) such that if \(n\geq C_{m,\mu^{*},r^{*}}^{(1)}\bar{d}\log\bar{d}\), the initialization satisfies \(\|\boldsymbol{\mathcal{T}}_{0}-\boldsymbol{\mathcal{T}}^{*}\|_{\infty}\leq D _{0}/d^{*1/2}\leq c_{m,\mu^{*},r^{*}}^{(1)}(b_{1}/b_{0})^{2}\lambda^{*}/d^{*1/2}\), the initial stepsize \(\eta_{0}\in d^{*1/2}D_{0}/n\cdot[c_{m,\mu^{*},r^{*}}^{(2)},c_{m,\mu^{*},r^{*}} ^{(3)}]\), and corruption rate is bounded by \(\alpha\leq c_{m,\mu^{*},r^{*}}^{(4)}\), then with probability at least \(1-c_{m}Md^{*-10}\), Algorithm 3 exhibits the following dynamics:_
1. _in phase one, namely for the_ \(l\)_-th iteration satisfying_ \((1-c_{m,\mu^{*},r^{*}}^{(5)})^{l}D_{0}\geq C_{m,\mu^{*},r^{*}}^{(2)}\sqrt{d^{*}}\gamma\)_, by specifying a stepsize_ \(\eta_{l}=(1-c_{m,\mu^{*},r^{*}}^{(5)})^{l}\eta_{0}\)_, we have_ \[\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\|_{\mathrm{F}}\leq(1-c_{m,\mu^{*},r^{*}}^{(5)})^{l+1}D_{0},\] \[\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\|_{\infty}\leq\frac{C_{m,\mu^{*},r^{*}}^{(3)}}{\sqrt{d^{*}}}\cdot(1-c_{m,\mu^{*},r^{*}}^{(5)})^{l+1}D_{0};\]
2. _in phase two, namely for the_ \(l\)_-th iteration satisfying_ \(C_{m,\mu^{*},r^{*}}^{(4)}b_{0}\cdot\max\{(n^{-1}\cdot\textsf{DoF}\log\bar{d}) ^{1/2},\;\alpha\}\leq\left\|\boldsymbol{\mathcal{T}}_{l}-\boldsymbol{ \mathcal{T}}^{*}\right\|_{\mathrm{F}}/d^{*1/2}\leq C_{m,\mu^{*},r^{*}}^{(1)}\gamma\)_, by choosing a constant stepsize satisfying_ \(\eta_{l}=\eta\in(b_{1}^{2}/b_{0})d^{*}/n\cdot[c_{m,\mu^{*},r^{*}}^{(6)},c_{m, \mu^{*},r^{*}}^{(7)}]\)_, we have_ \[\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\|_ {\mathrm{F}} \leq(1-c_{m,\mu^{*},r^{*}}^{(8)})^{l+1-l_{1}}\left\|\boldsymbol{ \mathcal{T}}_{l_{1}}-\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}},\] \[\|\boldsymbol{\mathcal{T}}_{l+1}-\boldsymbol{\mathcal{T}}^{*}\|_ {\infty} \leq\frac{C_{m,\mu^{*},r^{*}}^{(5)}}{\sqrt{d^{*}}}\cdot(1-c_{m, \mu^{*},r^{*}}^{(8)})^{l+1-l_{1}}\left\|\boldsymbol{\mathcal{T}}_{l_{1}}- \boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}},\] _where_ \(\boldsymbol{\mathcal{T}}_{l_{1}}\) _is the output of the first phase and_ \(l_{1}=O\big{(}\log(\underline{\lambda}^{*}/\sqrt{d^{*}}\gamma)\big{)}\)_. Therefore, by choosing_ \(M=\Omega\big{(}\log(\underline{\lambda}^{*}/\sqrt{d^{*}}\gamma)+\log(\gamma/ b_{0})+\min\{\log(n/\textsf{DoF}_{m}),\log(1/\alpha)\}\big{)}\)_, Algorithm 3 outputs an estimator_ \(\widehat{\boldsymbol{\mathcal{T}}}=\boldsymbol{\mathcal{T}}_{M}\) _achieving the error rate_ \[d^{*-1}\|\widehat{\boldsymbol{\mathcal{T}}}-\boldsymbol{\mathcal{T}}^{*}\|_{ \mathrm{F}}^{2}=O\Big{(}b_{0}^{2}\cdot\Big{(}\frac{\textsf{DoF}_{m}\log\bar{ d}}{n}+\alpha^{2}\Big{)}\Big{)};\] \[\|\widehat{\boldsymbol{\mathcal{T}}}-\boldsymbol{\mathcal{T}}^{*}\|_{ \infty}^{2}=O\Big{(}b_{0}^{2}\cdot\Big{(}\frac{\textsf{DoF}_{m}\log\bar{d}}{n}+ \alpha^{2}\Big{)}\Big{)},\] _if treating_ \(\mu^{*},m\) _as constants, holding with the aforementioned probability._
By ignoring the log terms involved in \(M\), the established rates of \(\widehat{\mathbf{\mathcal{T}}}\) in Frobenius norm and sup-norm are minimax optimal with respect to the sample size \(n\), the degrees of freedom \(\mathsf{DoF}_{m}\), and the corruption rate \(\alpha\). The sample size requirement \(n=\Omega_{m,\mu^{*},r^{*}}(\bar{d}\log\bar{d})\) is sharp in view of existing works (Xia and Yuan, 2019; Cai et al., 2022c). Theorem 5 also allows a wide range of corruption rates under Huber's contamination model.
## 7 Numerical Simulations
We evaluate the convergence of our algorithm (abbreviated as RsGrad) and the error rate of the resulting estimator, comparing them with two recent methods (Cai et al., 2022b; Auddy and Yuan, 2022). We present the simulation results from two perspectives: convergence dynamics and the accuracy of the output. In fact, Algorithms 1 and 2 demonstrate considerable tolerance with respect to parameter selection. Specifically, the stepsize decay rate in the first phase can take values in the range \(0.8<q<1\), all of which lead to roughly similar performance. Furthermore, a selection of \(\eta\in[0.01,0.1]\) for the second-phase stepsize is acceptable and does not significantly influence the accuracy.
Algorithm convergence We assess the convergence dynamics of our algorithm in comparison with RGrad (Cai et al., 2022b), for which algorithmic parameters are exhaustively searched. Dimensions are set as \(d_{1}=d_{2}=d_{3}=100\) and Tucker rank as \(r_{1}=r_{2}=r_{3}=2\). Figure 3 represents the scenario under Student's \(t\)-distributed noise with degrees of freedom \(\nu=2.01\), in the absence of sparse corruptions. The left panel, Figure 3(a), illustrates a low signal-to-noise ratio scenario where \(\left\|\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}}/\mathbb{E}|\xi|=300\). In this setting, the signal-to-noise ratio fulfills the condition \(\underline{\lambda}^{*}\leq\gamma d^{*1/2}\); according to Theorem 1 and Theorem 2, the algorithm should bypass phase one and directly enter phase two. As expected, Figure 3(a) shows that the iterations do enter the second phase after a few steps, aligning with our theoretical analysis. Conversely, Figure 3(b) demonstrates a high signal-to-noise ratio setting where \(\left\|\mathbf{\mathcal{T}}^{*}\right\|_{\mathrm{F}}/\mathbb{E}|\xi|=1500\), clearly exhibiting the two-phase convergence of RsGrad. In both cases (Figures 3(a) and 3(b)), RsGrad performs better. Figure 4 is plotted under conditions of both dense noise and sparse corruptions. For achieving the typical PCA optimal rate \(\mathsf{DoF}_{m}^{1/2}\) (Zhang and Xia, 2018), the corruption rate should be bounded by \((\mathsf{DoF}_{m}/d^{*})^{1/2}\approx 0.02\) according to Theorem 1. Therefore, we fix the corruption rate \(\alpha\) to be either \(0.01\) or \(0.02\). To differentiate from the scheme in Chen et al. (2021b), we set all the non-zero entries of the corruptions to large positive values exceeding \(100\times\left\|\mathbf{\mathcal{T}}^{*}\right\|_{\infty}\). The top two panels, Figures 4(a) and 4(b), depict the scenario under Student's t noise with degrees of freedom \(\nu=2.01\). The bottom two panels, Figures 4(c) and 4(d), illustrate the scenario under Gaussian noise. The results show that under heavy-tailed noise, RsGrad
significantly outperforms RGrad. Conversely, under Gaussian noise, RGrad and RsGrad exhibit similar performance.
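A possible way to generate the synthetic data used in these experiments is sketched below. The scaling convention for the signal-to-noise ratio and the exact corruption magnitude are our assumptions, chosen only to mimic the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, ranks, nu, alpha = (100, 100, 100), (2, 2, 2), 2.01, 0.02

# Low-rank signal: random core multiplied by random orthonormal factors in each mode.
T_star = rng.standard_normal(ranks)
for k, (d, r) in enumerate(zip(dims, ranks)):
    U = np.linalg.qr(rng.standard_normal((d, r)))[0]
    M = np.moveaxis(T_star, k, 0).reshape(T_star.shape[k], -1)
    T_star = np.moveaxis((U @ M).reshape((d,) + tuple(np.delete(T_star.shape, k))), 0, k)

# Rescale so that ||T*||_F / E|xi| hits the target signal-to-noise ratio.
snr = 1500.0
e_abs_xi = np.abs(rng.standard_t(nu, size=200000)).mean()   # Monte Carlo estimate of E|xi|
T_star *= snr * e_abs_xi / np.linalg.norm(T_star)

# Heavy-tailed dense noise plus alpha-fraction sparse corruptions with large positive values.
Xi = rng.standard_t(nu, size=dims)
S = np.where(rng.random(dims) < alpha, 100.0 * np.abs(T_star).max(), 0.0)
Y = T_star + Xi + S
```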
Accuracy We assess the accuracy of the output estimators by comparing them with the robust HOSVD approach (Auddy and Yuan, 2022). The robust HOSVD method employs Catoni's estimator for initialization, followed by a one-step power iteration. This approach achieves statistically optimal accuracy up to a logarithmic factor, with a smaller probability \(1-\Omega\big{(}(\log d)^{-1}\big{)}\). It is important to note that the robust HOSVD approach primarily provides eigenvector estimation for rank-one tensors under heavy-tailed noise. Consequently, we fix the setting to \(d_{1}=d_{2}=d_{3}=100\), \(r_{1}=r_{2}=r_{3}=1\), with Student's t noise with \(\nu=2.01\) degrees of freedom, and compare the accuracy of eigenvector estimation using the \(\sin\Theta\) distance. Figure 5 presents a box-plot based on 50 replications. The left panel pertains to a low signal-to-noise ratio setting, where \(\left\|\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}/\mathbb{E}|\xi|=150\), while the right panel corresponds to a scenario where \(\left\|\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}/\mathbb{E}|\xi|=1000\). The results demonstrate that RsGrad exhibits greater robustness against heavy-tailed noise, along with superior accuracy and reduced deviation, which aligns with the established theory.
Figure 3: Convergence dynamics of RGrad (Cai et al., 2022b), RsGrad-\(\ell_{1}\) (Algorithm 2) and RsGrad-Pseudo Huber (Algorithm 1) under Student \(t\) noise with d.f. \(\nu=2.01\). Dimension \(d_{1}=d_{2}=d_{3}=100\), Tucker rank \(r_{1}=r_{2}=r_{3}=2\).
Figure 4: Convergence dynamics of RGrad (Cai et al., 2022b), RsGrad-\(\ell_{1}\) (Algorithm 2) and RsGrad-PseudoHuber (Algorithm 1) under dense noise and sparse corruptions, with dimension \(d_{1}=d_{2}=d_{3}=100\), Tucker rank \(r_{1}=r_{2}=r_{3}=2\) and a high signal-to-noise ratio \(\left\|\boldsymbol{\mathcal{T}}^{*}\right\|_{\mathrm{F}}/\mathbb{E}|\xi|=1500\).
## 8 Real Data Applications
### Food balance dataset
We collected the Food Balance Dataset from [https://www.fao.org/faostat/en/#data/FBS](https://www.fao.org/faostat/en/#data/FBS). This dataset provides an intricate breakdown of a country or region's food supply during a specified period. Our analysis focuses on the food balance data in the year 2018. We incorporate all measurements for all items (excluding population), such as 'production quantity', 'import quantity', and 'food supply' for items like 'wheat and products' and 'apples and products'. It is crucial to acknowledge that some values in the dataset are imputed, while others are estimated, as per the notes on its website. This necessitates the use of robust statistical methods.
We first analyze the food balance data in Asian regions, consisting of 45 countries or regions, such as Yemen and Viet Nam. This yields a three-way tensor \(\mathsf{Region}\times\mathsf{Measurement}\times\mathsf{Items}\), sized \(45\times 20\times 97\). It is worth noting that some of the measurements are total values for the entire country over the year, while others represent per capita values per day; some indicate fat supply quantity, while others denote protein supply quantity. To unify the different measurements and negate the influence of population size, we scale the \(45\times 20\) vectors of size 97 such that each vector has unit Euclidean length. The entries of the scaled tensor depict the proportion of a specific food type overall, and the entire tensor can reflect the dietary habits of a country or a region. For instance, different regions may have preferences for various kinds of meat or oil, despite each type providing protein or fat. We employ the RsGrad algorithm with an input
Figure 5: Accuracy Comparisons of Robust HOSVD (Auddy and Yuan, 2022), RsGrad-\(\ell_{1}\) (Algorithm 2) and RsGrad-Pseudo Huber (Algorithm 1) under Studentβs \(t\) noise with d.f. \(\nu=2.01\), replicated 50 times, dimension \(d_{1}=d_{2}=d_{3}=100\) and Tucker rank \(r_{1}=r_{2}=r_{3}=1\).
Tucker rank of \((r_{1},r_{2},r_{3})=(5,2,5)\), as increasing the ranks does not significantly reduce the residuals. In fact, choices within the range \((2,2,2)-(10,5,10)\) yield similar results. We obtained Figure 6(a) by plotting the second component eigenvector against the first one along the Region direction. Southeast Asian countries, renowned for their Southeast Asian cuisine, occupy the top left of the figure. The center of the figure primarily consists of East Asian and South Asian countries or regions, which share similar dietary habits. The bottom right clusters West Asian countries that are geographically proximate. The figure effectively encapsulates the differences and similarities in dietary habits across Asia.
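As an illustration of this preprocessing, the sketch below normalizes each (region, measurement) fiber to unit length before the decomposition; the file name is hypothetical.

```python
import numpy as np

# X: Region x Measurement x Items tensor of size 45 x 20 x 97 (hypothetical file name).
X = np.load("food_balance_asia_2018.npy")

# Scale every (region, measurement) fiber of length 97 to unit Euclidean norm,
# so entries become proportions and population effects are removed.
norms = np.linalg.norm(X, axis=2, keepdims=True)
X_scaled = X / np.where(norms > 0, norms, 1.0)

# The scaled tensor is then fed to RsGrad with Tucker rank (5, 2, 5); regions are
# embedded using the leading mode-1 singular vectors of the fitted low-rank tensor.
```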
Studies by Cai et al. (2022) and Dong et al. (2022) have indicated that varying robustness parameters can yield significantly different results. Our method does not suffer from this ambiguity. Although soft thresholding (Dong et al., 2022) or quantile thresholding (Cai et al., 2022) can be employed to identify outliers, we provide a heatmap of the absolute residuals for the 'food supply' measurement in Figure 7(a). It shows that, barring a few outlying entries, the remaining values are sufficiently small. It reveals notable deviations in the supply of soybean oil in Taiwan, as well as maize supply in the Democratic People's Republic of Korea and Timor-Leste. Figure 7(b) presents a heatmap of the scaled dataset within the 'food supply' slice. However, it cannot identify the outlying entries, and can only illustrate which types of food are in high demand. In particular, some staple food columns such as rice and wheat stand out.
In parallel, Figures 6(b), 7(c), and 7(d) are derived from the European Food Balance Dataset. They
Figure 6: Food balance in Asia and Europe. Node embedding by the leading two eigenvectors are presented. In the left figure, Southeast Asian, East Asian and South Asian, West Asian countries or regions are clustered, respectively, consistent with Asian culture. The right figure is obtained from European data and is also able to demonstrate the country habitat similarities.
also illustrate dietary similarities in Europe, where geographically close countries tend to cluster, such as Iceland, Finland, and Norway. Similar to the Asian dataset, the absolute residuals here can pinpoint outlying entries like maize supply in Albania and olive oil in Greece and Spain. However, the scaled original data cannot provide this information, only indicating that wheat, milk, and sugar are in substantial demand across Europe.
### Trade flow dataset
We collected trade flow data from [https://comtradeplus.un.org/TradeFlow](https://comtradeplus.un.org/TradeFlow), containing the trading quantities among countries. The goods are categorized according to the HS code, which can be found at [https://www.foreign-trade.com/reference/hscode.htm](https://www.foreign-trade.com/reference/hscode.htm). We focus on the import data among 47 countries or regions. Specifically, 12 of the countries are from Asia, 17 from Europe, and 6 from the Americas.
The import amount is measured using the 'CIF value', and we examine the trade of all goods categories (encoded as HS codes 01-97) during the year 2018. This results in a \(47\times 47\times 97\) tensor, corresponding to \(\mathsf{Import\ Places}\times\mathsf{Export\ Places}\times\mathsf{Goods\ Category}\). After discarding the zero slices, we are left with a \(45\times 47\times 96\) tensor. Given that population size significantly influences the quantity of imported goods, we scale the 45 slices of the \(47\times 96\) matrices, ensuring each slice has a unit Frobenius norm. Consequently, each entry now represents the import proportion of certain goods from a specific country over the total import quantity. This scaled tensor can reflect a country's goods requirements or economic structure, and demonstrate whether two countries maintain a close trade relationship. We input this tensor into the RsGrad algorithm with a Tucker rank of \((r_{1},r_{2},r_{3})=(3,3,8)\), aiming to uncover the latent low-rank structure. Notably, the visualization is insensitive to rank selections: we have experimented with ranks in the region \((2,2,2)-(8,8,8)\), all of which produce similar outputs. Figure 7(a) and 7(b) display the leading three eigenvectors in the \(\mathsf{Import\ Places}\) direction. Countries from the Americas, Asia, and Europe are denoted with blue circles, red triangles, and cyan plus signs respectively. In both figures, European countries cluster together, while Asian countries merge with American countries. This outcome aligns with the fact that a significant amount of trade occurs within Europe (Cai et al., 2022b).
We also illustrate four slices of absolute residuals, corresponding to 'clocks and watches and parts thereof', 'glass and glassware', 'mineral fuels, mineral oils and products of their distillation; bituminous substances; mineral waxes', and 'printed books, newspapers, pictures and other products of the printing industry; manuscripts, typescripts and plans' (encoded as HS codes 91, 70, 27 and 49, respectively). In Figure 8(a), we observe that the import of glass and glassware from Portugal constitutes a significant portion of Spain's total imports. This is understandable given that Marinha Grande, a city in Portugal known as 'The Crystal City', is renowned for its glass
Figure 7: Slice of food supply measurement. The aforementioned figures illustrate that the scaled original data can indicate which types of food are in high demand. On the other hand, the outlying entries visible in the absolute residuals plot represent data that cannot be approximated by a low-rank structure, essentially indicating deviations from the pattern. This demonstrates the ability of our methods to uncover structures that may not be immediately discernible from the original data. Moreover, it underscores the robustness of the RsGrad method in handling outliers.
and glassware manufacturing. Figure 8(b) shows that the import proportions of clocks and watches from Switzerland are notably high in China and France, reflecting Switzerland's prestige in watch manufacturing. Figure 8(c) depicts the absolute residual plot in the mineral products slice, corroborating the fact that Norway is a major exporter of mineral fuels. Finally, Figure 8(d) reveals that the import of printed books and newspapers is significant in Germany, particularly from Austria and Switzerland.
|
2310.17490 | Improving Zero-shot Reader by Reducing Distractions from Irrelevant
Documents in Open-Domain Question Answering | Large language models (LLMs) enable zero-shot approaches in open-domain
question answering (ODQA), yet with limited advancements as the reader is
compared to the retriever. This study aims at the feasibility of a zero-shot
reader that addresses the challenges of computational cost and the need for
labeled data. We find that LLMs are distracted due to irrelevant documents in
the retrieved set and the overconfidence of the generated answers when they are
exploited as zero-shot readers. To tackle these problems, we mitigate the
impact of such documents via Distraction-aware Answer Selection (DAS) with a
negation-based instruction and score adjustment for proper answer selection.
Experimental results show that our approach successfully handles distraction
across diverse scenarios, enhancing the performance of zero-shot readers.
Furthermore, unlike supervised readers struggling with unseen data, zero-shot
readers demonstrate outstanding transferability without any training. | Sukmin Cho, Jeongyeon Seo, Soyeong Jeong, Jong C. Park | 2023-10-26T15:45:12Z | http://arxiv.org/abs/2310.17490v3 | # Improving Zero-shot Reader by Reducing Distractions
###### Abstract
Large language models (LLMs) enable zero-shot approaches in open-domain question answering (ODQA), yet with limited advancements as the reader is compared to the retriever. This study aims at the feasibility of a zero-shot reader that addresses the challenges of computational cost and the need for labeled data. We find that LLMs are distracted due to irrelevant documents in the retrieved set and the overconfidence of the generated answers when they are exploited as zero-shot readers. To tackle these problems, we mitigate the impact of such documents via **D**istraction-aware **A**nswer **S**election (DAS) with a negation-based instruction and score adjustment for proper answer selection. Experimental results show that our approach successfully handles distraction across diverse scenarios, enhancing the performance of zero-shot readers. Furthermore, unlike supervised readers struggling with unseen data, zero-shot readers demonstrate outstanding transferability without any training.
## 1 Introduction
Open domain question answering (ODQA) is a task for answering questions with the evidence documents fetched from a large corpus (Voorhees and Tice, 2000). A _retrieve-read_ framework has achieved remarkable performance in ODQA by fine-tuning the language models with labeled datasets (Lee et al., 2019; Karpukhin et al., 2020; Izacard and Grave, 2021). The emergence of large language models (LLMs) has enabled the exploration of zero-shot approaches in this framework, with less emphasis on the reader component (Sachan et al., 2022; Chuang et al., 2023; Levine et al., 2022).
Utilizing an LLM as a reader provides an advantage in the generalization ability with the rich world knowledge, unlike conventional small-sized supervised readers (Karpukhin et al., 2020; Izacard and Grave, 2021). While the supervised readers show remarkable performance on ODQA, they are hampered by two weaknesses: the high computational cost involved in training and the necessity of annotated query-document datasets. These limitations impede the transferability of readers to diverse tasks and domains. To solve this, we aim to validate the feasibility of using an LLM as a reader, leveraging its inherent advantages while reducing the aforementioned limitations.
However, the performance of an LLM in various tasks is easily distracted by irrelevant documents (Li et al., 2023; Shi et al., 2023), underscoring the importance of resolving these challenges in ODQA. The tendency of an LLM to generate incorrect answers becomes apparent when reading retrieved sets that include irrelevant documents. These documents, while related to the query, may lack the information necessary to answer it, leading to hallucination. This emphasizes the need to properly handle such documents in order to fully harness the potential of an LLM and achieve reliable performance as a reader. This paper addresses the need for hallucination mitigation in order to validate the feasibility of an LLM as a zero-shot reader.
In this paper, we propose **D**istraction-aware **A**nswer **S**election (DAS), handling the challenges posed by irrelevant documents and overconfident scores as shown in Figure 1. First, we provide
Figure 1: An overview of distraction from the irrelevant documents when exploiting LLM as a zero-shot reader.
models with an "unanswerable" instruction, allowing them to abstain from answering. Then, we adjust the answer scores with the query generation score, which reflects the relevance of each query-document pair. These approaches reduce the impact of irrelevant documents and improve the selection of the correct answer from the relevant document.
We evaluate our proposed method on representative ODQA benchmarks with two publicly open LLMs, FLAN-T5 Chung et al. (2022) and OPT-IML-MAX Iyer et al. (2022). Our method achieves substantial performance improvements over a naive LLM across all scenarios. Note that it effectively alleviates the hallucination induced by irrelevant documents by enhancing robustness to the number of documents read. Furthermore, an LLM with our method exhibits excellent transferability compared to the supervised reader, revealing the untapped potential of an LLM as a zero-shot reader.
Our contributions in this paper are threefold:
* We tackle the distraction incurred by irrelevant documents and overconfident scores when exploiting an LLM as a zero-shot reader in ODQA tasks.
* We introduce **D**istraction-aware **A**nswer **S**election (DAS) for a zero-shot reader, with the unanswerable instruction and the score adjustment eliciting its deductive ability.
* We empirically verify the efficacy of our proposed approach in effectively mitigating hallucination and unlocking the feasibility of zero-shot readers with a generalization ability.
## 2 Related Work
Zero-shot Approach in ODQA The advent of LLMs has shown that they can be used in both stages without parameter updates. For the retrieval stage, an LLM is exploited as a re-ranker via query generation or document permutation Sachan et al. (2022); Cho et al. (2023); Sun et al. (2023), or used to expand queries into diverse pseudo-queries that improve the performance of supervised retrievers Liu et al. (2022); Yu et al. (2023); Chuang et al. (2023). For the reader stage, Levine et al. (2022) attempted to utilize an LLM as a zero-shot reader, addressing the irrelevant documents through a re-ranker. In this study, we focus on a fully zero-shot reader without an additional module.
### Distraction-aware Answer Selection
We present a simple yet effective **D**istraction-aware **A**nswer **S**election (DAS) method for a zero-shot reader. We aim to reduce the negative impact of irrelevant documents in a two-step answering pipeline. Initially, we offer an option to refuse responses to irrelevant documents via an unanswerable instruction. To improve the final answer selection, we incorporate the relevance of the query-document pair into the scoring process.
Document Selection (D.S.) We utilize the unanswerable instruction to enhance the deduction capability by giving the model the option not to respond. We exclude responses that belong to the unanswerable response set \(U\) as follows:
\[S^{\prime}=\{(a_{i},d_{i})|a_{i}\notin U,(a_{i},d_{i})\in S\} \tag{1}\]
We construct an unanswerable response set \(U=\{\)"Unanswerable", "Answer not in context"\(\}\). The answers in \(U\) are judged unanswerable, as if the reader refuses to respond to the irrelevant documents.
Answer Selection (A.S.) Then, we adjust the answer score by multiplying it with the query generation score, which accounts for the query-document relevance. This is formulated as follows:
\[(a^{*},d^{\prime})=\operatorname*{arg\,max}_{(a^{\prime}_{i},d^{\prime}_{i}) \in S^{\prime}}P_{M}(a^{\prime}_{i}|q,d^{\prime}_{i},\rho_{rc})\cdot P_{M}(q|d^ {\prime}_{i},\rho_{qg}) \tag{2}\]
where \(\rho_{qg}\) denotes the query generation instruction.
The query generation score from the given document is computed as:
\[\log P(q|d)=\frac{1}{|q|}\sum_{t}\log P(q_{t}|q_{<t},d) \tag{3}\]
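A minimal sketch of DAS on top of Hugging Face Transformers is given below. The checkpoint, the prompt templates, and the use of length-normalized log-probabilities for both factors are our assumptions for illustration, not necessarily the paper's exact implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")
model.eval()

def avg_logprob(prompt: str, target: str) -> float:
    # Mean log-likelihood per target token: -loss for seq2seq models given labels.
    enc = tok(prompt, return_tensors="pt", truncation=True)
    labels = tok(target, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    return -out.loss.item()

UNANSWERABLE = {"unanswerable", "answer not in context"}
RC_PROMPT = ("Read the following context and answer the question. "
             "If you don't know the answer, return unanswerable.\n"
             "Context: {doc}\nQuestion: {q}\nAnswer:")
QG_PROMPT = "Generate a question for the following passage.\nPassage: {doc}\nQuestion:"

def das_select(question, candidates):
    # candidates: list of (answer, document) pairs, one per retrieved document.
    best, best_score = None, float("-inf")
    for answer, doc in candidates:
        if answer.strip().lower() in UNANSWERABLE:
            continue                                                # document selection, eq. (1)
        ans_lp = avg_logprob(RC_PROMPT.format(doc=doc, q=question), answer)
        qg_lp = avg_logprob(QG_PROMPT.format(doc=doc), question)    # eq. (3)
        score = ans_lp + qg_lp                                      # log of the product in eq. (2)
        if score > best_score:
            best, best_score = (answer, doc), score
    return best
```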
## 4 Experimental Setup
DatasetWe experiment on **Natural Question** (NQ) [15], **TriviaQA** (TQA) [16], **WebQuestions** (WebQ) [17] and **SQuAD**[15] (SQD). 1 For annotated evidence documents for query, the development sets of each dataset are used.
Footnote 1: Following the settings from Karpukhin et al. (2020), the English Wikipedia dump from Dec 20, 2018, is used.
RetrieverWe employ the representative sparse retriever, **BM25**[16], and the dense one, **DPR**[14].
| Retriever | Reader | Top-20 NQ | Top-20 TQA | Top-20 WebQ | Top-20 SQD | Top-100 NQ | Top-100 TQA | Top-100 WebQ | Top-100 SQD |
|---|---|---|---|---|---|---|---|---|---|
| BM25 | FLAN-T5-XL | 23.37 | 52.68 | 16.19 | 19.40 | 17.86 | 46.12 | 15.83 | 15.79 |
| BM25 | w/ DAS | 31.51 | **64.54** | 20.14 | **39.39** | 33.34 | **68.86** | **25.90** | **46.71** |
| BM25 | OPT-IML-MAX | 20.21 | 53.21 | 23.38 | 22.93 | 16.32 | 46.57 | 18.71 | 18.34 |
| BM25 | w/ DAS | 28.72 | 56.95 | 24.10 | 23.32 | 29.76 | 59.87 | 24.10 | 37.74 |
| DPR | FLAN-T5-XL | 22.43 | 47.44 | 20.50 | 12.85 | 15.90 | 39.17 | 16.55 | 10.30 |
| DPR | w/ DAS | **37.77** | 64.48 | **26.98** | 26.66 | **37.96** | 68.22 | 25.18 | 34.12 |
| DPR | OPT-IML-MAX | 23.28 | 50.24 | 21.58 | 16.03 | 16.65 | 43.67 | 19.42 | 14.47 |
| DPR | w/ DAS | 33.69 | 56.61 | **26.98** | 21.97 | 32.95 | 59.05 | 25.54 | 28.46 |

Table 1: EM accuracy of the final answer among the answer candidates generated from the top-\(k\) retrieved documents. The best scores are marked in **bold**.
| Reader Model | Train Set | NQ | TQA | SQD | RQA |
|---|---|---|---|---|---|
| DPR\({}^{\dagger}\) | Multi | 41.5 | 56.8 | 29.8 | - |
| DPR\({}^{\dagger}\) | NQ | 45.1 | 54.1 | 34.1 | 29.8 |
| DPR\({}^{\dagger}\) | TQA | 26.9 | 64.5 | 27.5 | 33.2 |
| FiD-large | NQ | 50.8 | 59.2 | 36.2 | 34.0 |
| FiD-large | TQA | 30.9 | 69.0 | 31.5 | 34.4 |
| FLAN-T5-XL w/ DAS | - | 34.0 | 57.2 | 43.5 | 35.8 |

Table 2: Comparison of ours against the supervised readers on the test set under the condition of exploiting DPR. \({\dagger}\) denotes the performance from its paper.
Figure 2: EM accuracy depending on the number of the documents retrieved by BM25 and DPR on TQA.
Language Model We select two publicly open LLMs: **1) FLAN-T5** Chung et al. (2022) is the family of T5 Raffel et al. (2020) with instruction tuning; **2) OPT-IML** Iyer et al. (2022) is the fine-tuned version of OPT Zhang et al. (2022) by instruction meta learning. We exploit FLAN-T5-XL containing 3B parameters and OPT-IML-MAX-1.3B in our main experiments.
Metrics In our evaluation, we employ the exact match (EM) accuracy metric to assess whether the reader generates the same answer as the annotated answer, after applying normalization techniques such as punctuation removal. We adhere to the same normalization process utilized in previous works Chen et al. (2017); Lee et al. (2019).
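For reference, a minimal sketch of this answer normalization and the EM computation is given below; it follows the common SQuAD-style convention (lowercasing, punctuation and article removal, whitespace collapsing) rather than the paper's exact code.

```python
import re
import string

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)   # drop English articles
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers) -> float:
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))
```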
Implementation Details The reading comprehension instruction is _"Read the following context and answer the question"_. We add _"If you don't know the answer, return unanswerable"_ for the unanswerable instruction, as mentioned in Sanh et al. (2022). Also, we compute the query generation score, following settings from Sachan et al. (2022). More details are in Appendix B.
## 5 Result
### Main Result
Table 1 demonstrates the significant performance improvements achieved by DAS regardless of retrievers, LLMs, and datasets. Our method achieves an increase in EM of 64% on average over the default reader, with a maximum improvement of 231%.
As the size of the retrieved set increases the likelihood of including relevant documents, the reader should be robust to the irrelevant documents that come with it. Nevertheless, the presence of distraction becomes apparent in the performance decline without DAS when processing more documents, as shown in Table 1 and Figure 2. Our method addresses this challenge by mitigating the negative impact of irrelevant documents, achieving an average EM improvement of 17% when reading 100 documents compared to 20. This shows the robustness of our approach in handling the problems stemming from irrelevant documents.
Also, we find that when reading 100 documents, the use of documents collected through BM25 has a more positive impact on the performance of the reader compared to documents from DPR. This finding is noteworthy, especially considering that DPR generally performs better in retriever tasks. When employing a zero-shot reader, it cannot be definitively concluded that better retrieval will necessarily lead to enhanced reader performance. More details are in Appendix C.
Comparison against Supervised Reader We directly compare with the supervised readers on the aforementioned datasets and an additional held-out dataset, RealTimeQA (RQA) Kasai et al. (2022). The queries of RQA are based on real-world information that is not included in the training data. As shown in Table 2, the zero-shot reader with our method shows robust performance compared to the supervised readers, DPR Karpukhin et al. (2020) and FiD Izacard and Grave (2021), which perform poorly on unseen data such as SQuAD and RQA. We highlight its potential as a valuable alternative that avoids the limitations and costs associated with supervised readers.
Figure 4: Distribution of answer pairs \(p^{*}\) based on document-query relevance and answer correctness.
| Reader | Correct Answer | Incorrect Answer | Total Answer |
|---|---|---|---|
| FLAN-T5-XL | 5.50 (5.50%) | 94.50 (94.50%) | 100 |
| w/ DAS | 2.89 (21.93%) | 10.27 (78.07%) | 13.16 |
| OPT-IML-MAX | 5.13 (5.13%) | 94.87 (94.87%) | 100 |
| w/ DAS | 2.85 (10.97%) | 23.14 (90.03%) | 26.00 |

Table 3: Average number of answers in the candidate set \(S\). The number in parentheses represents the proportion relative to the total number in \(S\).
Figure 3: EM accuracy depending on the model size. The exploited models are the families of FLAN-T5.
### Analysis
Our analysis is conducted on NQ with the top 100 documents retrieved by DPR with FLAN-T5-XL. Detailed analyses are provided in Appendix D.
Impact of Model Size We conduct experiments to assess the impact of model size on performance. As shown in Figure 3, even with smaller models, our method maximizes the performance of an LLM as a zero-shot reader. This indicates that our approach enables LLMs to function effectively as zero-shot readers, even without extremely large parameter sizes.
Answer Candidate Set We examine the effects of applying DAS on the answer candidate set \(S\), as presented in Table 3. Our findings highlight a remarkable shift in the distribution of answers, with changes of 16.43%p and 5.84%p observed for the two readers. The substantial increases in the ratio of correct answers demonstrate that our method effectively mitigates the inclusion of incorrect answers from irrelevant documents.
Final Answer Pair Figure 4 illustrates the distribution of the final answer pair \(p^{*}\). The results provide evidence that our method successfully selects documents that are relevant to the given query and extracts a higher number of correct answers from the relevant documents. Additionally, it shows a reduction of approximately 5% in the rate of incorrect answers generated from irrelevant documents.
## 6 Conclusion
In this paper, we propose **D**istraction-aware **A**nswer **S**election (DAS) to address the irrelevant documents in the retrieved set when an LLM is used as a zero-shot reader. To validate its capability, we characterize the hallucination caused by irrelevant documents and overconfident answer scores in the ODQA setting. Our method mitigates the impact of these factors by incorporating an unanswerable instruction and adjusting answer scores for better answer selection. Experimental results demonstrate the effectiveness of our proposal in handling hallucination across various scenarios, thereby improving performance on ODQA benchmarks. Our approach, utilizing an LLM, showcases strong generalization capabilities across diverse datasets, distinguishing it from supervised readers and highlighting the potential of a zero-shot reader.
## Limitations
Our methodology utilizes a two-step pipeline to enhance the performance of an LLM as a zero-shot reader, addressing hallucination issues while leveraging its capabilities. While our method fully elicits the inherent ability of an LLM as a zero-shot reader, its effectiveness depends on the capabilities and characteristics of the underlying LLM. For example, the prompt sensitivity of an LLM is an important aspect to consider, as different prompts may lead to varying results. Also, the performance of an LLM is size-dependent. Although our experiments have yielded consistent results in numerous cases, further investigation is required to evaluate our approach with larger LLMs. Despite these limitations, the zero-shot approach holds great promise in terms of cost-effectiveness and leveraging abundant world knowledge. As LLMs continue to advance, we expect even greater improvements in performance over state-of-the-art supervised readers.
## Ethics Statement
We acknowledge the possibility of bias or offensive answer sets in utilizing an LLM as a zero-shot reader. Since this paper primarily focuses on mitigating the impact of irrelevant documents in ODQA without parametric updates, addressing the issue of bias and offensive language within an LLM is beyond the scope of our paper. We are aware that ongoing research and efforts are being made by researchers to address these concerns and improve the ethical aspects of LLMs. It is expected that future advancements and research in the field will contribute to addressing these biases and ensuring an ethical use of LLMs.
## Acknowledgements
This work was supported by an Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (No. 2018-0-00582, Prediction and augmentation of the credibility distribution via linguistic analysis and automated evidence document collection). This work was also supported by the Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (MSIT, Korea) & Gwangju Metropolitan City. |
2302.07492 | Envisioning the Next-Gen Document Reader | People read digital documents on a daily basis to share, exchange, and
understand information in electronic settings. However, current document
readers create a static, isolated reading experience, which does not support
users' goals of gaining more knowledge and performing additional tasks through
document interaction. In this work, we present our vision for the next-gen
document reader that strives to enhance user understanding and create a more
connected, trustworthy information experience. We describe 18 NLP-powered
features to add to existing document readers and propose a novel plug-in
marketplace that allows users to further customize their reading experience, as
demonstrated through 3 exploratory UI prototypes available at
https://github.com/catherinesyeh/nextgen-prototypes | Catherine Yeh, Nedim Lipka, Franck Dernoncourt | 2023-02-15T06:43:12Z | http://arxiv.org/abs/2302.07492v1 | # Envisioning the Next-Gen Document Reader
###### Abstract
People read digital documents on a daily basis to share, exchange, and understand information in electronic settings. However, current document readers create a static, isolated reading experience, which does not support users' goals of gaining more knowledge and performing additional tasks through document interaction. In this work, we present our vision for the next-gen document reader that strives to enhance user understanding and create a more connected, trustworthy information experience. We describe 18 NLP-powered features to add to existing document readers and propose a novel plug-in marketplace that allows users to further customize their reading experience, as demonstrated through 3 exploratory UI prototypes available at: github.com/catherinesyeh/nextgen-prototypes.
## Introduction
Digital documents (e.g., portable document format (PDF) files or Word documents) are a popular format for sharing, exchanging, and understanding information in electronic settings. Reading such documents is an integral part of countless people's daily routines, and many choose to engage with these files through document readers such as _Adobe Acrobat_, _Foxit_, and _Sumatra PDF_. However, with these current applications, document reading can feel relatively static and isolated, as the reading experience is usually confined to within the document reader itself. Additionally, there is typically not much interaction between the user and the information they are reading when scrolling through a digital document.
This presents a problem as documents themselves are usually not the end goal for users. Rather, they represent a starting point for people to gain more knowledge or perform additional actions. Thus, in this work, we present our **vision for the next-gen document reader** that strives to better support users in achieving their goals through harnessing the power of natural language processing (NLP). We design this next-gen document reader to 1) enhance user understanding of digital files and 2) transform currently static, isolated documents into connected, trustworthy, and interactive sources of information.
The key contributions of our work include:
* A set of proposed NLP-powered plug-ins to add to existing document readers toward enhancing human-document interaction, including 12 **open-domain** and 6 **domain-specific** features.
* A preliminary vision for a centralized **plug-in marketplace** that would allow further customization of the user experience in document readers and feature development to be outsourced.
* 3 exploratory **UI prototypes** illustrating a subset of features and the plug-in marketplace proposed for the next-gen document reader (github.com/catherinesyeh/nextgen-prototypes).
## Related Work
While older document readers such as _Adobe Acrobat_, _Foxit_, and _Sumatra PDF_ tend to only support static, in-document features, recent NLP efforts are beginning to explore the possibility of creating a more connected, trustworthy information experience for users.
For example, _ScholarPhi_[1] strives to improve the readability of scientific papers by creating an augmented reading interface with features such as position-sensitive definitions, a decluttering filter, and an automatically generated glossary for the important terms and symbols. Similarly, _Paper Plain_[1] is an interactive interface that aims to make medical research papers more accessible with its definition feature, section gists, and Q & A passages. _Scim_[12] is another AI-augmented document reader that helps researchers skim scientific papers by automatically identifying, classifying, and highlighting salient sentences.
_Sioyek_[21], a document viewer designed for reading technical books and research papers, has some interesting features such as smart jump for references and figures, searchable bookmarks, and portals to display linked information in a separate window. _Explainpaper_ is a novel AI-powered reading interface for reading academic papers as well, offering live explanations to users upon highlighting sections of text and an interactive Q & A feature. However, these works are currently very limited in their features and scope.
## Vision
Our vision for the next-gen document reader includes the following components as illustrated in Figure 1:
* A set of **open-domain** features that can enhance the document reading experience for various document types,
* A set of features that are more **domain-specific**, and
* A centralized **plug-in marketplace** that would allow users to further customize document readers with additional features.
Throughout this paper, we use plug-in and feature synonymously to mean any software add-on that serves to extend the core functionality of a static document reader. Once a plug-in is installed, it could be toggled on/off by users and when active, the plug-in could be accessed through a **contextual pop-up menu**. Figure 2 illustrates how this type of menu could work with a sample PDF cake recipe. In this UI prototype, the unit conversion plug-in is toggled on, but the corresponding tooltip icon only appears if relevant text is selected (i.e., text containing numerical values). Ideally, the document reader would also automatically identify the correct unit of measurement selected by the user and auto-populate this information into the pop-up conversion tool.
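To make this behavior concrete, the following minimal Python sketch (illustrative only; the class names, plug-in identifiers, and unit list are placeholders and are not taken from the prototypes) shows one way the global toggle state and the relevance check behind the contextual tooltip could be wired together:

```python
import re
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Plugin:
    name: str
    enabled: bool
    is_relevant: Callable[[str], bool]   # decides whether the tooltip applies to the selection

@dataclass
class ContextMenu:
    plugins: List[Plugin] = field(default_factory=list)

    def toggle(self, name: str, on: bool) -> None:
        # Global on/off switch for an installed plug-in
        for p in self.plugins:
            if p.name == name:
                p.enabled = on

    def tooltips_for(self, selection: str) -> List[str]:
        # Only active plug-ins whose relevance check passes contribute a tooltip icon
        return [p.name for p in self.plugins if p.enabled and p.is_relevant(selection)]

# Hypothetical relevance check for the unit-conversion plug-in: the icon appears
# only if the selected text contains a numeric value followed by a known unit.
UNIT = re.compile(r"\d+(\.\d+)?\s*(g|kg|ml|l|cups?|oz|tsp|tbsp)\b", re.IGNORECASE)

menu = ContextMenu([
    Plugin("unit-conversion", True, lambda s: bool(UNIT.search(s))),
    Plugin("definitions", False, lambda s: len(s.split()) == 1),
])
print(menu.tooltips_for("225g self-raising flour"))   # ['unit-conversion']
print(menu.tooltips_for("preheat the oven"))          # []
```

In this sketch the menu stays uncluttered by construction: a disabled plug-in never appears, and an enabled one appears only when its own relevance test fires on the current selection.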
In the following sections, we provide more details about our proposed features and plug-in marketplace for the next-gen document reader.
## Features
To begin the design process, we brainstormed features that would be helpful to add to static document readers such as _Acrobat_ or _Foxit_, focusing on features that can leverage NLP. During this stage, we surveyed the literature and investigated existing plug-ins supported by newer document viewers [1, 12, 13] as described in the Related Work section. Some ideas were also contributed by peers and collaborators.
This process resulted in 26 potential feature suggestions, which we narrowed down to 18 based on feasibility of implementation. These ideas were then categorized by domain
\begin{table}
\begin{tabular}{l l} \hline \hline
**Open-Domain** & \\ \hline Definitions & Equation Exporting \\ Acronyms/Abbreviations & Speed Reading \\ Unit Conversions & Sentiment Analysis \\ Translations & Form Auto-Fill \\ Spelling Changes & Scholar Notes \\ Table Copying & Shared Commenting \\ \hline
**Domain-Specific** & \\ \hline Linkifying Known Entities & Smart Jumps \\ Linkifying Relevant Content & Citation Warnings \\ Action Tasks & Portals \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of open-domain and domain-specific features proposed for the next-gen document reader
Figure 2: UI prototype demonstrating our contextual pop-up menu concept. Users can a) toggle installed plug-ins on or off and when active, b) the corresponding plug-in tooltip only appears if relevant text is selected. _(Recipe source: BBC Good Food)_
type, ultimately yielding 12 **open-domain** and 6 **domain-specific** features. A list of all proposed features is included in Table 1. Selected features are highlighted below using our second UI prototype of a PDF pizza restaurant menu (as previewed in Figure 1).
**Open-Domain Features.** When a single word is highlighted, document readers could show users potential **definitions** of the term (Figure 3), similar to [11, 10]. The displayed definitions could be retrieved from the document itself [12, 10], from online sources as in [11], or crowd-sourced.
Similarly, the long forms of **acronyms/abbreviations** could be shown to the user via pop-up tool tips (Figure 4). If the acronym is defined elsewhere in the paper, we could take a similar approach to [12, 10, 11] for extracting definitions; otherwise, retrieving it from online sources is also possible. A list of key definitions and acronyms could be included at the beginning or the end of the document as well, following [12]. The definition of math symbols could also be extracted from the text [13, 10].
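As a rough illustration of how in-document long forms could be recovered, the snippet below uses a simplified variant of the classic Schwartz-Hearst matching idea (this is a sketch for intuition, not the extraction method of the cited systems; real implementations handle many more patterns):

```python
import re

def find_acronym_definitions(text: str) -> dict:
    """Pair acronyms with long forms written as 'long form (ACRONYM)'.

    Simplified take on the Schwartz-Hearst heuristic: the i-th acronym letter
    must match the initial of the i-th word immediately preceding the parentheses.
    """
    definitions = {}
    for match in re.finditer(r"\(([A-Z]{2,6})\)", text):
        acronym = match.group(1)
        words = text[:match.start()].split()
        candidate = words[-len(acronym):]
        if len(candidate) == len(acronym) and all(
            w[0].upper() == c for w, c in zip(candidate, acronym)
        ):
            definitions[acronym] = " ".join(candidate)
    return definitions

sample = ("The portable document format (PDF) is rendered by the "
          "graphics processing unit (GPU).")
print(find_acronym_definitions(sample))
# {'PDF': 'portable document format', 'GPU': 'graphics processing unit'}
```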
**Unit conversions** could be handled either on a case-by-case basis (i.e., users highlight specific numbers to convert, like definitions/acronyms) or at the document level (i.e., the document reader automatically converts all units at once). Figure 2 shows the former option, assuming that unit selection is embedded inside of the plug-in pop-up. Allowing users to select their unit of choice via the main document reader toolbar (see Figure 5) would be another possibility.
As with unit conversions, there could be a toolbar option at the top of document readers that would allow users to choose the language they want to read the document in. Alternatively, **translations** could be performed on a more case-by-case basis. The former could be similar to the Google Translate browser extension. Figure 6 illustrates the latter option. Related to translation is the idea of automatically suggesting **spelling changes** based on the current document language. For example, the spellings in a document could automatically be converted from American to British English (e.g., color \(\rightarrow\) colour).
Document readers could also provide users with the capability of directly copying tables from files like PDFs into Microsoft Word, Excel, Markdown, etc. to further manipulate, share or analyze. This **table copying** feature would reduce the need for hand transcribing and combining data from multiple tables in digital files. There could also be a selection hierarchy, allowing users to select specific parts of a table (e.g., a single cell, row, column, etc.) or the whole table itself. A version of this feature is included in the _Adobe PDF Extract API_, but currently, tables can only be exported to CSV formats, so there is room for extension and it is still missing from all the PDF readers we surveyed. Similarly, an **equation exporting** plug-in could allow users to export math equations present in digital documents to their corresponding LaTeX formulas so they are directly editable. Implementing this feature would be possible using image-to-latex algorithms [1, 10, 11].
Another way to enhance the document reading experience could be including a **speed-reading** plug-in that would allow users to customize the speed at which they read text in document readers, similar to the service offered by _Spritz_. Additionally, a **sentiment analysis** feature could allow users to assess sentiment at the document level and potentially at the sentence level as well. Sentiment classification would be useful for a wide variety of document types, particularly when it is beneficial to understand a document's tone/attitude. Some approaches for document-level
Figure 4: Example acronym feature
Figure 5: Main toolbar in Adobe Acrobat
Figure 3: Example definition feature
Figure 6: Example translation feature
sentiment analysis have been proposed by [15, 16].
For certain documents such as history books, scientific papers, and poems, reading applications could also offer **scholar notes**. As an example, the document reader could include a critic's analysis of a text (e.g., in the sidebar) and/or their annotations throughout the file as comments that the user could view. The notes could be distributed via a marketplace, and some of them could be set as paid access only if monetization is of interest. Users may be willing to pay to get access to the meta-information given by a scholar in the field to better understand the text itself, its historical context, the equations, potential errors, the author's mindset at the time of the writing, and so on. A related feature idea is allowing users to leave **shared comments** in digital documents. For example, users could highlight a sentence or figure and then create a thread for further discussion (e.g., asking a question, offering clarification, etc.), which could open up in a sidebar.
These human-in-the-loop features could help make up for the imperfections and the limitations of other AI-powered document plug-ins. However, the main challenges with implementing such features would be moderating/filtering the user content and respecting users' privacy (e.g., we do not want a user to mistakenly post their comments as public if they did not intend to).
**Domain-Specific Features.** Document readers could also incorporate domain-specific features such as **linking text to known entities**. For example, addresses or business names could be automatically linked to Google Maps and phone numbers could be linked to an app/website for further action, as Google Chrome or Android currently does. The former is illustrated in Figure 7. Other ideas include linking protein names to the Protein Data Bank for biology documents, linking references to their Google Scholar entry in scholarly articles, or linking ticker symbols to their Yahoo Finance pages (e.g., finance.yahoo.com/quote/ADBE \(\rightarrow\) ADBE) for finance documents.
Similarly, document readers could **link text to relevant content**. This is a trickier task than linkifying known entities, but the required extrapolation may be feasible in certain cases. For instance, if a file is identified as a restaurant menu, the document reader could link to the corresponding Yelp or Google reviews page so users could see more pictures/reviews of different items (Figure 8). Or, if a movie title is identified inside a document, links to available movie times or streaming platforms could be generated. Another possibility would be searching selected keywords/phrases in a search engine or e-commerce website (e.g., Amazon, Alibaba, etc.) to see related products; _Axesso Amazon API_ has implemented one such keyword search feature. Ultimately, this feature could be similar to how YouTube recommends products based on the videos a user watches.
Additionally, we could identify and create **action tasks** that users could complete within the document reader itself. For example, if a date/deadline is identified in the text, users could be given the option to add it to their calendar. Similarly, if a payment is mentioned, users could have the option to pay directly inside of the document. Or, if there is language such as "You should notify X..." or "Please reach out to Y..." in a digital file, it might be helpful to give users the ability to send messages/emails from the document reader as well. In general, these tasks could be accessed via pop-up icons throughout the document, but there could also be an overall list on the sidebar, for instance.
For documents like academic papers or textbooks, a **smart jump** feature (terminology from _Sioyek_) could be offered that allows users to jump to any referenced figure or reference in the document, even if links are not explicitly provided. A similar feature has already been implemented by _Sioyek_. Currently, _Sioyek_'s smart jump feature automatically links references to their Google Scholar page as well, connecting to our idea for linkifying known entities described above. Another related idea would be including a **citation warning** feature that displays a warning to users when a citation in a document has been retracted [17, 1].
One last domain-specific feature would be **portals** (terminology from _Sioyek_). This feature would allow users to link figures to specific paragraph locations so they can view them simultaneously on a separate monitor/window, as
Figure 8: Example restaurant linkifying feature
Figure 7: Example address linkifying feature
implemented by _Sioyek's_ portal feature (Figure 9). Such "portals" would enhance the reading experience by removing the need to scroll back and forth in a document to find the relevant figures for each section of text, which could especially be helpful for academic/scientific papers.
### Plug-in Marketplace
In the previous sections, we describe many potential plug-ins for the next-gen document reader. However, the average user will not need all these features when engaging with digital files. Thus, to allow users to further customize their document reading experience and choose which features they want to use, we propose the creation of a centralized **plug-in marketplace**. This way, document readers could come with a few default plug-ins (e.g., definitions/acronyms or other open-domain features that could be useful for most document types and users) and users could add more via the marketplace if they wanted to.
Having a plug-in marketplace would also prevent document readers from growing excessively in terms of size and computational requirements. Instead, the user would individually decide which plug-ins to install and use, just like in virtually all modern code and text editors.
#### Key Features
An example of what such a marketplace might look like is illustrated in Figure 10. To our knowledge, there currently exist no plug-in marketplaces for document readers such as _Adobe Acrobat_, _Foxit_, and _Sumatra PDF_. Consequently, many features we include in our UI prototype are inspired by the marketplaces for integrated development environments (IDEs) like _Visual Studio_, _IntelliJ_, and _Eclipse_ as well as marketplaces for text editors like _Emacs_, _Notepad++_, and _Sublime Text_.
On the main marketplace page (Figure 10), we envision a space where users can **discover** new features with the search bar or featured plug-ins section, which could highlight the newest or most popular plug-ins. Plug-ins could also be tagged (e.g., by domain) and reviewed to allow users to **filter** and **sort** the results. Inside the marketplace, users would have the option to **(un)install** plugins at any time.
Each user could also have their own "My Plugins" page inside the plug-in marketplace, as shown in Figure 11. On this page, users would again have the ability to search for, filter, sort, and uninstall plugins. In addition, users could **pin** their favorite plug-ins for easy access. Here, users would be able to globally **toggle plugins on/off** as well. There could also be an option to turn plugins on/off at a document level, illustrated in Figure 2(a).
Another feature that could be added within the plug-in marketplace is a **feedback/feature request** page, where users could submit general feedback about the marketplace or propose new ideas for features to add. Along these lines, several IDEs/text editors include a community **forum** (e.g., eclipse.org/forums) where users can openly discuss topics and ask questions about different plug-ins, so implementing a similar discussion platform for document readers could also be valuable.
Figure 10: UI Prototype of our proposed plug-in marketplace for the next-gen document reader. Users can discover new plug-ins via the search bar or featured section, filter or sort the results, and (un)install plug-ins at any time.
## Future Work
This work represents a preliminary, exploratory vision for the next-gen document reader. The next steps include working toward concretizing our ideas and assessing the viability of implementation. Specifically, we hope to conduct formal **user studies** to collect additional feedback on our vision, further hone the proposed designs, and better understand which features would be most useful to end-users. Through these user studies, we may also generate additional ideas for potential document reader plugins.
Further out in the future, we could also consider more **complex features**. For example, a filtering option would help readers focus on only the most relevant parts of the document, similar to the declutter feature from [1]. Similarly, fact-checking sentences and displaying a warning symbol next to text containing incorrect facts would be extremely valuable. Other complex features for consideration include summarization [11, 12], section title generation [13, 14], key sentence highlighting [15, 16], and question-answering [17, 18]. These ideas are more challenging to realize at the moment and may not be mature enough to be released to the general public, but recent progress in large language models is making some of these features more achievable [15, 16].
## Conclusion
In this paper, we present our **vision for the next-gen document reader** that will transform static, isolated documents into connected, trustworthy, interactive sources of information. This vision includes 12 **open-domain** and 6 **domain-specific** features powered by NLP, which can be accessed by the user through contextual plug-in pop-up menus while reading digital files. To allow users to customize their reading experiences with document readers, we also propose a centralized **plug-in marketplace** inspired by modern IDEs and text editors. Next steps include conducting formal user studies to further hone our UI prototypes (github.com/catherinesyeh/nextgen-prototypes) and vision, while also considering additional complex features to improve user-document interaction such as filtering or question-answering. We hope this work inspires and excites others about the future of document readers.
|
2303.08488 | Hot-electron resonant terahertz bolometric detection in the
graphene/black-AsP field-effect transistors with a floating gate | We evaluate the terahertz (THz) detectors based on field effect transistor
(FET) with the graphene channel (GC) and a floating metal gate (MG) separated
from the GC by a black-phosphorus (b-P) or black-arsenic (b-As) barrier layer
(BL). The operation of these GC-FETs is associated with the heating of the
two-dimensional electron gas in the GC by impinging THz radiation leading to
thermionic emission of the hot electrons from the GC to the MG. This results in
the variation of the floating gate potential, which affects the source-drain
current. At the THz radiation frequencies close to the plasmonic resonance
frequencies in the gated GC, the variation of the source-drain current and,
hence, the detector responsivity can be resonantly large. | V. Ryzhii, C. Tang, T. Otsuji, M. Ryzhii, V. Mitin, M. S. Shur | 2023-03-15T09:45:12Z | http://arxiv.org/abs/2303.08488v2 | Hot-electron resonant terahertz bolometric detection in the graphene/black-AsP field-effect transistors with a floating gate
###### Abstract
We evaluate the terahertz (THz) detectors based on field effect transistor (FET) with the graphene channel (GC) and a floating metal gate (MG) separated from the GC by a black-phosphorus (b-P) or black-arsenic (b-As) barrier layer (BL). The operation of these GC-FETs is associated with the heating of the two-dimensional electron gas in the GC by impinging THz radiation leading to thermionic emission of the hot electrons from the GC to the MG. This results in the variation of the floating gate potential, which affects the source-drain current. At the THz radiation frequencies close to the plasmonic resonance frequencies in the gated GC, the variation of the source-drain current and, hence, the detector responsivity can be resonantly large.
## I Introduction
The specific properties of graphene channels (GCs) [1; 2; 3] and black-P (b-P), black-As (b-As), or black-AsP (b-AsP) layers [4; 5; 6; 7; 8] open up prospects for devices based on the GCs (see, for example, the review [9]) and on the GC/b-AsP heterostructures [6], including the electron devices using the real-space transfer over the b-AsP layers [10; 11] and different optoelectronic devices [12; 13; 14; 15]. Due to relatively low energy barriers for the electrons and holes at the GC/b-AsP interface, the thermionic emission through such an interface can be effective, particularly, enabling the creation of the GC/b-AsP bolometric terahertz (THz) detectors.
In this paper, we evaluate the characteristics of the bolometric detectors based on the field-effect transistor (FET) structures with the GC, b-AsP barrier layer (BL), and floating metal gate (MG). Similar GC/b-AsP FET detectors were recently proposed and analyzed by us [16]. The principal difference between the GC/b-AsP FETs considered previously and the GC/b-AsP FETs under consideration here is the floating MG. The idea of using MG in graphene bolometers has been applied to pyroelectric graphene mid-infrared detectors. In these detectors, pyroelectric substrate charge is collected by a floating gate [17].
In contrast, the operation of the bolometric detectors considered in this paper is associated with the thermionic emission of the electrons heated by the impinging THz radiation from the GC into the MG via the b-AsP BL. However, contrary to the devices studied in [16], in which the gate current serves as the detected signal, in the detector with the floating MG considered here the detected signal is associated with the variations of the source-drain current in the GC stimulated by the varying potential of the MG. The potential of the latter is controlled by the thermionic emission from the GC, which reinforces with increasing THz power. This effect can become rather strong at the plasmonic oscillations resonantly excited by the impinging THz radiation in the gated GC [18; 19; 20; 21; 22]. The features of the GC-FET detector operation with the floating gate require the development of a fairly different device model. Using this model, we calculate the signal current and the detector responsivity as functions of the structural parameters. As demonstrated, the floating gate GC/b-AsP FET detectors might exhibit elevated values of the responsivity, particularly, at the plasmonic resonances. We also compare the performance of GC/b-AsP FET detectors with the floating and biased MGs.
Figure 1: Schematic view of the GC/b-AsP FET detectors (a) with the floating MG and (b) with the MG biased by the gate voltage \(V_{G}\)[16].
## II Electron transport
We consider the GC/b-AsP FETs with the floating MG and the b-AsP gate BL. Figures 1 and 2 schematically show the cross-section of the device structures and their band diagrams. The bias voltage \(V_{SD}\) and the signal voltage \(\delta V_{\omega}\) are applied between the FET source and drain as shown in Fig. 1(a). The signal voltage is produced by an antenna receiving the impinging THz radiation with the frequency \(\omega\). The GC of the FETs is doped by donors. For definiteness, the work functions of the gate metal and the b-AsP BL, and the GC doping level (the electron Fermi energy \(\mu_{D}\) in the GC in equilibrium when no bias is applied) are chosen to provide the band alignment in the equilibrium. This corresponds to \(\Delta_{M}=\Delta_{C}-\mu_{D}\), where \(\Delta_{M}\) and \(\Delta_{C}\) are the differences between the work functions of the gate metal and the b-AsP BL and between the b-AsP BL and the GC.
At the source-drain bias voltage \(V_{SD}\) and the THz irradiation, the source-drain current, \(J_{SD}\), and the electron effective temperature, \(T\), in the GC averaged over the THz radiation period \(2\pi/\omega\) can be presented as
\[J_{SD}=\overline{J}_{SD}+\langle\delta J_{\omega}\rangle,\qquad T=\overline{T} +\langle\delta T_{\omega}\rangle. \tag{1}\]
Here \(\overline{J}_{SD}=J_{0}+\Delta\overline{J}_{DC}\) and \(\overline{T}=T_{0}+\Delta\overline{T}\), \(J_{0}\) is the source-drain current at the 2DEG effective temperature equal to the lattice \(T_{0}\), \(\Delta\overline{J}_{DC}\) and \(\Delta\overline{T}\) are the pertinent current and temperature variations, and \(\langle\delta J_{SD}\rangle\) and \(\langle\delta T_{\omega}\rangle\) are the variations caused by the source-drain bias voltage \(V_{SD}\) and the signal voltage \(\delta V_{\omega}\).
The source-drain current \(J_{SD}\) per unit of the GC width is governed by the following equations:
\[\frac{dJ_{SD}}{dx}=-j,\qquad J_{SD}=-\sigma\frac{d\varphi}{dx}, \tag{2}\]
where \(j\) is the density of the thermionic current between the GC and the MG, \(\sigma=\sigma_{D}(\mu/\mu_{D})\) and \(\sigma_{D}=(e^{2}\mu_{D}/\pi\hbar^{2}\nu)\) is the electron Drude conductivity in equilibrium with \(\nu\) being the characteristic electron scattering frequency in the GC (the inverse electron momentum relaxation time), and \(\mu\) is the electron Fermi energy, which generally differs from \(\mu_{D}\) due to the MG charging.
The averaged GC potential \(\varphi\) (dependent on the coordinate \(x\) directed along the GC) satisfies the following conditions at the source and drain contacts:
\[\varphi|_{x=\pm L}=\pm\frac{V_{SD}}{2}, \tag{3}\]
where \(2L\) is the spacing between the contacts (the GC length). Considering the difference between the GC potential \(\varphi\) and the MG potential \(\varphi_{G}\) and accounting for the quantum capacitance [23; 24] of the gated GC, at not too-large potential swing (\(\varphi-\varphi_{G}\)) we obtain
\[\mu\simeq\mu_{D}-\varkappa\,e(\varphi-\varphi_{G}). \tag{4}\]
Here \(\varkappa=\mu_{0}/(\mu_{0}+\mu_{D})\), \(\mu_{0}=\kappa\,\hbar^{2}v_{W}^{2}/8e^{2}W\), \(\kappa\) is the BL dielectric constant, and \(v_{W}\simeq 10^{8}\) cm/s is the electron velocity in GCs. This implies that an increase in \(\varphi\) leads to an increase in the 2DEG density and, hence, its Fermi energy \(\mu\). The contribution of the quantum capacitance to Eq. (4) is characterized by a factor \(\mu_{0}/\mu_{D}\propto W^{-1}\).
Since the MG is disconnected (floating MG),
\[\int_{-L}^{L}dxj=0. \tag{5}\]
Due to the trapezoid shape of the barrier between the GC and the MG, the potential barrier heights for the electron emitted from the GC and the MG, \(\Delta_{BL}^{\leftarrow}\) and \(\Delta_{BL}^{\rightarrow}\), are equal to:
\(\Delta_{BL}^{\leftarrow}=\Delta_{M}+e(\varphi-\varphi_{G})\) and \(\Delta_{BL}^{\rightarrow}=\Delta_{M}\) for \(\varphi>\varphi_{G}\), and
\(\Delta_{BL}^{\leftarrow}=\Delta_{M}-(\mu-\mu_{D})\) and \(\Delta_{BL}^{\rightarrow}=\Delta_{M}-(\mu-\mu_{D})-e(\varphi-\varphi_{G})\) for \(\varphi<\varphi_{G}\).
In this situation, the density of the thermionic current, \(j\), between the GC and the MG is given by
\[j=j^{m}\biggl{[}\exp\biggl{(}-\frac{\Delta_{M}+e(\varphi-\varphi _{G})}{T}\biggr{)}\] \[-\exp\biggl{(}-\frac{\Delta_{M}}{T_{0}}\biggr{)}\biggr{]} \tag{6}\]
when \(\varphi-\varphi_{G}>0\), and
\[j=j^{m}\biggl{[}\exp\biggl{(}-\frac{\Delta_{M}-(\mu-\mu_{D})}{T} \biggr{)}\] \[-\exp\biggl{(}-\frac{\Delta_{M}-(\mu-\mu_{D})-e(\varphi-\varphi_{G })}{T_{0}}\biggr{)}\biggr{]} \tag{7}\]
when \(\varphi-\varphi_{G}<0\). Here \(j^{m}\simeq e\Sigma/\tau_{\perp}\) is the maximum current density, \(\Sigma\) is the electron density in the G-channel,
Figure 2: Band diagrams of GC/b-AsP FET detector with the floating MG [shown in Fig. 1(a)] near (a) the source (\(x\gtrsim-L\), the GC potential \(\varphi-\varphi_{G}<0\)) and (b) the drain (\(x\lesssim L\), the GC potential \(\varphi-\varphi_{G}>0\)).
and \(\tau_{\perp}\) is the characteristic try-to-escape time from the G-channel. From Eqs. (2), (4), (6), and (7), we obtain
\[-\frac{L^{2}}{\mu_{D}}\frac{d}{dx}\biggl{[}\biggl{(}1-\varkappa\, \frac{e(\varphi-\varphi_{G})}{\mu_{D}}\biggr{)}\frac{d\varphi}{dx}\biggr{]}\] \[=\eta\biggl{[}\exp\biggl{(}\frac{\Delta_{M}(T-T_{0})}{T_{0}T} \biggr{)}\exp\biggl{(}-\frac{e(\varphi-\varphi_{G})}{T}\biggr{)}-1\biggr{]} \tag{8}\]
when \(\varphi-\varphi_{G}>0\), and
\[-\frac{L^{2}}{\mu_{D}}\frac{d}{dx}\biggl{[}\biggl{(}1-\varkappa\, \frac{e(\varphi-\varphi_{G})}{\mu_{D}}\biggr{)}\frac{d\varphi}{dx}\biggr{]}\] \[=\eta\biggl{[}\exp\biggl{(}\frac{\Delta_{M}(T-T_{0})}{T_{0}T} \biggr{)}\exp\biggl{(}-\varkappa\frac{e(\varphi-\varphi_{G})}{T}\biggr{)}\] \[-\exp\biggl{(}(1-\varkappa)\frac{e(\varphi-\varphi_{G})}{T_{0}} \biggr{)}\biggr{]} \tag{9}\]
when \(\varphi-\varphi_{G}<0\).
Here
\[\eta=\frac{ej^{m}}{\mu_{D}\sigma_{D}}\exp\biggl{(}-\frac{\Delta_{M}}{T_{0}} \biggr{)}=\frac{\nu\,L^{2}}{\tau_{\perp}v_{W}^{2}}\exp\biggl{(}-\frac{\Delta_{ M}}{T_{0}}\biggr{)}. \tag{10}\]
Setting, for example, \(\Delta_{M}=85\) meV, \(T_{0}=25\) meV, \(\nu=1\) ps\({}^{-1}\), \(\tau_{\perp}=10\) ps, \(L=1.0\)\(\mu\)m, \(\kappa=4-6\), \(W=10\) nm, and \(\mu_{D}=140\) meV, we obtain \(\eta\simeq(3.3)\times 10^{-3}\) and \(\varkappa\simeq 0.088-0.127\) [\(\mu_{0}\simeq(13.6-20.4)\) meV].
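For reproducibility, the estimates of \(\eta\) from Eq. (10) and of \(\varkappa\) from its definition can be re-evaluated directly with the parameter values quoted above; the short script below is only a verification aid (the \(\mu_{0}\) values are the ones given in the text):

```python
import numpy as np

Delta_M = 85.0        # meV
T0 = 25.0             # meV (room temperature)
nu = 1.0e12           # 1/s, electron scattering frequency (1 ps^-1)
tau_perp = 10.0e-12   # s, try-to-escape time
L = 1.0e-4            # cm (1 um)
v_W = 1.0e8           # cm/s
mu_D = 140.0          # meV

# Eq. (10): eta = nu L^2/(tau_perp v_W^2) exp(-Delta_M/T0)
eta = nu * L**2 / (tau_perp * v_W**2) * np.exp(-Delta_M / T0)
print(f"eta = {eta:.2e}")                       # ~3.3e-03

# varkappa = mu_0/(mu_0 + mu_D), using the mu_0 range quoted above
for mu0 in (13.6, 20.4):
    print(f"mu_0 = {mu0} meV -> varkappa = {mu0 / (mu0 + mu_D):.3f}")
# ~0.089 and ~0.127, consistent with the quoted 0.088-0.127
```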
At low or moderate bias source-drain voltages and THz radiation powers, \(\psi\) and \(|T-T_{0}|/T_{0}\) are small. In this case, linearizing Eqs. (8) and (9), we arrive at
\[L^{2}\frac{d^{2}}{dx^{2}}\biggl{[}e(\varphi-\varphi_{G})+\frac{ \varkappa}{2}\frac{e^{2}(\varphi-\varphi_{G})^{2}}{\mu_{D}}\biggr{]}\] \[\simeq\frac{\eta\mu_{D}}{T_{0}}\biggl{[}\frac{\Delta_{M}}{T_{0}} (T-T_{0})-e(\varphi-\varphi_{G})\biggr{]} \tag{11}\]
Equation (11) corresponds to the thermionic current density
\[j\simeq\frac{j^{m}}{T_{0}}\biggl{[}\frac{\Delta_{M}}{T_{0}}(T-T_{0})-e(\varphi -\varphi_{G})\biggr{]}. \tag{12}\]
Using Eq. (11) with Eq. (3) and taking into account the smallness of parameter \(\eta\) [i.e., neglecting the term in the right-hand side of Eq. (11)], for the source-drain current \(J_{SD}=\sigma(HV_{SD}/2L)=\sigma_{D}(\mu/\mu_{D})(HV_{SD}/2L)\) with the pertinent accuracy we obtain
\[J_{SD}\simeq\sigma_{D}\biggl{(}1+\varkappa\frac{e\varphi_{G}}{\mu_{D}}\biggr{)} \frac{H}{2L}V_{SD}. \tag{13}\]
Using Eqs. (5) and (12), we find the MG potential:
\[e\varphi_{G}=\frac{eV_{SD}}{4}-\frac{\Delta_{M}}{2L}\int_{-L}^{L}dx\frac{(T- T_{0})}{T_{0}}. \tag{14}\]
The latter equation corresponds to an increase in the source-drain current with increasing gate potential \(\varphi_{G}\) (due to an increase in the Fermi energy and, hence, the G-channel conductivity). One can see that an increase in the 2DEG effective temperature leads to the intensification of the electron transfer from the G-channel to the gate which results in its lower potential.
Equations (13) and (14) for the source-drain current components yield
\[\overline{J}_{SD}\simeq J_{0}\biggl{[}1+\frac{\varkappa}{\mu_{D}}\biggl{(}eV_ {SD}-\frac{\Delta_{M}}{2L}\int_{-L}^{L}dx\frac{(\overline{T}-T_{0})}{T_{0}} \biggr{)}\biggr{]} \tag{15}\]
and
\[\langle\delta J_{\omega}\rangle\simeq-J_{0}\frac{\varkappa\Delta_{M}}{\mu_{D} }\frac{\langle\langle\delta T_{\omega}\rangle\rangle}{T_{0}} \tag{16}\]
with \(J_{0}=\sigma_{D}V_{SD}(H/2L)\), where \(\langle\langle\delta T_{\omega}\rangle\rangle=\int_{-L}^{L}dx\langle\delta T_{\omega}\rangle/2L\) is the effective temperature variation averaged over the THz period and the GC length, and \(H\) is the GC width. The quantity \(\langle\delta J_{\omega}\rangle\) given by Eq. (16) represents the response of the GC-FET to the impinging THz radiation.
## III Electron heating and heat transport
The electron heat transport equation can be presented as
\[-h\frac{d^{2}T}{dx^{2}}+\frac{T-T_{0}}{\tau_{e}}\] \[+\frac{\Delta_{C}\Delta_{M}}{\tau_{\perp}T_{0}}\exp\biggl{(}-\frac {\Delta_{M}}{T_{0}}\biggr{)}\biggl{[}\frac{\Delta_{M}}{T_{0}}(T-T_{0})-e( \varphi-\varphi_{G})\biggr{]}\] \[\simeq\frac{\sigma}{\Sigma}\biggl{[}\biggl{(}\frac{V_{SD}}{2L} \biggr{)}^{2}+\frac{\text{Re}\sigma_{\omega}}{\sigma}\langle|\delta E_{\omega}|^ {2}\rangle\biggr{]}. \tag{17}\]
For the variation of the 2DEG averaged effective temperature \(\langle\delta T_{\omega}\rangle\), in view of Eqs. (5) and (12), Eq. (17) yields
\[-h\frac{d^{2}\langle\delta T_{\omega}\rangle}{dx^{2}}+\frac{\langle \delta T_{\omega}\rangle}{\tau_{e}}\] \[+\frac{\Delta_{C}\Delta_{M}}{\tau_{\perp}T_{0}}\exp\biggl{(}-\frac{ \Delta_{M}}{T_{0}}\biggr{)}\biggl{(}\frac{\langle\delta T_{\omega}\rangle-\langle \langle\delta T_{\omega}\rangle\rangle}{T_{0}}\biggr{)}\] \[\simeq\frac{\text{Re}\sigma_{\omega}}{\Sigma}\langle|\delta E_{ \omega}|^{2}\rangle\biggr{)}. \tag{18}\]
Here \(h\simeq v_{W}^{2}/2\nu\) is the electron thermal conductivity (per electron), \(\tau_{e}\) is the electron energy relaxation time, \(\text{Re}\sigma_{\omega}=\sigma_{D}\,\nu^{2}/(\nu^{2}+\omega^{2})\) is the real part of the 2DEG ac conductivity, and \(\delta E_{\omega}\) is the signal electric field in the G-channel created by the THz signal. The first, second,
and third terms in the left-hand side of Eq. (18) are associated with the removal of the electron heat through the contact (due to a substantial electron lateral heat conductivity along the GC [25; 26]), the transfer to the lattice (primarily due to the interaction with optical phonons in the GC and the interface optical phonons [27; 28; 29; 30; 31]) and the MG over the BL (i.e., corresponding to the Peltier cooling [32; 33]), respectively. The term on the right-hand side of Eq. (18) corresponds to the 2DEG Joule heating.
We use the following boundary conditions for Eq. (18):
\[\langle\delta T_{\omega}\rangle|_{x=\pm L}=0. \tag{19}\]
For the THz radiation asymmetric input via the antenna corresponding to the signal potential at the contacts equal to \(\pm\delta V_{\omega}/2\), accounting for the excitation of plasmonic oscillations in the GC we obtain
\[\langle|\delta E_{\omega}|^{2}\rangle=\frac{1}{2}\bigg{(}\frac{ \delta V_{\omega}}{2L}\bigg{)}^{2}\bigg{|}\frac{\gamma_{\omega}\cos(\gamma_{ \omega}x/L)}{\sin\gamma_{\omega}}\bigg{|}^{2}. \tag{20}\]
Here \(\gamma_{\omega}=\pi\sqrt{\omega(\omega+i\nu)}/\Omega\) and \(\Omega=(2\pi\,e/\hbar\,L)\sqrt{\mu\,W/\kappa}\) are the effective wavenumber and the plasmonic frequency, respectively, with \(\kappa\) and \(W\) being the dielectric constant of the BL and its thickness.
Restricting our consideration to the most interesting case of the pronounced fundamental plasmonic resonance in the G-channel (\(\omega=\Omega\gg\nu\)) and using Eq. (20), we obtain
\[\mathrm{Re}\sigma_{\Omega}(|\delta E_{\Omega}|^{2})\simeq 2 \sigma_{D}\cos^{2}\!\left(\frac{\pi\,x}{L}\right)\!\left(\frac{\delta V_{ \Omega}}{2L}\right)^{2}\!. \tag{21}\]
One needs to note that the Joule power at the plasmonic resonance given by Eq. (21) exceeds that at low frequencies (at least near \(x\simeq 0\)) by a factor \(\sim(2\Omega/\pi\nu)^{2}\).
Solving Eq. (18) accounting for boundary condition (19) and Eq. (21), for the values \(\langle\langle\delta T_{\omega}\rangle\rangle\) at the fundamental plasmonic resonance we obtain
\[\langle\langle\delta T_{\Omega}\rangle\rangle\simeq\frac{2\pi \sigma_{D}\hbar^{2}v_{W}^{2}}{\mu_{D}^{2}}\frac{\tau_{\varepsilon}\,\Theta}{(1 +\theta)}\bigg{(}\frac{\delta V_{\Omega}}{2L}\bigg{)}^{2}. \tag{22}\]
Here
\[\Theta=\bigg{[}\frac{1-\frac{\mathcal{L}}{L}\tanh\!\left(\frac{L} {\mathcal{L}}\right)}{1+\frac{\mathcal{L}}{L}\tanh\!\left(\frac{L}{\mathcal{ L}}\right)}\bigg{]} \tag{23}\]
is the factor characterizing the role of electron thermal transport, and
\[\theta=\frac{\tau_{\varepsilon}}{\tau_{\perp}}\frac{\Delta_{C} \Delta_{M}}{T_{0}^{2}}\exp\!\left(-\frac{\Delta_{M}}{T_{0}}\right)\!,\qquad \mathcal{L}=\sqrt{\frac{h\tau_{\varepsilon}}{(1+\theta)}},\]
The characteristic length \(\mathcal{L}\) is the electron heat transfer (cooling) length.
Equations (16) and (22) yield
\[-\langle\delta J_{\Omega}\rangle\simeq\frac{2\pi\sigma_{D}^{2} \hbar^{2}v_{W}^{2}}{\mu_{D}^{2}T_{0}}\bigg{(}\frac{\varkappa\Delta_{M}}{\mu_ {D}}\bigg{)}\frac{\tau_{\varepsilon}\,\Theta}{(1+\theta)}\bigg{(}\frac{HV_{SD} }{2L}\bigg{)}\bigg{(}\frac{\delta V_{\Omega}}{2L}\bigg{)}^{2}. \tag{24}\]
The sign "minus" in Eq. (24) reflects the fact that the THz irradiation leads to an increase in the electron effective temperature, reinforcement of the electron emission from the GC and, hence, to a negative charging of the MG. The latter, in turn, decreases the source-drain current in the donor-doped GC.
## IV Detector responsivity
Considering that for the half-wavelength dipole antenna with the gain \(g\) one obtains \(\delta V_{\Omega}^{2}=32P_{\Omega}/gc\), where \(P_{\Omega}\) is the THz power at the frequency \(\omega=\Omega\) collected by the detector antenna and \(c\) is the speed of light in vacuum, and accounting for that the GC channel resistance is equal to \(r_{SD}=2L/H\,\sigma_{D}\), for the detector voltage responsivity \(R_{\Omega}^{V}=|\langle\delta J_{\Omega}\rangle|\,r_{SD}/P_{\Omega}\) (at the radiation frequency corresponding to the fundamental plasmonic resonance), we obtain
\[R_{\Omega}^{V}\simeq\frac{16e^{2}}{g\,c\,\mu_{D}^{2}}\bigg{(}\frac{L_{\varepsilon}}{L}\bigg{)}^{2}\bigg{(}\frac{\varkappa\Delta_{M}}{T_{0}}\bigg{)}\,\Theta\,V_{SD}=\frac{16}{137\,g}\frac{\hbar}{\mu_{D}^{2}}\bigg{(}\frac{L_{\varepsilon}}{L}\bigg{)}^{2}\bigg{(}\frac{\varkappa\Delta_{M}}{T_{0}}\bigg{)}\,\Theta\,V_{SD}. \tag{25}\]
Here \(L_{\varepsilon}=\sqrt{\frac{v_{W}^{2}\tau_{\varepsilon}}{\nu(1+ \theta)}}\).
The quantities \(\Delta_{M}\), \(\mu_{D}\), and \(\varkappa\) are determined by the material of the MG and the molar fractions of As in the BL (due to the condition \(\Delta_{M}=\Delta_{C}-\mu_{D}\) assumed in our model).
Examples of the parameters of the GC-FET detectors based on Al/b-P/GC and Ti/b-As/GC heterostructures (see, for example, Refs. [34] and [35]) and the estimates of their resonant responsivity are listed in Table 1. We
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Structure & \(\Delta_{M}\)(meV) & \(\Delta_{C}\)(meV) & \(\mu_{D}\) (meV) & \(L\) (\(\mu\)m) & \(\Omega/2\pi\) (THz) & \(\nu\) (ps\({}^{-1}\)) & \(\theta\) & \(\mathcal{L}\) (\(\mu\)m) & \(R_{\Omega}^{V}\) (V/W) \\ \hline GC/b-P/Al & 85 & 225 & 140 & 1.0 & 1.136 & 1.0 - 2.5 & 1.02 & 1.58 - 1.00 & \((2.1-1.8)\times 10^{3}\) \\ \hline GC/b-As/Ti & 70 & 190 & 120 & 1.0 & 1.052 & 1.0 -2.5 & 1.29 & 0.93 - 0.59 & \((2.7-2.3)\times 10^{3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameters of the GC/b-As and GC/b-P FET detectors and their responsivities.
assume also that \(\kappa=4\), \(\tau_{\varepsilon}=10\) ps, \(\tau_{\perp}=10\) ps, \(W=10\) nm, \(T_{0}=25\) meV (\(\sim 300\) K), and \(V_{SD}=1.6\) V. The above parameters (with \(\nu=1\) ps\({}^{-1}\) and \(H=2L\) or with \(\nu=2.5\) ps\({}^{-1}\) and \(H=5L\)) correspond to the GC-FET detector resistances \(r_{SD}\simeq 55\)\(\Omega\) and \(r_{SD}\simeq 64\)\(\Omega\), respectively (at \(H=2L\)). One needs to note that at the above parameters the electron thermal transport factor depending on the ratio \(\mathcal{L}/L\) in Eq. (23) is rather small (about 0.061 - 0.151 at \(\nu=1\) ps\({}^{-1}\)) substantially decreasing the responsivity. The role of the electron cooling due to the thermal transport to the side contacts can be decreased by increasing \(\nu\) (this decreases the electron thermal conductivity) or choosing the longer GC length \(L\).
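As a cross-check of Eq. (25) against Table 1, the resonant responsivity can be re-evaluated numerically for the GC/b-P/Al column at \(\nu=1\) ps\({}^{-1}\). All inputs below are taken from Table 1 and the parameter list above, except the antenna gain, for which the value \(g\approx 1.64\) of an ideal half-wavelength dipole is assumed since it is not listed explicitly:

```python
import numpy as np

hbar, eV = 6.582e-16, 1.602e-19           # eV*s and J/eV

# GC/b-P/Al column of Table 1 at nu = 1 ps^-1
mu_D, Delta_M, T0 = 0.140, 0.085, 0.025   # eV
V_SD, L = 1.6, 1.0e-6                     # V, m
v_W, nu, tau_e = 1.0e6, 1.0e12, 10.0e-12  # m/s, 1/s, s
theta, varkappa = 1.02, 0.088             # kappa = 4
g = 1.64                                  # assumed half-wavelength dipole gain

L_eps = np.sqrt(v_W**2 * tau_e / (nu * (1.0 + theta)))  # L_epsilon entering Eq. (25)
calL = L_eps / np.sqrt(2.0)                             # script-L = sqrt(h tau_e/(1+theta)), h = v_W^2/(2 nu)
t = (calL / L) * np.tanh(L / calL)
Theta = (1.0 - t) / (1.0 + t)                           # Eq. (23)

R = (16.0 / (137.0 * g)) * (hbar / mu_D**2) * (L_eps / L)**2 \
    * (varkappa * Delta_M / T0) * Theta * V_SD / eV     # s/eV -> s/J, i.e. V/W
print(f"script-L = {calL*1e6:.2f} um, Theta = {Theta:.3f}, R = {R:.1e} V/W")
# ~1.57 um, ~0.061, ~2e3 V/W -- consistent with the first row of Table 1
```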
Figure 3 shows the responsivity of the GC/b-AsP and GC/b-As FETs with a floating MG at \(\omega=\Omega\) (i.e., at the fundamental plasmonic resonance) calculated for the main parameters corresponding to Table 1 but for different electron scattering frequencies \(\nu\). For the definiteness, we set \(V_{SD}=1.6\) V. The inset in Fig. 3 shows how the role of the electron thermal transport along the GC changes with varying scattering frequencies. The pertinent dependence is associated with the \(h\) vs \(\nu\) dependence. As follows from Fig. 3, an increase in \(\Theta\) (i.e., a weakening of the electron heat transfer to the source and drain contacts when \(\nu\) becomes larger) leads to slowing of the \(R_{\Omega}^{V}\) vs \(\nu\) dependence. Thus, a relatively weak dependence of \(R_{\Omega}^{V}\) on \(\nu\) is interpreted by the decrease in the electron system Joule heating in the GC by the signal electric field (because of \(\sigma_{D}\propto\nu^{-1}\)) accompanied with a decrease in the power transferred to the source and drain contacts.
Figure 4 shows the spectral dependences of the responsivity, \(R_{\omega}^{V}\), of the GC/b-As and GC/b-P detectors calculated for different \(\nu\) and the same parameters as for Fig. 3. We limited our consideration to the signal frequencies around the fundamental plasmonic resonance, where the obtained dependences exhibit pronounced maxima provided that \(\Omega\gg\nu\). As seen from Fig. 4, an increase in \(\nu\) gives rise to a smearing of the resonant peak. In a wider frequency range, the responsivity of the detectors under consideration is a profoundly oscillatory function of the radiation signal frequency \(\omega\) with a set of the maxima at the plasmonic resonances \(\omega=n\Omega\) (\(n\) is the resonance index). These oscillations are described by the relation, which follows from the above equations:
\[R_{\omega}^{V}\propto\text{Re}\sigma_{\omega}\bigg{|}\frac{\gamma_{\omega} \cos(\gamma_{\omega}x/L)}{\sin\gamma_{\omega}}\bigg{|}^{2}. \tag{26}\]
As follows from Eq. (26), the dependences \(R_{\omega}^{V}\) vs \(\omega/2\pi\) exhibit the alternation of sharp maxima and relatively deep minima. At the intermediate frequencies, the responsivity at the minima is smaller than the resonant responsivity by a factor of \((\pi\nu/2\Omega)^{2}\ll 1\). At elevated collision frequencies \(\nu\), the spectral characteristics of \(R_{\omega}^{V}\) become smoother. However, up to \((\pi\nu/2\Omega)^{2}\sim 1\) (this corresponds to \(\nu\sim 4\) ps\({}^{-1}\)), the responsivity at the plasmonic resonances and between the resonances can still be relatively high. Hence, the non-resonant response can also be useful.
Figure 3: Resonant responsivity \(R_{\Omega}\) of the GC/b-As/Ti (blue line) and GC/b-P/Al (red line) FET detectors and electron thermal transport factor \(\Theta\) (inset) as functions of electron scattering frequency \(\nu\).
Figure 4: Responsivity \(R_{\omega}\) of the (a) GC/b-As/Ti and (b) GC/b-P/Al FET detectors vs signal frequency \(\omega/2\pi\) for different values of electron scattering frequency \(\nu\).
## V Comments
Above we assumed that \(J_{SD}\propto\sigma\propto\mu\). Theoretical studies show that the doped GC (\(\mu_{D}\gg T_{0}\)) conductivity \(\sigma\) can exhibit different dependences on \(\mu\) [36; 37; 38]. In particular, it can vary from \(\sigma\) virtually independent of \(\mu\) if the short-range scattering of electrons is dominant to \(\sigma\propto\mu^{2}\) in the case of the long-range scattering (for example, on charged clusters) [38]. In the first situation \(\nu\propto p\), where \(p\) is the electron momentum. In the second case, \(\nu\propto p^{-1}\). In this regard, our model corresponds to an intermediate \(\sigma\) vs \(\mu\) relation (see, for example, Ref. [37]), in which the momentum dependence of \(\nu\) is disregarded. This provides \(\sigma\propto\sqrt{\Sigma}\propto\sqrt{V_{G}}\) (\(V_{G}\) is the voltage swing between the GC and the gate). The latter qualitatively agrees with the experimental data [1]. In such a case, setting \(\nu=\) constant (see, for example, Ref. [38]), we obtain the relation
\[\sigma=\frac{e^{2}T}{\pi\hbar^{2}\nu}\int_{0}^{\infty}d\xi\,\xi\frac{d}{d\xi}\biggl{[}-\frac{1}{\exp(\xi-\mu/T)+1}\biggr{]}\\ =\frac{e^{2}}{\pi\hbar^{2}\nu}\bigl{[}\mu+T\ln(1+e^{-\mu/T})\bigr{]}\simeq\frac{e^{2}\mu}{\pi\hbar^{2}\nu}, \tag{27}\]
which was used above.
Considering that the resonant voltage responsivity, \(R_{\omega}^{V,GG}\), of GC/b-AsP FETs with the biased gate can be estimated as [16]
\[R_{\omega}^{V,GG}\sim\frac{16\pi^{2}}{137}\frac{\hbar}{eT_{0}}\frac{\tau_{ \epsilon}}{\tau_{\perp}}\frac{\Delta_{M}}{T_{0}}\exp\biggl{(}-\frac{\Delta_{ M}}{T_{0}}\biggr{)}, \tag{28}\]
for the ratio of the voltage responsivities we obtain
\[\frac{R_{\Omega}^{V}}{R_{\Omega}^{V,GG}}\simeq\frac{\varkappa}{2}\biggl{(} \frac{L_{\varepsilon}}{L}\biggr{)}^{2}\biggl{(}\frac{\tau_{\epsilon}}{\tau_{ \perp}}\biggr{)}\biggl{(}\frac{T_{0}}{\mu_{D}}\biggr{)}\exp\biggl{(}\frac{ \Delta_{M}}{T_{0}}\biggr{)}\biggl{(}\frac{eV_{SD}}{\mu_{D}}\biggr{)}. \tag{29}\]
For the typical parameters used above and \(V_{SD}\sim(10-25)\) mV, the latter ratio is about unity, although it increases with further (linearly) increase in \(V_{SD}\). The latter might be limited by the lattice heat removal via the substrate and the contacts. Setting \(\nu=(1-2)\) ps\({}^{-1}\), for the thermal power we obtain \(P_{Th}\sim(0.3-0.7)\) mW at \(V_{SD}=0.2\) V and \(P_{Th}\sim(41-47)\) mW at \(V_{SD}=1.6\) V.
As follows from the obtained results, both the current and voltage responsivities are proportional to the source-drain bias voltage \(V_{SD}\). The dark current is also proportional to \(V_{SD}\). This implies that the noise-equivalent power (NEP) and the dark current-limited detectivity of the detectors under consideration vary with increasing source-drain voltage as NEP\(\propto 1/\sqrt{V_{SD}}\) and \(D_{\Omega}^{*}\propto\sqrt{V_{SD}}\), respectively.
For the GC/b-As and GC/b-P detectors with the above parameters at \(V_{SD}=1.6\) V, we obtain NEP\(\simeq 2.2\) pW/Hz\({}^{1/2}\) and NEP\(\simeq 2.5\) pW/Hz\({}^{1/2}\) (for the GC/b-As and GC/b-P FETs, respectively), which appears to be promising (compare with other THz bolometers [9]). If \(\sqrt{2LH}=2\)\(\mu\)m, these values correspond to \(D_{\Omega}^{*}\simeq 9\times 10^{7}\) cm Hz\({}^{1/2}\)/W and \(\simeq 8\times 10^{7}\) cm Hz\({}^{1/2}\)/W, respectively, which is comparable to or exceeds the detectivity of other uncooled THz bolometers (see, for example, Ref. [39]). However, one needs to note that NEP increases and \(D_{\Omega}^{*}\) decreases with increasing \(\nu\).
The operation speed of the detectors under consideration is determined by the characteristic time of the electron cooling, \(t_{\theta}\lesssim\tau_{\varepsilon}/(1+\theta)\), associated with the energy relaxation on phonons and the heat transfer over the BL, and by the gate recharging time. The latter is estimated as \(t_{rc}\sim\tau_{\perp}(2T_{0}\mu_{0}/\mu_{D}^{2})\exp(\Delta_{M}/T_{0})\). The comparison of these characteristic times yields
\[\frac{t_{\theta}}{t_{rc}}\simeq\frac{\tau_{\varepsilon}}{\tau_{\perp}(1+ \theta)}\biggl{(}\frac{\mu_{D}^{2}}{2T_{0}\mu_{0}}\biggr{)}\exp\biggl{(}-\frac {\Delta_{M}}{T_{0}}\biggr{)}. \tag{30}\]
For the device structural parameters assumed above and \(T_{0}=25\) meV, we obtain \(t_{\theta}/t_{rc}\sim 0.5\). This implies that the GC/b-AsP FET bolometer response time is about \(t_{r}\sim t_{\theta}+t_{rc}\sim 20\) ps.
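This estimate can be retraced by evaluating Eq. (30) and the two characteristic times directly with the quantities quoted above (\(\theta\) from Table 1, and the \(\mu_{0}\) range given earlier in Sec. II); a minimal numerical check:

```python
import numpy as np

T0, Delta_M, mu_D = 25.0, 85.0, 140.0   # meV
tau_e, tau_perp = 10.0, 10.0            # ps
theta = 1.02

t_theta = tau_e / (1.0 + theta)         # electron cooling time, ps
for mu0 in (13.6, 20.4):                # quantum-capacitance energy mu_0, meV
    t_rc = tau_perp * (2.0 * T0 * mu0 / mu_D**2) * np.exp(Delta_M / T0)  # gate recharging time, ps
    print(f"mu_0 = {mu0:4.1f} meV: t_theta/t_rc = {t_theta/t_rc:.2f}, t_r ~ {t_theta + t_rc:.0f} ps")
# ratios of ~0.3-0.5 and total response times of ~15-21 ps, i.e. ~20 ps as stated
```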
The values of the collision frequencies used in the above calculations can be expressed via the electron mobility \(M\). Using the relation \(M=ev_{W}^{2}/\mu_{D}\nu\), where \(m=\mu/v_{W}^{2}\) is the so-called fictitious electron mass in GCs, for \(\mu_{D}=140\) meV and \(\nu=(1-4)\) ps\({}^{-1}\), we obtain the range \(M\simeq(1.78-7.14)\times 10^{4}\) cm\({}^{2}\)/V\(\cdot\)s (compare, for example, with Refs. [40; 41]). According to the estimates [41], the room temperature mobility in the GCs on h-BN at the electron density corresponding to the above Fermi energy can be about \(M\simeq 10^{5}\) cm\({}^{2}\)/V\(\cdot\)s. The latter corresponds to \(\nu\simeq 0.714\) ps\({}^{-1}\). The quality of the interface between the GC (placed atop h-BN) and the b-P BL can limit the values of \(M\) and \(\nu\). The pertinent room temperature electron mobility obtained experimentally several years ago is equal to \(M\simeq(7-8)\times 10^{3}\) cm\({}^{2}\)/Vs [42] (\(M\) exceeds \(10^{4}\) cm\({}^{2}\)/Vs at \(T_{0}\leq 200\) K). This corresponds to not too small \(\nu\). One can expect that the contemporary technology is able to provide the GC/b-P interface with sufficiently small \(\nu\), at which the parameter \((\pi\nu/2\Omega)^{2}<1\), so that the plasmonic resonances are pronounced. A substantial reinforcement of the plasmonic resonances in the GC-FET detectors can be realized in the case of the composite gate BL, which includes a relatively narrow b-P BL (and the MG) and the h-BN (or WSe\({}_{2}\)[43]) sections between the b-P section and the source and drain. In such a GC-FET detector, the sharpness of the plasmonic resonances might be determined by the GC main part (encapsulated by h-BN and providing small electron collision frequency), while the thermionic current flows via the narrow b-P window.
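The mobility values quoted in this paragraph follow directly from the relation \(M=ev_{W}^{2}/\mu_{D}\nu\); a one-line numerical check (units are handled by expressing \(\mu_{D}/e\) in volts):

```python
# Relation between mobility and collision frequency used above: M = e v_W^2/(mu_D nu)
v_W, mu_D = 1.0e6, 0.14            # m/s, and mu_D/e expressed in volts
for nu in (1.0e12, 4.0e12):        # 1/s
    M = v_W**2 / (mu_D * nu)       # m^2/(V s); e cancels when mu_D is given in volts
    print(f"nu = {nu:.0e} 1/s -> M = {M*1e4:.2e} cm^2/(V s)")
# ~7.1e4 and ~1.8e4 cm^2/(V s), matching the quoted range;
# conversely, M = 1e5 cm^2/(V s) corresponds to nu ~ 0.71 ps^-1
```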
Similar THz detection properties can be expected in the GC/b-AsP FET devices with the floating isolated doped graphene gate (GG). The main distinction between the detectors with the MG and the detectors with the GG is the different plasmonic response in the latter
because of the GG influence on the plasmonic oscillations in the double-graphene structures [44; 45; 46]. Another option is to use the MG consisting of an array of metallic islands (MIs) or quantum dots (QDs). In such a case, each MI/QD has its own floating potential determined by the electron exchange between the MI/QD and the GC (compare with the devices analyzed in Ref. [9]). Due to this, the potential distribution along the GC and the effect of the floating MG on the source-drain current can be markedly different from that considered above. However, the consideration of the detectors in question requires a proper modification of the device model and, therefore, a separate treatment.
## VI Conclusions
We estimated the characteristics of the proposed GC/b-AsP FETs with the floating MG operating in the THz frequency range at room temperature. We showed that these detectors can exhibit high values of responsivity at the plasmonic resonances (\(\gtrsim 10^{3}\) V/W) and rather short response times (\(\sim 20\) ps).
## Author contributions
All authors contributed equally to this work.
###### Acknowledgements.
The Japan Society for Promotion of Science (KAKENHI Grants # 21H04546 and # 20K20349), Japan; RIEC Nation-Wide Collaborative Research Project # R04/A10; the US Office of Scientific Research, Contract N00001435 (Project Monitor Dr. Ken Goretta).
## Conflict of interest
The authors declare no conflict of interest
**DATA availability**
All data that support the findings of this study are available within the article.
|
2308.03877 | CECM: A continuous empirical cubature method with application to the
dimensional hyperreduction of parameterized finite element models | We present the Continuous Empirical Cubature Method (CECM), a novel algorithm
for empirically devising efficient integration rules. The CECM aims to improve
existing cubature methods by producing rules that are close to the optimal,
featuring far less points than the number of functions to integrate.
The CECM consists on a two-stage strategy. First, a point selection strategy
is applied for obtaining an initial approximation to the cubature rule,
featuring as many points as functions to integrate. The second stage consists
in a sparsification strategy in which, alongside the indexes and corresponding
weights, the spatial coordinates of the points are also considered as design
variables. The positions of the initially selected points are changed to render
their associated weights to zero, and in this way, the minimum number of points
is achieved.
Although originally conceived within the framework of hyper-reduced order
models (HROMs), we present the method's formulation in terms of generic
vector-valued functions, thereby accentuating its versatility across various
problem domains. To demonstrate the extensive applicability of the method, we
conduct numerical validations using univariate and multivariate Lagrange
polynomials. In these cases, we show the method's capacity to retrieve the
optimal Gaussian rule. We also assess the method for an arbitrary
exponential-sinusoidal function in a 3D domain, and finally consider an example
of the application of the method to the hyperreduction of a multiscale finite
element model, showcasing notable computational performance gains.
A secondary contribution of the current paper is the Sequential Randomized
SVD (SRSVD) approach for computing the Singular Value Decomposition (SVD) in a
column-partitioned format. The SRSVD is particularly advantageous when matrix
sizes approach memory limitations. | J. A. Hernandez, J. R. Bravo, S. Ares de Parga | 2023-08-07T19:14:23Z | http://arxiv.org/abs/2308.03877v1 | CECM: A continuous empirical cubature method with application to the dimensional hyperreduction of parameterized finite element models
###### Abstract
We present the Continuous Empirical Cubature Method (CECM), a novel algorithm for empirically devising efficient integration rules. The CECM aims to improve existing cubature methods by producing rules that are close to the optimal, featuring far fewer points than the number of functions to integrate.
The CECM consists of a two-stage strategy. First, a point selection strategy is applied for obtaining an initial approximation to the cubature rule, featuring as many points as functions to integrate. The second stage consists in a sparsification strategy in which, alongside the indexes and corresponding weights, the spatial coordinates of the points are also considered as design variables. The positions of the initially selected points are changed so as to drive their associated weights to zero, and in this way, the minimum number of points is achieved.
Although originally conceived within the framework of hyper-reduced order models (HROMs), we present the method's formulation in terms of generic vector-valued functions, thereby accentuating its versatility across various problem domains. To demonstrate the extensive applicability of the method, we conduct numerical validations using univariate and multivariate Lagrange polynomials. In these cases, we show the method's capacity to retrieve the optimal Gaussian rule. We also assess the method for an arbitrary exponential-sinusoidal function in a 3D domain, and finally consider an example of the application of the method to the hyperreduction of a multiscale finite element model, showcasing notable computational performance gains.
A secondary contribution of the current paper is the Sequential Randomized SVD (SRSVD) approach for computing the Singular Value Decomposition (SVD) in a column-partitioned format. The SRSVD is particularly advantageous when matrix sizes approach memory limitations.
keywords: Empirical Cubature Method, Hyperreduction, reduced-order modeling, Singular Value Decomposition, quadrature
## 1 Introduction
The present paper is concerned with a classical problem of numerical analysis: the approximation of integrals over 1D, 2D and 3D domains of parameterized functions as a weighted sum of the values of such functions at a set of \(m\) points \(\{\mathbf{x}_{1},\mathbf{x}_{2}\ldots\mathbf{x}_{m}\}\):
\[\int_{\Omega}\,f(\mathbf{x};\mu)\ d\Omega\approx\sum_{g=1}^{m}f(\mathbf{x}_{g};\mu)w_{ g}, \tag{1}\]
(with \(m\) as small as possible). This problem is generally known as either quadrature (for 1D domains) or _cubature_ (for higher dimensions), and has a long pedigree stretching back as far as C.F. Gauss, who devised in 1814 the eponymous quadrature rule for univariate polynomials.
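For instance, as a small worked illustration of Eq. (1), the classical three-point Gauss rule on \(\Omega=[-1,1]\), with nodes \(x_{1,3}=\mp\sqrt{3/5}\), \(x_{2}=0\) and weights \(w_{1,3}=5/9\), \(w_{2}=8/9\), integrates any polynomial of degree up to 5 exactly; e.g., for \(f(x)=x^{4}\),

\[\sum_{g=1}^{3}f(x_{g})\,w_{g}=2\cdot\frac{5}{9}\left(\frac{3}{5}\right)^{2}=\frac{2}{5}=\int_{-1}^{1}x^{4}\,dx.\]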
### Cubature problem in hyperreduced-order models
The recent development of the so-called _hyperreduced-order models_ (HROMs) for parameterized finite element (FE) analyses [10; 17] has sparked a resurgence of interest in this classical problem. Indeed, a crucial step in the construction of such HROMs is the solution of the cubature problem associated with the evaluation of the nonlinear term(s) in the
pertinent governing equations. For instance, in a Galerkin-based structural HROM, the nonlinear term is typically the projection of the nodal FE internal forces \(\mathbf{F}^{h}\in\mathbb{R}^{N_{dof}}\) (here \(N_{dof}\) denotes the number of degrees of freedom of the FE model) onto the span of the displacement modes, i.e.: \(\mathbf{F}=\mathbf{\phi}^{T}\mathbf{F}^{h}\), \(\mathbf{\phi}\in\mathbb{R}^{N_{dof}\times n}\) being the matrix of displacement modes. The basic premise in these HROMs is that the number of modes is much smaller than the number of FE degrees of freedom (\(n<<N_{dof}\)). This in turn implies that the internal forces per unit volume will also reside in a space of relatively small dimensions (independent of the size of the underlying FE mesh), and therefore, its integral over the spatial domain will be, in principle, amenable to approximation by an efficient cubature rule, featuring far fewer points than the original FE-based rule. The challenge lies in determining the minimum number of cubature points necessary for achieving a prescribed accuracy, as well as their location and associated _positive_ weights. The requisite of positive weights arises from the fact that, in a Galerkin FE framework, the Jacobian matrix of the discrete system of equations is a weighted sum of the contributions at each FE Gauss point. Thus, if the Jacobian matrices at point level are positive definite, the global matrix is only guaranteed to inherit this desirable attribute if the cubature weights are positive [17].
Before delving into the description of the diverse approaches proposed to date to deal with this cubature problem in the context of HROMs, it proves convenient to formally formulate the problem in terms of a generic parameterized vector-valued function \(\mathbf{a}:\Omega\times\mathcal{D}\rightarrow\mathbb{R}^{n}\). Let \(\Omega=\cup_{e=1}^{N_{el}}\Omega^{e}\) be a finite element partition of the spatial domain \(\Omega\subset\mathbb{R}^{d}\) (\(d=1,2\) or \(3\)). For simplicity of exposition, assume that all elements are isoparametric and of the same order of interpolation, possessing \(r\) Gauss points each. Suppose we are given the values of the integrand functions for \(P\) instantiations of the input parameters (\(\{\mathbf{\mu}_{i}\}_{i=1}^{P}=\mathcal{D}^{s}\subset\mathcal{D}\)) at all the Gauss points of the discretization. The integral of the function over \(\Omega\) for each \(\mathbf{\mu}_{j}\) (\(j=1,2\ldots P\)) can be calculated by the corresponding element Gauss rule as
\[b_{k}=\sum_{e=1}^{N_{el}}\int_{\Omega^{e}}a_{i}(\mathbf{x},\mathbf{\mu}_{j})\ d\Omega= \sum_{e=1}^{N_{el}}\sum_{g=1}^{r}a_{i}(\bar{\mathbf{x}}_{g}^{e},\mathbf{\mu}_{j})W_{g} ^{e},\ \ \ \ k=(j-1)n+i;\ \ \ j=1,2\ldots P;\ \ i=1,2\ldots n. \tag{2}\]
Here, \(\bar{\mathbf{x}}_{g}^{e}\in\Omega^{e}\) denotes the position of the \(g\)-th Gauss point of element \(\Omega^{e}\), whereas \(W_{g}^{e}>0\) is the product of the Gauss weight and the Jacobian of the isoparametric transformation at such a point. Each \(b_{k}\) (\(k=1,2\ldots Pn\)) is therefore considered as the "exact" integral, that is, the reference value we wish to approximate. The above expression can be written in a compact matrix form as
\[\mathbf{b}_{FE}=\mathbf{A}_{FE}^{T}\mathbf{W}_{FE}, \tag{3}\]
where \(\mathbf{b}_{FE}\in\mathbb{R}^{nP}\) is the vector of "exact" integrals defined in Eq. (2), \(\mathbf{A}_{FE}\) is the matrix obtained from evaluating the integrand functions at all the FE Gauss points, \(\mathbf{X}_{FE}=\{\{\bar{\mathbf{x}}_{g}^{e}\}_{g=1}^{r}\}_{e=1}^{N_{el}}\), while \(\mathbf{W}_{FE}\) designates the vector of FE weights, formed by gathering all the Gauss weights in a single column vector. Each column of \(\mathbf{A}_{FE}\) is the discrete representation of a scalar-valued integrand function, and thus the total number of columns is equal to the number of sampling parameters \(P\) times the number of integrand functions per parameter, \(n\). The number of rows of \(\mathbf{A}_{FE}\), on the other hand, is equal to the total number of integration points (\(M=N_{el}\cdot r\)). In terms of element contributions, matrix \(\mathbf{A}_{FE}\) is expressible as
\[\mathbf{A}_{FE}=\begin{bmatrix}\mathbf{A}_{FE}^{(1)}\\ \mathbf{A}_{FE}^{(2)}\\ \vdots\\ \mathbf{A}_{FE}^{(N_{el})}\end{bmatrix}_{N_{el}\cdot r\times nP}, \tag{4}\]

where each block \(\mathbf{A}_{FE}^{(e)}\in\mathbb{R}^{r\times nP}\) gathers the rows associated to the \(r\) Gauss points of element \(\Omega^{e}\). Likewise, the vector of FE weights appearing in Eq.(3) is formed as

\[\mathbf{W}_{FE}:=[W_{1}^{1},W_{2}^{1}\ldots W_{r}^{1},W_{1}^{2}\ldots W_{r}^{N_{el}}]^{T}. \tag{5}\]

With these definitions, the cubature problem addressed here consists in finding a set of points \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2}\ldots\mathbf{x}_{m}\}\subset\Omega\), with \(m\) as small as possible, and associated positive weights \(\mathbf{\omega}\) such that

\[\|\mathbf{A}^{T}(\mathbf{X})\mathbf{\omega}-\mathbf{b}_{FE}\|_{2}\leq\epsilon_{b}\|\mathbf{b}_{FE}\|_{2}, \tag{6}\]

\(\epsilon_{b}\geq 0\) being a user-prescribed integration error tolerance.
Here, \(\|\bullet\|_{2}\) is the standard Euclidean norm, whereas \(\mathbf{A}(\mathbf{X})\) and \(\mathbf{\omega}\) denote the matrix of the integrand evaluated at the set of points \(\mathbf{X}\) and their associated weights, respectively:
\[\mathbf{A}(\mathbf{X})=\begin{bmatrix}\mathbf{A}(\mathbf{x}_{1})\\ \mathbf{A}(\mathbf{x}_{2})\\ \vdots\\ \mathbf{A}(\mathbf{x}_{m})\end{bmatrix}:=\begin{bmatrix}a_{1}(\mathbf{x}_{1},\mathbf{\mu}_{1} )&a_{2}(\mathbf{x}_{1},\mathbf{\mu}_{1})&\cdots&a_{n}(\mathbf{x}_{1},\mathbf{\mu}_{1})&\cdots&a _{n}(\mathbf{x}_{1},\mathbf{\mu}_{P})\\ a_{1}(\mathbf{x}_{2},\mathbf{\mu}_{1})&a_{2}(\mathbf{x}_{2},\mathbf{\mu}_{1})&\cdots&a_{n}(\mathbf{ x}_{2},\mathbf{\mu}_{1})&\cdots&a_{n}(\mathbf{x}_{2},\mathbf{\mu}_{P})\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ a_{1}(\mathbf{x}_{m},\mathbf{\mu}_{1})&a_{2}(\mathbf{x}_{m},\mathbf{\mu}_{1})&\cdots&a_{n}(\bm {x}_{m},\mathbf{\mu}_{1})&\cdots&a_{n}(\mathbf{x}_{m},\mathbf{\mu}_{P})\end{bmatrix}_{m \times nP},\qquad\mathbf{\omega}=\begin{bmatrix}\omega_{1}\\ \omega_{2}\\ \vdots\\ \omega_{m}\end{bmatrix}_{m\times 1}. \tag{7}\]
**Remark 1.1**.: _A remark concerning notation is in order here. In Eq.(7), \(\mathbf{A}(\mathbf{x})\) (\(\mathbf{x}\in\Omega\)) represents a vector-valued function that returns the \(n\) entries of the integrand function \(\mathbf{a}(\mathbf{x})\) for the \(P\) samples of the input parameters, in the form of row matrix (i.e., \(\mathbf{A}(\mathbf{x})\in\mathbb{R}^{1\times nP}\)). On the other hand, when the argument of \(\mathbf{A}\) is not a single point, but a collection of \(m\) points \(\mathbf{X}:=\{\mathbf{x}_{g}\}_{g=1}^{m}\), then \(\mathbf{A}(\mathbf{X})\) represents a matrix with as many rows as points in the set, i.e. \(\mathbf{A}(\mathbf{X})\in\mathbb{R}^{m\times nP}\). According to this notational convention, the matrix defined in Eq.(4) can be compactly written as \(\mathbf{A}_{FE}=\mathbf{A}(\mathbf{X}_{FE})\)._
### State-of-the-art on cubature rules for HROMs
The first attempts to solve the above described cubature problem in the context of reduced-order modeling were carried out by the computer graphics community. The work by An et al. [1], published in 2010 and dealing with the evaluation of the internal forces in geometrically nonlinear models, may be regarded as the germinal paper in this respect. An and co-workers [1] addressed the cubature problem (6) as a _best subset selection problem_ (i.e., the desired set of points is considered a subset of the entire set of Gauss points, \(\mathbf{X}\subset\mathbf{X}_{FE}\)). They proposed a greedy strategy that incrementally constructs the set of points by minimizing the norm of the residual of the integration at each iteration, while enforcing the positiveness of the weights. Subsequent papers in the computer graphics community (see Ref. [20] and references therein) revolved around the same idea, and focused fundamentally on improving the efficiency of the scheme originally proposed by An et al. [1]--which turned out to be ostensibly inefficient, for it solves a nonnegative-least squares problem, using the standard Lawson-Hanson algorithm [21], each time a new point enters the set.
Interesting re-interpretations of the cubature problem came with the works of Von Tycowicz et al. [33] and Pan et al. [29] --still within computer graphics circles. Both works recognized the analogy between the discrete cubature problem and the quest for _sparsest solution of underdetermined systems of equations_, a problem which is common to many disciplines such as signal processing, inverse problems, and genomic data analysis [9]. Indeed, if we regard the vector of reduced weights \(\mathbf{\omega}\) as a sparse vector of the same length as \(\mathbf{W}_{FE}\), then the _best subset selection_ problem can be posed as that of minimizing the nonzero entries of \(\mathbf{\omega}\):
\[\min_{\mathbf{\omega}\geq\mathbf{0}}\|\mathbf{\omega}\|_{0},\ \ \text{subject to}\ \|\mathbf{A}_{FE}^{T}\mathbf{\omega}-\mathbf{b}_{FE}\|_{2}\leq\epsilon_{b}\|\mathbf{b}_{FE}\|_ {2} \tag{8}\]
where \(\|\cdot\|_{0}\) stands for the \(\ell_{0}\) pseudo-norm --the number of nonzero entries of the vector. It is well-known [3] that this problem is computationally intractable (NP hard), and therefore, recourse to either suboptimal greedy heuristics or convexification is to be made. Von Tycowicz et al. [33] adapted the algorithm proposed originally in Ref. [2] for compressed sensing applications (called _normalized iterative hard thresholding_, abbreviated NIHT) by incorporating the positivity constraints, reporting significant improvements in performance with respect to the original NNLS-based algorithm of An et al. [1]. The work by Pan et al. [29], on the other hand, advocated an alternative approach --also borrowed from the compressed sensing literature, see Ref. [36] -- based on the _convexification_ of problem (8). Such a convexification consists in replacing the \(\ell_{0}\) pseudo-norm by the \(\ell_{1}\) norm --an idea that, in turn, goes back to the seminal paper by Chen et al. [7]. In doing so, the problem becomes tractable, and can be solved by standard Linear Programming (LP) techniques.
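To make the \(\ell_{1}\) convexification concrete, the following minimal Python/SciPy sketch (merely an illustration of the idea, not the implementation of Refs. [29; 36]) solves the resulting linear program, enforcing exact integration as an equality constraint (the zero-tolerance variant of problem 8):

```python
import numpy as np
from scipy.optimize import linprog

def l1_cubature(A_fe, W_fe):
    """l1-relaxed weight selection: since w >= 0, ||w||_1 = sum(w), so the
    relaxation of problem (8) (with the tolerance set to zero) becomes the LP
    min 1^T w  s.t.  A_fe^T w = b_fe,  w >= 0."""
    b_fe = A_fe.T @ W_fe                      # "exact" integrals, Eq. (3)
    c = np.ones(A_fe.shape[0])
    res = linprog(c, A_eq=A_fe.T, b_eq=b_fe, bounds=(0, None), method="highs")
    return res.x                              # sparse vector of nonnegative weights
```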
Cubature schemes did not enter the computational engineering scene until the appearance in 2014 of the _Energy-Conserving Mesh Sampling and Weighting_ (ECSW) _scheme_ proposed by C. Farhat and co-workers [10]. The ECSW is, in essence, a nonnegative least squares method (NNLS), very much aligned with the original proposal by An et al. [1], although much more algorithmically efficient. Indeed, Farhat and co-workers realized that the NNLS itself produces sparse approximations, and therefore it suffices to introduce a control-error parameter inside the standard NNLS algorithm -- rather than invoking the NNLS at each greedy iteration, as proposed originally in An's paper [1]. The efficiency of the ECSW was tested against other sparsity recovery algorithms by Farhat's team in Ref. [6], arriving at the conclusion that, if equipped with an updatable QR decomposition for solving the unrestricted least-squares problem at each iteration, the ECSW outperformed existing implementations based on convexification of the original problem. It should be pointed out that, although the ECSW is a mesh sampling procedure, and therefore, the entities selected by the ECSW are finite elements rather than single Gauss points, the formulation of the problem is rather similar to the one described in the foregoing: the only differences are that, firstly, each element contribution \(\mathbf{A}_{FE}^{(e)}\) in Eq.(4) collapses into a single row
obtained as the weighted sum of the Gauss points rows; and, secondly, the vector of FE weights \(\mathbf{W}_{FE}\) is replaced by an all-ones vector.
The Empirical Cubature Method, hereafter referred to as Discrete Empirical Cubature Method (DECM), introduced by the first author in Ref. [17] for parametrized finite element structural problems, also addresses the problem via a greedy algorithm, in the spirit of An's approach [1], but exploits the fact that deriving a cubature rule for integrating the set of functions contained column-wise in matrix \(\mathbf{A}_{FE}\) is equivalent to deriving a cubature rule for a set of _orthogonal bases_ for such functions. Ref. [17] demonstrates that this brings two salient advantages in the points selection process. Firstly, the algorithm invariably converges to zero integration error when the number of selected points is equal to the number of orthogonal basis functions; and secondly, the algorithm need not enforce the positiveness of the weights at each iteration. Furthermore, Ref. [17] recognizes that the cubature problem is ill-posed when \(\mathbf{b}_{FE}\approx\mathbf{0}\) --this occurs, for instance, in self-equilibrated structural problems, such as computational homogenization [15; 28]--and shows that this can be overcome by enforcing the sum of the reduced weights to be equal to the volume of the domain. In Ref. [16], the first author proposed an improved version of the original DECM, in which the local least-squares problems at each iteration are solved by rank-one updates.
Another approach also introduced recently in the computational engineering community is the _Empirical Quadrature Method_ (EQM), developed by A. Patera and co-workers [30; 38; 37]. It should be noted that the name similarity with the above described Empirical Cubature Method is only coincidental, for the EQM is not based on the nonnegative least squares method, like the ECM, but rather draws on the previously mentioned \(\ell_{1}\) convexification of problem 8. Thus, in the EQM, the integration rule is determined by linear programming techniques, as in the method advocated in the work by Pan et al. [29] for computer graphics applications.
### Efficiency of best subset selection algorithms
The _best subset selection_ algorithms described in the foregoing vary in the way the corresponding optimization problem is formulated, and also in computational performance (depending on the nature and size of the problem under consideration), yet all of them have something in common: none of them is able to provide the optimal solution, not even when the optimal integration points are contained in the set of FE Gauss points. We have corroborated this claim by examining the number of points provided by all these methods when the integrand is a 1D polynomial in the interval \(\Omega=[-1,1]\). In Figure 1, we show, for the case of polynomials of order \(P=5\), the location of the points and the associated weights provided by:1 1) the nonnegative least-squares method (NNLS); 2) the linear-programming based method (LP); 3) the Discrete Empirical Cubature Method (DECM); 4) and the normalized iterative hard thresholding (NIHT). We also show in each Figure the 3-point optimal Gauss rule, which in this case is \(\omega_{1}^{*}=\omega_{3}^{*}=5/9\), \(\omega_{2}^{*}=8/9\), and \(x_{1}^{*}=-x_{3}^{*}=\sqrt{3/5}\), \(x_{2}^{*}=0\). The employed spatial discretization features \(N_{el}=1000\) elements, with one Gauss point per element (located at the midpoint), and it was arranged in such a way that \(3\) of the corresponding element midpoints coincide with the optimal Gauss points. It can be seen that, as asserted, none of the four schemes is able to arrive at the optimal quadrature rule. Rather, the four methods provide quadrature rules with \(m=P+1=6\) points, that is, with as many points as functions to be integrated; in the related literature, these rules are known as _interpolatory quadrature rules_ [12]. Different experiments with different initial discretizations and/or polynomial orders led invariably to the same conclusion (i.e., all of them produce interpolatory rules).
Footnote 1: The NNLS and LP analyses can be carried out by calling standard libraries (here we have used the Matlab functions _lsqnonneg_ and _linprog_, respectively), the ECM algorithm is given in Ref. [16], whereas for the NIHT we have used the codes given in Ref. [19]
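A minimal Python/SciPy analogue of this 1D experiment (using, for simplicity, a plain uniform arrangement of the element midpoints rather than the special arrangement mentioned above) is sketched below; it typically returns an interpolatory rule with \(m=P+1=6\) positive weights, in agreement with the NNLS result of Figure 1.c:

```python
import numpy as np
from scipy.optimize import nnls

n_el = 1000                                   # elements on [-1, 1], one midpoint each
h = 2.0 / n_el
x = -1.0 + h * (np.arange(n_el) + 0.5)        # candidate points
W_fe = np.full(n_el, h)                       # FE weights

P = 5
A = np.vstack([x**k for k in range(P + 1)])   # (P+1) x n_el monomial samples
b = A @ W_fe                                  # "exact" integrals (FE reference)

w, _ = nnls(A, b)                             # nonnegative least squares
sel = np.flatnonzero(w)
print("selected points :", x[sel])
print("selected weights:", w[sel])            # typically P+1 = 6 nonzero weights
```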
### Goal and methodology
Having described the capabilities and limitations of existing cubature methods in the context of HROMs, we now focus on the actual goal of the present paper, which is to enhance such methods so that they produce rules close to the optimal ones --or at least rules featuring far fewer points than integrand functions. Our proposal in this respect draws inspiration from the _elimination_ algorithm advocated, apparently independently, by Bremer et al. [4] and Xiao et al. [35] in the context of the so-called _Generalized Gaussian Rules_ (see Refs. [23], [27], [26]), which, as its name indicates, is a research discipline that seeks to extend the scope of the quadrature rule originally developed for polynomials by C.F. Gauss. To the best of the authors' knowledge, cross-fertilization between this field and the field of hyperreduction of parameterized finite element models has not yet taken place. This lack of cross-fertilization may be attributed to the fact that the former is fundamentally concerned with parametric families of functions whose analytical expression is known, while the latter concentrates on huge databases of _empirical_ functions (i.e., functions derived from _computational experiments_), whose values are only given at certain points of the spatial domain (the Gauss points of the FE mesh). The present work, thus, appears to be the first attempt to combine ideas from these two related disciplines.
The intuition behind the elimination algorithm presented in Refs. [4; 35] goes as follows. Consider, for instance (the same arguments can be used with either the LP or NIHT approaches), the points and weights provided by the _interpolatory_ DECM rule shown in Figure 1.b. Observe that the distribution of weights is rather irregular, the difference between the largest and smallest weights being more pronounced than in the case of the optimal rule --for instance, the smallest weight is only 5 % of the total length of the domain. This suggests that we may get rid of some of the points in the initial set, on the grounds that, as their contribution to the integration error is not significant (relatively small weights), a slight "readjustment" of the positions and weights of the remaining points may suffice to return the integration error to zero. Since we cannot know a priori how many points in total can be eliminated, this operation must be carried out carefully, removing one point at a time.
#### 1.4.1 Sparsification problem
Although inspired by this elimination scheme, our approach addresses the problem from a different perspective, more in line with the _sparsification_ formulation presented in expression (8), in which the goal is to drive to zero as many weights as possible. To understand how our sparsification scheme works, it proves useful to draw a physical analogy in which the integration points are regarded as _particles_ endowed with _nonnegative masses_ (the weights), and which are subject to nonlinear conservation equations (the integration conditions). At the beginning, the particles have the positions and masses (all positive) determined by one of the interpolatory cubature rules discussed previously in Section 1.3. The goal is to, progressively, drive to zero the mass of as many particles as possible, while keeping the remaining particles within the spatial domain, and with nonnegative masses. To this end, at each step, we reduce the mass of the particle that least contributes to the conserved quantities, and then calculate the position and masses of the remaining particles so that the nonlinear conservation equations are satisfied.
For solving the nonlinear balance equations using standard methods (i.e., Newton's), it is necessary to have a continuous (and differentiable) representation of the integrand functions. In contrast to the cases presented in Refs. [4] and [35], in our case the analytical expressions of such functions are in general not available. To overcome this obstacle, we propose to construct local polynomial interpolatory functions using the values of the integrand functions at the Gauss points of each finite element traversed by the particles.
Another crucial difference of our approach with respect to Refs. [4; 35] is the procedure to solve the nonlinear equations at each step. Due to the underdetermination of such equations, there are an infinite number of possible configurations of the system for the majority of the steps. Both Refs. [35] and [4] use the pseudo-inverse of the Jacobian matrix, a fact that is equivalent to choosing the (non-sparse) \(\ell_{2}\) minimum-norm solution [3] in each iteration. By contrast, here we employ sparse solutions, with as many nonzero entries as functions to be integrated. The rationale for employing this sparse
Figure 1: Location of the points and magnitude of the weights of the integration rules for polynomials of order \(P=5\) in \(\Omega=[-1,1]\) provided by: a) the Linear programming-based strategy (LP); b) the Discrete Empirical Cubature Method (DECM); c) the Non-negative least-squares (NNLS); d) the Normalized iterative hard thresholding (NIHT). Note that in the 4 cases, the number of integration points is equal to the number of functions to be integrated (i.e., monomials), \(m=P+1=6\). The optimal solution is provided by the Gaussian quadrature rule of \(m^{*}=3\) points, also displayed in each of the four graphs.
solution is that, on the one hand, it minimizes the number of particles that move at each iteration and, consequently, diminishes the computational effort of tracking the particles through the mesh; and, on the other hand, it reduces the overall error inherent to the recovery of the integrand functions via interpolation.
It should be stressed that we do not employ a specific strategy for directly enforcing the positiveness of the masses (weights). Rather, we force the constant function to appear in the set of integrand functions; in our physical analogy, this implies that one of the balance equations is the _conservation of mass_. Since the total mass of the system is to be conserved, reducing the mass of one particle leads to an increase in the overall mass of the remaining particles, and this tends to ensure that their masses remain positive. On the other hand, when a particle attempts to leave the domain, we return it back to its previous position, and proceed with the solution scheme. If convergence is not achieved, or the constraints are massively violated, we simply abandon our attempt to reduce the weight of the currently controlled particle, and move to the next particle in the list. The process terminates --hopefully at the optimum-- when we have tried to make zero the masses of all particles.
We choose as initial interpolatory rule --over the other methods discussed in Figure 1-- the Discrete Empirical Cubature Method, DECM. The reason for this choice is twofold. Firstly, we have empirically found that the DECM gives points that tend to be close to the optimal ones --for instance, in Figure 1.b two of the points calculated by the DECM practically coincide with the optimal Gauss points \(x_{1}=\sqrt{3/5}\) and \(x_{2}=0\). Secondly, the DECM does not operate directly on the sampling matrix \(\mathbf{A}_{FE}\) defined in Eq.(4), but rather on an orthogonal basis matrix for its column space [16]. As a consequence, the cubature problem translates into one of integrating orthogonal basis functions, and this property greatly facilitates the convergence of the nonlinear problem alluded to earlier. The combination of the DECM followed by the continuous search process will be referred to hereafter as the _Continuous Empirical Cubature Method_ (CECM).
### Sequential randomized SVD (SRSVD)
We use the ubiquitous Singular Value Decomposition (SVD) to determine the orthogonal basis matrix for the column space of \(\mathbf{A}_{FE}\) required by the DECM. Computationally speaking, the SVD is by far the most memory-intensive operation of the entire cubature algorithm. For instance, in a parametric function of dimension \(n=10\), with \(P=100\) parametric samples and a mesh of \(N_{el}=10^{6}\) linear hexahedra elements featuring \(n_{g}=8\) Gauss points each, matrix \(\mathbf{A}_{FE}\) occupies 64 Gbytes of RAM memory. To overcome such potential memory bottlenecks, we have devised a scheme for computing the SVD in which the matrix is provided in a column-partitioned format, with the submatrices being processed one at a time. In contrast to other partitioned schemes, such as the one proposed by the first author in Ref. [17] or the partitioned Proper Orthogonal Decomposition of Ref. [34], which compute the SVD of the entire matrix from the individual SVDs of each submatrix, our scheme addresses the problem in an incremental, sequential fashion: at each increment, the current basis matrix (for the column space of \(\mathbf{A}_{FE}\)) is enriched with the left singular vectors coming from the SVD of the _orthogonal complement_. The advantage of this sequential approach over the concurrent approaches in Refs. [34; 17] is that it exploits the existence of linear correlations across the blocks. For instance, in a case in which all submatrices are full rank and, besides, linear combinations of the first submatrix (this may happen when analyzing periodic functions), our sequential approach would require performing a single SVD --that of the first matrix. By contrast, the concurrent approaches in Refs. [34; 17] would not only need to calculate the SVD of all the submatrices, but would not provide any benefit at all in terms of computer memory (in fact the partitioned scheme would end up being more costly than the standard one-block implementation). Lastly, to accelerate the performance of each SVD on the orthogonal complement of the submatrices, we employ a modified version of the randomized blocked SVD proposed by Martinsson et al. [25], using as prediction for the rank of a given submatrix that of the previous submatrix in the sequence.
### Organization of the paper
The paper is organized as follows. The determination of the orthogonal basis functions and their gradients by using the SVD of the sampling matrix is discussed in Section 2. Although an original contribution of the present work, we have relegated the description of the Sequential Randomized SVD (SRSVD) algorithm to Appendix A (in order not to interrupt the continuity of the presentation of the cubature algorithm, which constitutes the primary focus of this paper). On the other hand, the computation of the interpolatory cubature rule by the Discrete Empirical Cubature Method, DECM, is presented in Section 3, and the solution of the continuous sparsification problem in Sections 4 and 5. Except for the DECM, which can be found in the original reference [16], we provide the pseudo-codes of all the algorithms involved in both the cubature and the SRSVD. Likewise, we have summarized all the implementation steps in Box 5.1 of Section 5.3. The logic of the proposed methodology can be followed, without the finer details, from the information in this Box.
Sections 6.1 and 6.2 are devoted to the numerical validation by comparison with the (optimal) quadrature and cubature rules of univariate and multivariate Lagrange polynomials. The example presented in Section 6.3, on the other hand, is intended to illustrate the performance of the method in scenarios where the proposed SRSVD becomes essential --because
the integrand matrix exhausts the memory capabilities of the computer at hand. Finally, the application of the proposed CECM to the hyperreduction of a multiscale finite element model is explained in Section 6.4.
## 2 Orthogonal basis for the integrand
### Basis matrix via SVD
As pointed out in the foregoing, our cubature method does not operate directly on the integrand sampling matrix \(\mathbf{A}_{FE}\), defined in Eq. (4), but on a basis matrix for its column space, denoted henceforth by \(\mathbf{U}\in\mathbb{R}^{M\times p}\). Since \(\mathbf{U}\) will be a linear combination of the columns of \(\mathbf{A}_{FE}\), which are in turn the discrete representation of the scalar integrand functions we wish to integrate, it follows that the columns of \(\mathbf{U}\) themselves will be the discrete representations of basis functions for such integrand functions. These basis functions will be denoted hereafter by \(u_{i}:\Omega\to\mathbb{R}\) (\(i=1,2\ldots p\)). In analogy to Eq.(4), we can write \(\mathbf{U}\) in terms of such basis functions as
\[\mathbf{U}=\begin{bmatrix}\mathbf{U}^{(1)}\\ \mathbf{U}^{(2)}\\ \vdots\\ \mathbf{U}^{(N_{el})}\end{bmatrix}_{M\times p}\quad\text{ where }\quad\quad\mathbf{U}^{(e)}:= \begin{bmatrix}\mathbf{u}(\mathbf{\bar{x}}_{1}^{e})\\ \mathbf{u}(\mathbf{\bar{x}}_{2}^{e})\\ \vdots\\ \mathbf{u}(\mathbf{\bar{x}}_{r}^{e})\end{bmatrix}_{r\times p}\quad. \tag{9}\]
while
\[\mathbf{u}(\mathbf{x}):=\begin{bmatrix}u_{1}(\mathbf{x})&u_{2}(\mathbf{x})&\cdots&u_{p}(\mathbf{ x})\end{bmatrix}_{1\times p}. \tag{10}\]
We shall require these basis functions to be \(L_{2}(\Omega)\)-orthogonal, i.e.:
\[\int_{\Omega}\,u_{i}u_{j}\ d\Omega=\delta_{ij},\quad\ i,j=1,2\ldots p, \tag{11}\]
\(\delta_{ij}\) being the Kronecker delta. By evaluating the above integral using the FE-Gauss rule (as done in Eq 2), we get:
\[\int_{\Omega}\,u_{i}u_{j}\ d\Omega=\sum_{e=1}^{N_{el}}\sum_{g=1}^{r}u_{i}(\mathbf{ \bar{x}}_{g}^{e})W_{g}^{e}u_{j}(\mathbf{\bar{x}}_{g}^{e})=\mathbf{U}_{i}^{T}\operatorname {diag}(\mathbf{W}_{FE})\mathbf{U}_{j},\quad\ i,j=1,2\ldots p. \tag{12}\]
In the preceding equation, \(\mathbf{U}_{i}\) and \(\mathbf{U}_{j}\) represents the \(i\)-th and \(j\)-th columns of \(\mathbf{U}\), while \(\operatorname{diag}(\mathbf{W}_{FE})\) stands for a diagonal matrix containing the entries of the vector of FE weights \(\mathbf{W}_{FE}\) ( defined in Eq. 5). The above condition can be cast in a compact form as
\[\mathbf{U}^{T}\operatorname{diag}(\mathbf{W}_{FE})\mathbf{U}=\mathbf{I}, \tag{13}\]
\(\mathbf{I}\) being the \(p\times p\) identity matrix. The preceding equation reveals that orthogonality in the \(L_{2}(\Omega)\) sense for the basis functions \(u_{i}\) translates into orthogonality for the columns of \(\mathbf{U}\) in the sense defined by the following inner product
\[\langle\mathbf{v}_{1},\mathbf{v}_{2}\rangle_{W}:=\mathbf{v}_{1}^{T}\operatorname{diag}( \mathbf{W}_{FE})\mathbf{v}_{2} \tag{14}\]
\((\mathbf{v}_{1},\mathbf{v}2\in\mathbb{R}^{M})\).
In order to determine \(\mathbf{U}\) from \(\mathbf{A}_{FE}\), we compute first the (truncated) Singular Value Decomposition of the _weighted_ matrix defined by
\[\mathbf{\bar{A}}:=\operatorname{diag}(\sqrt{\mathbf{W}_{FE}})\mathbf{A}_{FE} \tag{15}\]
that is:
\[\mathbf{\bar{A}}=\mathbf{\bar{U}}\mathbf{SV}^{T}+\mathbf{\bar{E}}, \tag{16}\]
symbolized in what follows as the operation:
\[[\mathbf{\bar{U}},\mathbf{S},\mathbf{V}]=\texttt{SVD}(\mathbf{\bar{A}},\epsilon_{svd}). \tag{17}\]
Here, \(\mathbf{\bar{U}}\in\mathbb{R}^{M\times p}\), \(\mathbf{S}\in\mathbb{R}^{p\times p}\) and \(\mathbf{V}\in\mathbb{R}^{nP\times p}\) are the matrices of left-singular vectors, singular values and right-singular vectors, respectively. The matrix of singular values is diagonal, with \(S_{11}\geq S_{22}\geq\cdots\geq S_{pp}>0\), while the matrices of left-singular and right-singular vectors obey the orthogonality conditions
\[\mathbf{\bar{U}}^{T}\mathbf{\bar{U}}=\mathbf{I},\quad\quad\mathbf{V}^{T}\mathbf{V}=\mathbf{I}. \tag{18}\]
Matrix \(\bar{\mathbf{E}}\) in Eq.(16), on the other hand, represents the truncation term, which is controlled by a user-specified tolerance \(0\leq\epsilon_{svd}\leq 1\) such that
\[\|\bar{\mathbf{E}}\|_{F}\leq\epsilon_{svd}\|\bar{\mathbf{A}}\|_{F} \tag{19}\]
(here \(\|\bullet\|_{F}\) denotes the Frobenius norm). The desired basis matrix \(\mathbf{U}\) is computed from \(\bar{\mathbf{U}}\) as
\[\mathbf{U}=\,\mathrm{diag}(\sqrt{\mathbf{W}_{FE}})^{-1}\bar{\mathbf{U}}. \tag{20}\]
It can be readily seen that, in doing so, the \(\mathbf{W}_{FE}\)-orthogonality condition defined in Eq.(13) is satisfied. Pre-multiplication of both sides of Eq.(16) by \(\mathrm{diag}(\sqrt{\mathbf{W}_{FE}})^{-1}\) allows us to write
\[\mathbf{A}_{FE}=\mathbf{U}\mathbf{S}\mathbf{V}^{T}+\mathbf{E}, \tag{21}\]
where
\[\mathbf{E}:=\,\mathrm{diag}(\sqrt{\mathbf{W}_{FE}})^{-1}\bar{\mathbf{E}}. \tag{22}\]
Notice that, by virtue of the definition of Frobenius norm, and by using the preceding expression, we have that
\[\|\bar{\mathbf{E}}\|_{F}^{2}=\mathrm{tr}\,(\bar{\mathbf{E}}^{T}\bar{\mathbf{E}})=\mathrm{ tr}\,(\mathbf{E}^{T}\,\mathrm{diag}(\mathbf{W}_{FE})\mathbf{E})=\|\mathbf{E}\|_{W}^{2}, \tag{23}\]
where \(\mathrm{tr}\,(\bullet)\) stands for the trace operator, and \(\|\bullet\|_{W}\) designates the norm induced by the inner product introduced in Eq. (14). Since the same reasoning can be applied to \(\|\bar{\mathbf{A}}\|_{F}\), we can alternatively write the truncation condition (19) as
\[\|\mathbf{E}\|_{W}\leq\epsilon_{svd}\|\mathbf{A}_{FE}\|_{W}. \tag{24}\]
**Remark 2.1**.: _When \(\mathbf{A}_{FE}\) is too large to be processed as a single matrix, we shall use, rather than the standard SVD (17), the sequential randomized SVD alluded to in the introductory section (1.5):_
\[[\bar{\mathbf{U}},\mathbf{S},\mathbf{V}]=\texttt{{SRSVD}}([\bar{\mathbf{A}}_{1},\bar{\mathbf{A} }_{2},\ldots\bar{\mathbf{A}}_{s}],\epsilon_{svd}) \tag{25}\]
_(here \([\bar{\mathbf{A}}_{1},\bar{\mathbf{A}}_{2},\ldots\bar{\mathbf{A}}_{s}]\) stands for a partition of the weighted matrix \(\bar{\mathbf{A}}\) ). The implementation details are provided in Algorithm 6 of Appendix A._
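When \(\mathbf{A}_{FE}\) does fit in memory, the construction of \(\mathbf{U}\) described by Eqs. (15)-(20) reduces to a few lines; the sketch below is a minimal NumPy illustration (not the SRSVD of Appendix A) that also applies the truncation criterion of Eq. (19):

```python
import numpy as np

def weighted_svd_basis(A_fe, W_fe, eps_svd=1e-6):
    """W-orthonormal basis U for the column space of A_fe (Eqs. 13-20)."""
    sqrt_w = np.sqrt(W_fe)
    A_bar = sqrt_w[:, None] * A_fe                        # Eq. (15)
    U_bar, s, Vt = np.linalg.svd(A_bar, full_matrices=False)
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]         # tail[i] = ||s[i:]||_2
    below = np.flatnonzero(tail <= eps_svd * np.linalg.norm(s))
    p = max(1, below[0]) if below.size else s.size        # truncation rank, Eq. (19)
    U = U_bar[:, :p] / sqrt_w[:, None]                    # Eq. (20)
    return U, s[:p], Vt[:p].T                             # U, singular values, V
```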
### Constant function
We argued in Section 1.4 that the efficiency of the proposed search algorithm relies on one fundamental requirement: the volume of the domain is to be exactly integrated --i.e., the sum of the cubature weights must be equal to the volume of the domain \(V=\int_{\Omega}\ d\Omega\). If the integrand functions are provided as a collection of analytical expressions, this can be achieved by incorporating a constant function in such a collection, with the proviso that the value for the constant should be sufficiently high so that the SVD regards the function as representative within the sample.
The same reasoning applies when the only data available is the empirical matrix \(\mathbf{A}_{FE}\): in this case, we may make \(\mathbf{A}_{FE}\leftarrow[\mathbf{A}_{FE},c\mathbf{1}]\), where \(\mathbf{1}\) is an all-ones vector and \(c\in\mathbb{R}\) the aforementioned constant. Alternatively, to make the procedure less contingent upon the employed constant \(c\), we may expand, rather than the original matrix \(\mathbf{A}_{FE}\), the basis matrix \(\mathbf{U}\) itself. To preserve column-wise orthogonality, we proceed by first computing the component of the all-ones vector orthogonal to the column space of \(\mathbf{U}\) (with respect to the inner product (14) ):
\[\mathbf{v}=\mathbf{1}-\mathbf{U}\mathbf{U}^{T}\,\mathrm{diag}(\mathbf{W}_{FE})\mathbf{1}=\mathbf{1}-\mathbf{ U}\mathbf{U}^{T}\mathbf{W}_{FE}. \tag{26}\]
If \(\|\mathbf{v}\|_{2}\approx 0\), then no further operation is needed (the column space of \(\mathbf{U}\) already contains the all-ones vector); otherwise, we set \(\mathbf{v}\leftarrow\mathbf{v}/\|\mathbf{v}\|_{2}\), and expand \(\mathbf{U}\) as \(\mathbf{U}\leftarrow[\mathbf{v},\mathbf{U}]\).
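A minimal sketch of this expansion step, mirroring Eq. (26) and the normalization used in the text, could read:

```python
import numpy as np

def expand_with_constant(U, W_fe, tol=1e-10):
    """Ensure the all-ones vector belongs to the column space of U (Eq. 26)."""
    ones = np.ones(U.shape[0])
    v = ones - U @ (U.T @ W_fe)      # component of 1 W-orthogonal to range(U)
    if np.linalg.norm(v) < tol:
        return U                     # constant function already representable
    v = v / np.linalg.norm(v)        # normalization as described in the text
    return np.column_stack([v, U])   # U <- [v, U]
```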
**Lemma 2.1**.: _If the column space of the basis matrix \(\mathbf{U}\) contains the all-ones (constant) vector, then_
\[\mathbf{E}^{T}\mathbf{W}_{FE}=\mathbf{0}, \tag{27}\]
_that is, the integrals of the functions whose discrete representations form the truncation matrix \(\mathbf{E}\) in the SVD (21) are all zero._
_Proof._ By construction, the truncation term \(\mathbf{E}\) admits also a decomposition of the form \(\mathbf{E}=\mathbf{U}_{\perp}\mathbf{S}_{\perp}\mathbf{V}_{\perp}^{T}\), where \(\langle\mathbf{U}_{\perp},\mathbf{U}_{\perp}\rangle_{W}=\mathbf{I}\), \(\langle\mathbf{U}_{\perp},\mathbf{U}\rangle_{W}=\mathbf{0}\) and \(\mathbf{V}_{\perp}^{T}\mathbf{V}=\mathbf{I}\). Thus, replacing this decomposition into Eq.(27), we arrive at
\[\mathbf{E}^{T}\mathbf{W}_{FE}=\mathbf{V}_{\perp}\mathbf{S}_{\perp}(\mathbf{U}_{\perp}^{T}\mathbf{W}_{ FE})=\mathbf{0}. \tag{28}\]
The proof boils down thus to demonstrate that \(\mathbf{U}_{\perp}^{T}\mathbf{W}_{FE}=\mathbf{0}\). This follows easily from the condition that \(\langle\mathbf{U}_{\perp},\mathbf{U}\rangle_{W}=\mathbf{0}\). Indeed, since the all-ones vector pertains to the column space of \(\mathbf{U}\), the matrix of trailing modes \(\mathbf{U}_{\perp}\) is also orthogonal to the all-ones vector, hence
\[\langle\mathbf{U}_{\perp},\mathbf{1}\rangle_{W}=\mathbf{0}\ \ \Rightarrow\ \mathbf{U}_{\perp}^{T}\, \mathrm{diag}(\mathbf{W}_{FE})\mathbf{1}=\mathbf{U}_{\perp}^{T}\mathbf{W}_{FE}=\mathbf{0}. \tag{29}\]
### Evaluation of basis functions
During the weight-reduction process, it is necessary to repeatedly evaluate the basis functions, as well as their spatial gradients, at points that, in general, differ from the Gauss points of the mesh.
#### 2.3.1 Integrand given as analytical expression
If the analytical expressions for the integrand functions \(\mathbf{A}(\mathbf{x})\) and their spatial derivatives \(\dfrac{\partial\mathbf{A}(\mathbf{x})}{\partial x_{i}}\) (\(i=1,2\ldots d\)) are available, these evaluations can be readily performed by using the singular values and right-singular vectors of decomposition (21) as follows:
\[\mathbf{u}(\mathbf{x})=\mathbf{A}(\mathbf{x})\mathbf{V}\mathbf{S}^{-1}, \tag{30}\]
and
\[\dfrac{\partial\mathbf{u}(\mathbf{x})}{\partial x_{i}}=\dfrac{\partial\mathbf{A}(\mathbf{x}) }{\partial x_{i}}\mathbf{V}\mathbf{S}^{-1},\ \ \ \ i=1\ldots d. \tag{31}\]
_Proof_. Post-multiplication of both sides of Eq.(21) by \(\mathbf{V}\) leads to
\[\mathbf{A}_{FE}\mathbf{V}=\mathbf{U}\mathbf{S}\mathbf{V}^{T}\mathbf{V}+\mathbf{E}\mathbf{V}=\mathbf{U}\mathbf{S}\mathbf{V} ^{T}\mathbf{V}+\mathbf{U}_{\perp}\mathbf{S}_{\perp}\mathbf{V}_{\perp}^{T}\mathbf{V}, \tag{32}\]
where we have used the matrices introduced in the proof of Lemma (2.1). By virtue of the orthogonality conditions \(\mathbf{V}^{T}\mathbf{V}=\mathbf{I}\) and \(\mathbf{V}_{\perp}^{T}\mathbf{V}=\mathbf{0}\), the above equation becomes \(\mathbf{A}_{FE}\mathbf{V}=\mathbf{U}\mathbf{S}\); postmultiplication by \(\mathbf{S}^{-1}\) finally leads to \(\mathbf{U}=\mathbf{A}_{FE}\mathbf{V}\mathbf{S}^{-1}\). This equation holds, not only for \(\mathbf{A}_{FE}=\mathbf{A}(\mathbf{X}_{FE})\), but for any \(\mathbf{A}(\mathbf{x})\), as stated in Eq.(30).
\(\square\)
**Remark 2.2**: _Eq.(31) indicates that the gradient of the \(j\)-th basis function depends inversely on the \(j\)-th singular value. Negligible singular values, thus, may give rise to inordinately high gradients, causing convergence issues during the nonlinear readjustment problem. To avoid these numerical issues, the SVD truncation threshold \(\epsilon_{svd}\) (see Expression 24) should be set to a sufficiently large value (typically \(\epsilon_{svd}\geq 10^{-6}\))._
#### 2.3.2 Interpolation using Gauss points
In general, however, the analytical expression for the integrand functions are not available, and therefore, the preceding equations cannot be employed for retrieving the values of the orthogonal basis functions. This is the case encountered when dealing with FE-based reduced-order models, where the only information we have at our disposal is the value of the basis functions at the Gauss points of the finite elements, represented by submatrices \(\mathbf{U}^{(e)}\in\mathbb{R}^{r\times p}\) (\(e=1,2\ldots N_{el}\)) in Eq. (9).
In a FE-based reduced-order model, at element level, the integrand functions are, in general, a nonlinear function of the employed nodal shape functions. It appears reasonable, thus, to also use polynomial interpolatory functions to estimate the values of the basis functions at other points of the element, employing as interpolation points the Gauss points of the element rather than its nodes. If we denote by \(\mathbf{N}^{(e)}:\Omega^{e}\rightarrow\mathbb{R}^{1\times r}\) the \(r\) interpolatory functions (arranged as a row matrix), then we can write
\[\mathbf{u}(\mathbf{x})=\mathbf{N}^{(e)}(\mathbf{x})\mathbf{U}^{(e)},\ \ \ \ \ \ \ \ \mathbf{x}\in\Omega^{e}. \tag{33}\]
Likewise, the spatial derivatives can be determined as
\[\dfrac{\partial\mathbf{u}(\mathbf{x})}{\partial x_{i}}=\mathbf{B}_{i}^{(e)}(\mathbf{x})\mathbf{U} ^{(e)},\ \ \ \ \ \ \ \ \ \mathbf{x}\in\Omega^{e},\ \ i=1\ldots d \tag{34}\]
where
\[\mathbf{B}_{i}^{(e)}:=\dfrac{\partial\mathbf{N}^{(e)}}{\partial x_{i}},\ \ i=1\ldots d. \tag{35}\]
The level of accuracy of this estimation will depend on the number of Gauss points per element with respect to the order of the nodal shape functions, as well as the distortion of the physical domain \(\Omega^{e}\) with respect to the parent domain --which is the cause of the aforementioned nonlinearity. It may be argued that if the element has no distortion, the evaluation of the integrand via Eq.(33) will be exact if the proper number of Gauss points is used. For instance, in a small-strains structural problem, if the element is a 4-noded bilinear quadrilateral, with no distortion (i.e., a rectangle), and the term to be integrated is the virtual internal work, then the integrand is represented exactly by a quadratic2
polynomial. Such a polynomial possesses \((2+1)^{2}=9\) monomials, and therefore a \(3\times 3\) Gauss rule would be needed. Notice that this is an element integration rule with one point more per spatial direction than the standard integration rule for bilinear quadrilateral elements \((2\times 2)\).
The expression for \(\mathbf{N}^{(e)}\) and \(\mathbf{B}^{(e)}_{i}\) in terms of the coordinates of the Gauss points \(\{\mathbf{\bar{x}}^{e}_{1},\mathbf{\bar{x}}^{e}_{2}\ldots\mathbf{\bar{x}}^{e}_{r}\}\) can be obtained by the standard procedure used in deriving FE shape functions, see e.g. Ref. [22]. Firstly, we introduce the mapping \(\mathbf{\varphi}^{e}:\Omega^{e}\rightarrow\Omega^{e^{\prime}}\) defined by
\[[\mathbf{x}^{\prime}]_{i}=\frac{1}{L_{i}}[\mathbf{x}-\mathbf{\bar{x}}^{e}_{0}]_{i},\ \ \ \ \ \ \ \ i=1\ldots d \tag{36}\]
where \([\bullet]_{i}\) symbolizes the \(i\)-th component of the argument, \(\mathbf{\bar{x}}^{e}_{0}=(\sum_{g=1}^{r}\mathbf{\bar{x}}^{e}_{g})/r\) is the centroid of the Gauss points, and \(L_{i}\) is a scaling length defined by
\[L_{i}=\max_{g=1\ldots r}(|[\mathbf{\bar{x}}^{e}_{g}-\mathbf{\bar{x}}^{e}_{0}]_{i}|), \ \ \ \ \ \ \ \ i=1\ldots d. \tag{37}\]
The expression of the shape functions in terms of the scaled positions of the Gauss points \(\mathbf{\bar{X}}^{\prime}=\{\mathbf{\bar{x}}^{\prime}_{1},\mathbf{\bar{x}}^{\prime}_{2} \ldots\mathbf{\bar{x}}^{\prime}_{r}\}\), where \(\mathbf{\bar{x}}^{\prime}_{g}=\mathbf{\varphi}^{e}(\mathbf{\bar{x}}^{e}_{g})\), is given by
\[\mathbf{N}^{(e)}(\mathbf{x}^{\prime})=\mathbf{P}(\mathbf{x}^{\prime})\mathbf{P}^{-1}(\mathbf{\bar{X}}^ {\prime}). \tag{38}\]
Here, \(\mathbf{P}(\mathbf{x}^{\prime})\in\mathbb{R}^{1\times r}\) is the row matrix containing the monomials up to the order corresponding to the number and distribution of Gauss points at point \(\mathbf{x}^{\prime}=\mathbf{\varphi}^{e}(\mathbf{x})\); for instance, for the case of a 2D \(q\times q\) rule, where \(r=q^{2}\), this row matrix adopts the form
\[\mathbf{P}(\mathbf{x}^{\prime}):=[x^{\prime 0}_{1}x^{\prime 0}_{2},\;x^{\prime 1}_{1}x^{\prime 0}_{2},\;\cdots,\;x^{\prime i}_{1}x^{\prime j}_{2},\;\cdots,\;x^{\prime q-1}_{1}x^{\prime q-1}_{2}]_{1\times r}. \tag{39}\]
The other matrix appearing in Eq.(38), \(\mathbf{P}(\mathbf{\bar{X}}^{\prime})\in\mathbb{R}^{r\times r}\), known as the _moment matrix_[22], is formed by stacking the result of applying the preceding mapping to the set of scaled Gauss points \(\mathbf{\bar{X}}^{\prime}\). Provided that the element is not overly distorted (no negative Jacobians in the original isoparametric transformation), the invertibility of \(\mathbf{P}(\mathbf{\bar{X}}^{\prime})\) is guaranteed thanks to the coordinate transformation Eq.(36) --which ensures that the coordinates of all points range between -1 and 1, therefore avoiding scaling issues in the inversion.
As for the gradient of the shape functions in Eq.(35), by applying the chain rule, we get that
\[\mathbf{B}^{(e)}_{i}=\frac{\partial\mathbf{P}(\mathbf{x}^{\prime})}{\partial x^{\prime}{}_ {j}}\frac{\partial x^{\prime}{}_{j}}{\partial x_{i}}\mathbf{P}^{-1}(\mathbf{\bar{X}}^{ \prime})=\frac{1}{L_{i}}\frac{\partial\mathbf{P}(\mathbf{x}^{\prime})}{\partial x^{ \prime}{}_{i}}\mathbf{P}^{-1}(\mathbf{\bar{X}}^{\prime}). \tag{40}\]
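As an illustration of Eqs. (36)-(39), the following sketch (assuming a 2D element with a tensor-product \(q\times q\) Gauss layout; the monomial ordering is immaterial as long as it is used consistently) evaluates the interpolation functions \(\mathbf{N}^{(e)}\) at an arbitrary point of the element:

```python
import numpy as np
from itertools import product

def gauss_point_shape_functions(x, Xg):
    """Interpolation functions N(x) built on the r = q*q Gauss points Xg of one
    2D element (Eqs. 36-39); Xg is an (r, 2) array, x a point inside the element.
    The basis values then follow as u(x) = N @ U_e  (Eq. 33)."""
    r = Xg.shape[0]
    q = int(round(np.sqrt(r)))
    x0 = Xg.mean(axis=0)                           # centroid of the Gauss points
    L = np.abs(Xg - x0).max(axis=0)                # scaling lengths, Eq. (37)
    Xs = (Xg - x0) / L                             # scaled Gauss points, Eq. (36)
    xs = (np.asarray(x) - x0) / L

    def monomials(pt):                             # row matrix P(x'), Eq. (39)
        return np.array([pt[0] ** i * pt[1] ** j
                         for i, j in product(range(q), range(q))])

    M = np.array([monomials(p) for p in Xs])       # moment matrix P(Xbar')
    return np.linalg.solve(M.T, monomials(xs))     # N = P(x') P(Xbar')^{-1}
```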
## 3 Discrete Empirical Cubature Method (DECM)
Once the orthogonal basis matrix \(\mathbf{U}\) has been computed by the weighted SVD outlined in Section 2.1, the next step consists in determining an _interpolatory cubature rule_ (featuring as many points as functions to be integrated) for the basis functions \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^{1\times p}\). As pointed out in Section 1.4, we employ for this purpose the Empirical Cubature Method, proposed by the first author in Ref. [17], and further refined in Ref. [16]. We call it here _Discrete_ Empirical Cubature Method, DECM, to emphasize that the cubature points are selected among the Gauss points of the mesh. The DECM, symbolized in what follows as the operation
\[[\mathbf{z},\mathbf{w}^{*}]\leftarrow\text{DECM}(\mathbf{U},\mathbf{W}_{FE}), \tag{41}\]
takes as inputs the basis matrix \(\mathbf{U}\in\mathbb{R}^{M\times p}\) and the vector of positive FE weights \(\mathbf{W}_{FE}\in\mathbb{R}^{M}\); and returns a set of \(p\) indexes \(\mathbf{z}\subset\{1,2\ldots M\}\) and a vector of _positive_ weights \(\mathbf{w}^{*}\) such that
\[\mathbf{U}(\mathbf{z},:)^{T}\mathbf{w}^{*}=\mathbf{b}. \tag{42}\]
Here, \(\mathbf{U}(\mathbf{z},:)\) denotes, in the so-called "colon" notation [13] (the one used by Matlab), the submatrix of \(\mathbf{U}\) formed by the rows corresponding to indexes \(\mathbf{z}\), while \(\mathbf{b}\in\mathbb{R}^{p}\) is the vector of "exact" integrals of the basis functions, that is:
\[\mathbf{b}=\int_{\Omega}\ \mathbf{u}^{T}\ d\Omega=\mathbf{U}^{T}\mathbf{W}_{FE}. \tag{43}\]
The points associated to the selected rows will be denoted hereafter by \(\mathbf{X}^{*}=\{\mathbf{x}^{*}_{1},\mathbf{x}^{*}_{2}\ldots\mathbf{x}^{*}_{p}\}\) (\(\mathbf{X}^{*}\subset\mathbf{X}_{FE}\)), and \(\mathbb{P}_{\mathbf{z}}\) will denote the Boolean selection matrix that extracts the rows indexed by \(\mathbf{z}\), so that \(\mathbb{P}_{\mathbf{z}}\mathbf{U}=\mathbf{U}(\mathbf{z},:)\). Hence, according to the notational convention introduced in Remark 1.1, \(\mathbb{P}_{\mathbf{z}}\mathbf{U}\) may be alternatively expressed as
\[\mathbb{P}_{\mathbf{z}}\mathbf{U}=\mathbf{u}(\mathbf{X}^{*})=\begin{bmatrix}\mathbf{u}(\mathbf{x}^{* }_{1})\\ \mathbf{u}(\mathbf{x}^{*}_{2})\\ \vdots\\ \mathbf{u}(\mathbf{x}^{*}_{p})\end{bmatrix}=\begin{bmatrix}u_{1}(\mathbf{x}^{*}_{1})&u_{2} (\mathbf{x}^{*}_{1})&\cdots&u_{p}(\mathbf{x}^{*}_{1})\\ u_{1}(\mathbf{x}^{*}_{2})&u_{2}(\mathbf{x}^{*}_{2})&\cdots&u_{p}(\mathbf{x}^{*}_{2})\\ \vdots&\vdots&\ddots&\vdots\\ u_{1}(\mathbf{x}^{*}_{p})&u_{2}(\mathbf{x}^{*}_{p})&\cdots&u_{p}(\mathbf{x}^{*}_{p})\end{bmatrix}_{ p\times p} \tag{44}\]
**Remark 3.1**.: _It should be stressed that the solution to problem Eq.(42) is not unique. Rather, the number of possible solutions grows combinatorially with the ratio between the total number of Gauss points and the number of functions (\(M/p\)). The situation is illustrated in Figure 2, where we graphically explain how the DECM works for the case of \(M=6\) Gauss points and polynomial functions up to order 1 (it can be readily shown that the orthogonal functions in this case are \(u_{1}=\sqrt{3/2}x\) and \(u_{2}=\sqrt{1/2}\), displayed in Figure 2.a). The problem, thus, boils down to select \(p=2\) points out of \(M=6\), such that the resulting weights are positive. In Figure 2.b, we plot each \(\mathbf{u}(\mathbf{\bar{x}}_{g})\)\((g=1,2\dots 6)\) along with the vector of exact integrals, which in this case is equal to \(\mathbf{b}=[0,\sqrt{2}]^{T}\). It follows from this representation that, out of the \(\binom{M}{p}=\binom{6}{2}=15\) possible combinations, only 9 pairs are valid solutions. The DECM3 chooses \(\mathbf{x}_{1}^{*}=\mathbf{\bar{x}}_{4}\) and \(\mathbf{x}_{2}^{*}=\mathbf{\bar{x}}_{1}\), which is the solution that yields the largest ratio between highest and lowest weight. Other possible solutions are, for instance, pairs \(\{\mathbf{\bar{x}}_{1},\mathbf{\bar{x}}_{6}\}\) and \(\{\mathbf{\bar{x}}_{2},\mathbf{\bar{x}}_{6}\}\) --observe that in both cases, vector \(\mathbf{b}\) lies in the cone4 "positively spanned" by \(\{\mathbf{u}(\mathbf{\bar{x}}_{1}),\mathbf{u}(\mathbf{\bar{x}}_{6})\}\) and \(\{\mathbf{u}(\mathbf{\bar{x}}_{2}),\mathbf{u}(\mathbf{\bar{x}}_{6})\}\), respectively._
Footnote 3: The first vector \(\mathbf{u}(\mathbf{\bar{x}}_{4})\) is chosen because it is the one which is most positively parallel to \(\mathbf{b}\) (notice that, because of symmetry, \(\mathbf{u}(\mathbf{\bar{x}}_{3})\) might have been chosen as well). The algorithm then orthogonally projects \(\mathbf{b}\) onto \(\mathbf{u}(\mathbf{\bar{x}}_{4})\), giving \(\mathbf{s}\), and searches for the vector which is most positively parallel to the residual \(\mathbf{b}-\mathbf{s}\), which in this case is \(\mathbf{u}(\mathbf{\bar{x}}_{1})\).
Footnote 4: The cone positively spanned by a set of vectors is the set of all possible positive linear combinations of such vectors [8].
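As a quick numerical check of Remark 3.1, the following sketch enumerates all \(\binom{6}{2}=15\) candidate pairs (assuming, for concreteness, six equally sized elements on \([-1,1]\) with midpoint Gauss points, which is one layout consistent with the description of Figure 2) and counts how many of them yield positive weights:

```python
import numpy as np
from itertools import combinations

x = -1.0 + (np.arange(6) + 0.5) * (2.0 / 6.0)              # assumed Gauss-point layout
u = lambda t: np.array([np.sqrt(1.5) * t, np.sqrt(0.5)])   # u1, u2 of Remark 3.1
b = np.array([0.0, np.sqrt(2.0)])                          # exact integrals of u1, u2

valid = []
for i, j in combinations(range(6), 2):
    Uz = np.column_stack([u(x[i]), u(x[j])])   # 2x2 system: w_i u(x_i) + w_j u(x_j) = b
    w = np.linalg.solve(Uz, b)
    if np.all(w > 0):
        valid.append((i + 1, j + 1))
print(len(valid), "pairs yield positive weights")   # 9, as stated in the Remark
```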
The reader interested in the points selection algorithm behind the DECM is referred5 to Ref. [16], Appendix A, Algorithm 7.
Footnote 5: It should be pointed out that the notation employed in Ref. [16] is different from the one used here. The input of the DECM in Ref. [16] (which is called therein simply the _ECM_) is not \(\mathbf{U}\), but the transpose of the weighted matrix \(\mathbf{\bar{U}}\) (defined in Eq. 16). Likewise, the error threshold appearing in Ref. [16] is to be set to zero in order to produce an interpolatory rule.
### Relation between SVD truncation error and the DECM integration error
Let us examine now the error incurred in approximating the "exact" integrals \(\mathbf{b}_{FE}=\mathbf{A}_{FE}^{T}\mathbf{W}_{FE}\) by the DECM cubature rule. This error may be expressed as
\[e_{decm}:=\|\mathbf{A}_{FE}^{T}\mathbf{W}^{*}-\mathbf{A}_{FE}^{T}\mathbf{W}_{FE}\|_{2} \tag{45}\]
where \(\mathbf{W}^{*}:=\mathbb{P}_{\mathbf{z}}^{T}\mathbf{w}^{*}\) (a vector of the same length as \(\mathbf{W}_{FE}\), but with nonzero entries only at the indexes specified by \(\mathbf{z}\)). Inserting decomposition (21) in the preceding equation, we get
\[e_{decm}=\|\left(\mathbf{V}\mathbf{S}(\mathbf{U}^{T}\mathbf{W}^{*}-\mathbf{U}^{T}\mathbf{W}_{FE}) \right)+(\mathbf{E}^{T}\mathbf{W}^{*}-\mathbf{E}^{T}\mathbf{W}_{FE})\|_{2} \tag{46}\]
From condition (42), it follows that the term involving the basis matrix \(\mathbf{U}\) vanishes; besides, since by construction the column space of \(\mathbf{U}\) contains the all-ones vector, we have that, by virtue of Lemma (2.1), \(\mathbf{E}^{T}\mathbf{W}_{FE}=\mathbf{0}\). Thus, Eq.(46) boils down to
\[e_{decm}=\|\mathbf{E}^{T}\mathbf{W}^{*}\|_{2}. \tag{47}\]
The truncation term \(\mathbf{E}\) in the above condition is controlled by the SVD tolerance \(\epsilon_{svd}\) appearing in inequality 24. Thus, the integration error \(e_{decm}\) may be lowered to any desired level by decreasing the SVD tolerance \(\epsilon_{svd}\). Numerical experience shows that, for most problems, \(e_{decm}/\|\mathbf{b}_{FE}\|_{2}\) is slightly above, but of the same order of magnitude as, \(\epsilon_{svd}\).
## 4 Global sparsification problem
### Formulation
We now concentrate our attention on the _sparsification_ problem outlined in Section 1.4.1. The design variables in this optimization problem will be a vector of \(p\) weights
\[\mathbf{w}=[w_{1},w_{2}\ldots w_{p}]^{T},\hskip 28.452756ptw_{g}\geq 0, \tag{48}\]
( recall that \(p\) is the number of orthogonal basis functions we wish to integrate), and the position of the associated points within the domain:
\[\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2}\ldots\mathbf{x}_{p}\},\hskip 28.452756pt\mathbf{x}_{g} \in\Omega. \tag{49}\]
With a minor abuse of notation, we shall also use \(\mathbf{X}\) to denote the variable formed by stacking the position of the points into a column matrix, i.e.: \(\mathbf{X}=[\mathbf{x}_{1}^{T},\mathbf{x}_{2}^{T}\ldots\mathbf{x}_{p}^{T}]^{T}\). On the other hand, we define the _integration residual_ as
\[\mathbf{r}:=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b} \tag{50}\]
that is, as the difference between the approximate and the exact integrals of the basis functions. In the preceding equation, \(\mathbf{u}(\mathbf{X})\) designates the matrix formed by stacking the rows of all \(\mathbf{u}(\mathbf{x}_{g})\in\mathbb{R}^{1\times p}\) (\(g=1,2\ldots p\)) into a single matrix, i.e.:
\[\mathbf{u}(\mathbf{X}):=\begin{bmatrix}\mathbf{u}(\mathbf{x}_{1})\\ \mathbf{u}(\mathbf{x}_{2})\\ \vdots\\ \mathbf{u}(\mathbf{x}_{p})\end{bmatrix}=\begin{bmatrix}u_{1}(\mathbf{x}_{1})&u_{2}(\mathbf{x }_{1})&\cdots&u_{p}(\mathbf{x}_{1})\\ u_{1}(\mathbf{x}_{2})&u_{2}(\mathbf{x}_{2})&\cdots&u_{p}(\mathbf{x}_{2})\\ \vdots&\vdots&\ddots&\vdots\\ u_{1}(\mathbf{x}_{p})&u_{2}(\mathbf{x}_{p})&\cdots&u_{p}(\mathbf{x}_{p})\end{bmatrix}_{ p\times p} \tag{51}\]
With the preceding definitions at hand, the sparsification problem can be formulated as follows:
\[\begin{split}\min_{\mathbf{w},\mathbf{X}}&\|\mathbf{w}\|_{0}\\ \text{s.t.}&\mathbf{r}=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b}=\mathbf{0}\\ &\mathbf{w}\geq\mathbf{0}\\ &\mathbf{X}\subset\Omega\end{split} \tag{52}\]
Recall that \(\|\mathbf{w}\|_{0}\) stands for the number of nonzero entries of \(\mathbf{w}\). Thus, the goal in the preceding optimization problem is to find the _sparsest_ vector of _positive_ weights, along with their associated positions within the domain, that render the _integration residual_\(\mathbf{r}\) equal to zero.
**Remark 4.1**.: _The differences between the sparsification problem presented above and the one described in the introductory section (see Problem 8) are three. Firstly, in the preceding problem, \(\mathbf{w}\in\mathbb{R}^{p}\), i.e., the number of weights is equal to the number of basis functions to be integrated \(p\), whereas in Problem 8, this number is equal to the total number of FE Gauss points \(M\) (it is assumed that \(p<<M\)). Secondly, the integration residual in Problem 52 appears in the form of an equality constraint, while in Problem 8 it appears as an inequality constraint. And thirdly, and most importantly, in Problem 52, the positions of the cubature points are considered design variables --in contrast to the situation encountered in Problem 8, in which the points are forced to coincide with the FE Gauss points, and thus, the only design variables are the weights._
### Proposed sparsification algorithm
The proposed approach for arriving at the solution of the preceding problem is to construct a sequence of \(p\)-points cubature rules
\[\{\mathbf{X}^{0},\mathbf{w}^{0}\},\{\mathbf{X}^{1},\mathbf{w}^{1}\}\ldots\{\mathbf{X}^{i},\mathbf{w}^{i} \}\ldots\{\mathbf{X}^{p-m},\mathbf{w}^{p-m}\} \tag{53}\]
such that
\[\|\mathbf{w}^{k+1}\|_{0}=\|\mathbf{w}^{k}\|_{0}-1 \tag{54}\]
that is, such that each weight vector in the sequence has one non-zero less than the previous one. The first element in the sequence will be taken as the cubature rule provided by the DECM (see Section 3):
\[\mathbf{X}^{0}=\mathbf{X}^{*},\hskip 28.452756pt\mathbf{w}^{0}=\mathbf{w}^{*}. \tag{55}\]
The algorithm proceeds from this initial point by the recursive application of an operation consisting in driving the weight of one single point to zero (while forcing the remaining points and weights to obey the constraints appearing in Problem 52); this step will be symbolized hereafter as the function
\[[\mathcal{C},\mathbf{X}^{new},\mathbf{w}^{new},\mathbf{\mathcal{E}}^{new}]\leftarrow\texttt{MAKE1ZERO}(\mathbf{X}^{old},\mathbf{w}^{old},N_{steps},\mathbf{\mathcal{A}},\mathbf{\mathcal{M}},\mathbf{\mathcal{E}}^{old}). \tag{56}\]
This function takes as inputs a given cubature rule \(\{\mathbf{X}^{old},\mathbf{w}^{old}\}\), and tries to return a cubature rule \(\{\mathbf{X}^{new},\mathbf{w}^{new}\}\) with at least one less nonzero weight. The success of this operation is indicated by the output Boolean variable \(\mathcal{C}\) (\(\mathcal{C}=false\) if it fails in producing a sparser cubature rule).
The other inputs in (56) are the following: 1) \(N_{steps}\): number of steps used to solve the nonlinear problem associated to the residual constraint \(\mathbf{r}=\mathbf{0}\). 2) \(\mathbf{\mathcal{A}}\): Remaining variables controlling the solution of this nonlinear problem (such as the convergence tolerance for the residual). 3) \(\mathbf{\mathcal{M}}\) and \(\mathbf{\mathcal{E}}^{old}\) are data structures containing the variables needed to evaluate the residual \(\mathbf{r}\) at any point \(\mathbf{x}\in\Omega\). \(\mathbf{\mathcal{M}}\) encompasses those variables that do not change during the execution (such as the basis matrix itself \(\mathbf{U}\), the vector of exact integrals \(\mathbf{b}\), see Eq. (43), the nodal coordinates of the FE mesh, the connectivity table, the Gauss coordinates, and, in the case of analytical evaluation, the product \(\mathbf{V}_{s}:=\mathbf{V}\mathbf{S}^{-1}\) appearing in Eqs. 30 and 31). \(\mathbf{\mathcal{E}}^{old}\), on the other hand, comprises element variables that are computed _on demand_6, such as the inverse of the moment matrix in Eq. 38 and the scaling factors in Eq. 36 (required for the interpolation described in Section 2.3.2).
Footnote 6: In the proposed algorithm, we compute the necessary interpolation variables for each element of the mesh dynamically, as they are needed, rather than precomputing them all at once. Indeed, each time the position of the points is updated, we check which elements of the mesh contain the updated points. If all the elements have been previously visited, we use the information stored in \(\mathbf{\mathcal{E}}^{old}\) to perform the interpolation; otherwise, we compute the required interpolation variables for the new elements and update \(\mathbf{\mathcal{E}}^{old}\) into \(\mathbf{\mathcal{E}}^{new}\) with the new data.
Due to its greedy or "myopic" character, the DECM tends to produce weight distributions in which most of the weights are relatively small in comparison with the total volume of the domain. We have empirically observed that the readjustment problem associated to the elimination of these small weights is moderately nonlinear, and in general, one step suffices to ensure convergence. However, as the algorithm advances in the sparsification process, the weights to be zeroed become larger, and, as a consequence, the readjustment problem becomes more nonlinear. In this case, to ensure
convergence, it is necessary to reduce the weights progressively. To account for this fact, we have devised the two-stage procedure described in Algorithm 1. In the first stage (see Line 4), the sparsification process (sketched in turn in Algorithm 2) is carried out by decreasing the weight of each chosen weight in one single step. In the second stage, see Line 5, we take the cubature rule produced in the first stage, and try to further decrease the number of nonzero weights by using a higher number of steps \(N_{steps}>1\).
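The two-stage structure can be summarized with the following minimal sketch (illustrative Python pseudocode, not the implementation of the accompanying repository; the callable `sparsify` is a stand-in for the routine of Algorithm 2 below):

```
# Illustrative sketch of the two-stage driver (Algorithm 1). The callable `sparsify`
# plays the role of Algorithm 2: it repeatedly removes one nonzero weight, zeroing
# the chosen weight in `n` reduction steps, and returns the sparsest feasible rule found.
def sparsify_global(X, w, sparsify, n_steps):
    X, w = sparsify(X, w, n=1)           # stage 1: one-step zeroing (handles the many small weights)
    X, w = sparsify(X, w, n=n_steps)     # stage 2: gradual, multi-step zeroing of the larger weights
    return X, w
```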
```
 1  Function \([\mathbf{X},\mathbf{w},\mathbf{\mathcal{E}}^{new}]\leftarrow\) SPARSIF(\(\mathbf{X}^{old}\), \(\mathbf{w}^{old}\), \(N\), \(\mathbf{\mathcal{A}}\), \(\mathbf{\mathcal{M}}\), \(\mathbf{\mathcal{E}}^{old}\))
    Data: \(\mathbf{X}^{old}\subset\Omega\), \(\mathbf{w}^{old}\in\mathbb{R}^{p}\). \(N\) and \(\mathbf{\mathcal{A}}\): variables controlling the solution of the nonlinear problem \(\mathbf{r}=\mathbf{0}\). \(\mathbf{\mathcal{M}}\) and \(\mathbf{\mathcal{E}}^{old}\): data structures containing the variables needed to evaluate the residual \(\mathbf{r}\) at any point \(\mathbf{x}\in\Omega\).
    Result: Sparsest weight vector \(\mathbf{w}\in\mathbb{R}^{p}\) and associated positions \(\mathbf{X}\) such that \(\mathbf{r}=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b}=\mathbf{0}\). \(\mathbf{\mathcal{E}}^{new}\): updated data structure with the information necessary for performing element interpolation.
 2  \(\mathcal{C}\leftarrow true\)
 3  while \(\mathcal{C}=true\) do
 4      \([\mathcal{C},\mathbf{X}^{new},\mathbf{w}^{new},\mathbf{\mathcal{E}}^{new}]\leftarrow\) MAKE1ZERO(\(\mathbf{X}^{old},\mathbf{w}^{old},N,\mathbf{\mathcal{A}},\mathbf{\mathcal{M}},\mathbf{\mathcal{E}}^{old}\))    // described in Algorithm 3
 5      if \(\mathcal{C}=false\) then
 6          return    // MAKE1ZERO() has failed to produce a cubature rule with one nonzero weight less
 7      if \(w^{new}_{i}\geq 0\), \(\forall i\) then \(\mathbf{X}\leftarrow\mathbf{X}^{new}\); \(\mathbf{w}\leftarrow\mathbf{w}^{new}\)    // the returned solution \(\mathbf{w}\) can only have positive weights
 8      \(\mathbf{X}^{old}\leftarrow\mathbf{X}^{new}\); \(\mathbf{w}^{old}\leftarrow\mathbf{w}^{new}\); \(\mathbf{\mathcal{E}}^{old}\leftarrow\mathbf{\mathcal{E}}^{new}\)    // update positions, weights and element interpolation data
 9  end while
```
**Algorithm 2** Sparsification process, given an initial cubature rule \(\{\mathbf{X}^{old},\mathbf{w}^{old}\}\) and a number of steps \(N\) (invoked in Line 5 of Algorithm 1).
## 5 Local sparsification problem
After presenting the global sparsification procedure, we now focus on fleshing out the details of the fundamental building block of such a procedure, which is the above mentioned function MAKE1ZERO(), appearing in Line 4 of Algorithm 2.
The procedural steps are described in the pseudocode of Algorithm 3. Given a cubature rule \(\{\mathbf{X}^{old},\mathbf{w}^{old}\}\), with \(\|\mathbf{w}^{old}\|_{0}=m\) (\(2\leq m\leq p\)), we seek a new cubature rule \(\{\mathbf{X},\mathbf{w}\}\) with \(\|\mathbf{w}\|_{0}=m-1\). Notice that there are \(m\) different routes for eliminating a nonzero weight --as many as nonzero weights. It may be argued that the higher the contribution of a given point to the residual \(\mathbf{r}\), the higher the difficulty of converging to feasible solutions using as initial point the cubature rule \(\{\mathbf{X}^{old},\mathbf{w}^{old}\}\). To account for this fact, we sort the indexes of the points with nonzero weights in ascending order according to their contribution to the residual (which is \(s_{i}=w^{old}_{i}\|\mathbf{u}(\mathbf{x}^{old}_{i})\|_{2}\), see Line 4). The actual subroutine that performs the zeroing operation is SOLVERES() in Line 9. If this subroutine fails to determine a feasible solution in which the chosen weight is set to zero, then the operation is repeated with the next point in the sorted list, and so on until arriving at the desired sparser solution (if such a solution exists at all).
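The ordering-and-retry logic just described can be sketched as follows (illustrative Python, with hypothetical helper names: `solve_res` plays the role of SOLVERES(), and `u` evaluates the row vector of basis functions at a point):

```
import numpy as np

# Illustrative sketch of MAKE1ZERO (Algorithm 3): candidate points are tried in
# ascending order of their contribution s_i = w_i * ||u(x_i)||_2 to the integrals,
# and the first successful zeroing is returned.
def make1zero(X, w, u, solve_res, n_steps):
    nz = np.flatnonzero(w)                                    # points with nonzero weight
    s = np.array([w[i] * np.linalg.norm(u(X[i])) for i in nz])
    for R in nz[np.argsort(s)]:                               # smallest contribution first
        ok, X_new, w_new = solve_res(X, w, R, n_steps)        # try to drive w[R] to zero
        if ok:
            return True, X_new, w_new
    return False, X, w                                        # no sparser feasible rule found from here
```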
### Modified Newton-Raphson algorithm
We now move to the above mentioned subroutine SOLVERES(), appearing in Line 9 of Algorithm 3 --and with pseudo-code explained in Algorithm 4. This subroutine is devoted to the calculation of the position and weights of the remaining points when the weight of the chosen "control" point R (R \(\in\{1,2\ldots p\}\), \(w^{old}_{\text{R}}\neq 0\)) is set to zero --by solving the nonlinear equation corresponding to the integration conditions \(\mathbf{r}(\mathbf{X},\mathbf{w})=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b}=\mathbf{0}\).
To facilitate convergence, the weight \(w_{\text{R}}\) is gradually reduced at a rate dictated by the number of steps \(N\) (so that \(w_{\text{R}}=w^{old}_{\text{R}}(1-n/N)\) at step \(n\), see Line 4).
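The effect of this continuation strategy can be reproduced with the following self-contained toy example (Python/numpy; it is not the paper's code, and it uses plain monomials up to degree 3 on \([-1,1]\) instead of the orthogonal basis functions): starting from Simpson's rule, the weight of the middle point is driven to zero in \(N=10\) steps, and a Newton re-solve after each reduction recovers the 2-point Gauss rule.

```
import numpy as np

b = np.array([2.0, 0.0, 2.0 / 3.0, 0.0])              # exact integrals of 1, x, x^2, x^3 on [-1,1]
u  = lambda x: np.array([1.0, x, x**2, x**3])          # basis functions at a point
du = lambda x: np.array([0.0, 1.0, 2.0 * x, 3.0 * x**2])

X = np.array([-1.0, 0.0, 1.0])                         # Simpson's rule: exact for cubics
w = np.array([1.0 / 3.0, 4.0 / 3.0, 1.0 / 3.0])
R, free = 1, [0, 2]                                    # zero the middle weight; re-solve the endpoints
wR0, N = w[R], 10

for n in range(1, N + 1):
    wR = wR0 * (1.0 - n / N)                           # gradual reduction of the controlled weight
    for _ in range(20):                                # Newton iterations on the square 4x4 system
        r = sum(w[g] * u(X[g]) for g in free) + wR * u(X[R]) - b
        if np.linalg.norm(r) < 1e-12:
            break
        J = np.column_stack([w[g] * du(X[g]) for g in free] + [u(X[g]) for g in free])
        dq = np.linalg.solve(J, -r)
        X[free] += dq[:2]                              # position corrections
        w[free] += dq[2:]                              # weight corrections

print("points :", X[free])        # -> approx [-0.57735, 0.57735]
print("weights:", w[free])        # -> approx [1.0, 1.0]
```

In this toy case the system is square (4 equations, 4 unknowns); in the general setting of Algorithm 5 it is underdetermined and possibly rank-deficient, which is why the truncated SVD and the sparsity-promoting solver discussed below are needed.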
Suppose we have converged to the solution \(\{\mathbf{X}_{(n-1)},\mathbf{w}_{(n-1)}\}\) and we want to determine the solution for the next step \(n\) using a Newton-Raphson iterative scheme, modified so as to account for the constraints that the points must remain within the domain, and that the weights should be positive (although this latter constraint will be relaxed, as explained in what follows). The pseudo-code of this modified Newton-Raphson scheme is described in turn in Algorithm 5. The integration residual at iteration \(k\leq K_{max}\) is computed in Line 7. This residual admits the following decomposition in terms of unknown and known variables:
\[\mathbf{r}=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b}=\mathbf{u}^{T}(\mathbf{X}_{\text{L}})\mathbf{w}_{ \text{L}}+\mathbf{u}^{T}(\mathbf{X}_{\text{P}})\mathbf{w}_{\text{P}}+\mathbf{u}^{T}(\mathbf{X}_{R} )w_{R}-\mathbf{b}. \tag{57}\]
Here, \(\mathbf{L}\subset\{1,2\ldots p\}\) denotes the set of points whose positions and weights are unknown, while \(\mathbf{P}\subset\{1,2\ldots p\}\) is the set in which the positions are fixed, but the weights are unknown. At the first iteration, \(\mathbf{P}=\emptyset\) (see Line 2). The unknown weights will be collectively denoted hereafter by \(\mathbf{w_{\mathrm{S}}}=[\mathbf{w_{\mathrm{L}}^{T}},\mathbf{w_{\mathrm{P}}^{T}}]^{T}\), and the vector of unknowns (including positions and weights ) by \(\mathbf{q}:=[\mathbf{X}_{\mathrm{L}}^{T},\mathbf{w_{\mathrm{S}}^{T}}]^{T}\).
If the Euclidean norm of the residual is not below the prescribed error tolerance (Line 7), we compute, as customary in Newton-Raphson procedures, a correction \(\mathbf{\Delta q}=[\Delta\mathbf{X}_{\mathrm{L}}^{T},\Delta\mathbf{w_{\mathrm{S}}^{T}}]^{T}\) by obtaining _one_ solution of the _underdetermined_ linear equation
\[\mathbf{\hat{J}}\,\mathbf{\Delta q}=-\mathbf{r}. \tag{58}\]
Here, \(\mathbf{\hat{J}}\) stands for the block matrix of the Jacobian matrix \(\mathbf{J}\in\mathbb{R}^{p\times(d+1)p}\) formed by the columns corresponding to the indexes of the unknown positions \(\mathbf{X_{\mathrm{L}}}\) and the unknown weights \(\mathbf{w_{\mathrm{S}}}\), i.e.:
\[\mathbf{\hat{J}}:=\begin{bmatrix}\mathbf{J_{X_{\mathrm{L}}}}&\mathbf{J_{w_{\mathrm{S}}}} \end{bmatrix}, \tag{59}\]
```
 1  Function \([\mathcal{C},\mathbf{X},\mathbf{w},\mathbf{\mathcal{E}}]\leftarrow\) NEWTONRmod(\(\mathbf{X}^{old}\), \(\mathbf{w}^{old}\), R, \(\mathbf{\mathcal{A}}\), \(\mathbf{\mathcal{M}}\), \(\mathbf{\mathcal{E}}\))
    Data: \(\{\mathbf{X}^{old},\mathbf{w}^{old}\}\): cubature rule of the previous step. R \(\in\{1,2\ldots p\}\): index of the controlled point. \(\mathbf{\mathcal{A}}=\{K_{max},\epsilon_{NR},N_{neg}\}\), where \(K_{max}\): maximum number of iterations; \(\epsilon_{NR}\): convergence tolerance for the residual; \(N_{neg}\): maximum number of negative weights allowed during the iterations. \(\mathbf{\mathcal{M}}\) and \(\mathbf{\mathcal{E}}\): data structures containing the variables needed to evaluate the residual \(\mathbf{r}\) at any point \(\mathbf{x}\in\Omega\).
    Result: \(\mathcal{C}=true\) if the Newton-based iterative algorithm has converged to a feasible solution \(\{\mathbf{X},\mathbf{w}\}\) of the equation \(\mathbf{r}=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b}=\mathbf{0}\), where \(w_{\text{R}}=w_{\text{R}}^{old}\) is given.
 2  \(\mathbf{P}\leftarrow\emptyset\)    // indexes of points with fixed position but unknown weights
 3  \(\mathbf{L}\leftarrow\) indexes of the nonzero entries of \(\mathbf{w}^{old}\), excluding R
 4  \(k\leftarrow 1\); \(\mathcal{C}\leftarrow false\); \(\mathbf{S}\leftarrow\mathbf{L}\); \(\epsilon_{svd}\leftarrow 10^{-10}\)    // initializations
 5  while \(k\leq K_{max}\) AND \(\mathcal{C}=false\) do
 6      \([\mathbf{u}(\mathbf{X}^{old}),\mathbf{\nabla}\mathbf{u}(\mathbf{X}^{old}),\mathbf{\mathcal{E}}]\leftarrow\) EVALBASIS(\(\mathbf{X}^{old}\), \(\mathbf{\mathcal{M}}\), \(\mathbf{\mathcal{E}}\))    // basis functions and gradients at \(\mathbf{X}^{old}\)
 7      \(\mathbf{r}\leftarrow\mathbf{u}^{T}(\mathbf{X}^{old})\mathbf{w}^{old}-\mathbf{b}\)    // integration residual
 8      if \(\|\mathbf{r}\|_{2}\leq\epsilon_{NR}\) then
 9          \(\mathcal{C}\leftarrow true\); \(\mathbf{X}\leftarrow\mathbf{X}^{old}\); \(\mathbf{w}\leftarrow\mathbf{w}^{old}\)    // converged solution
10      else
11          \(\mathbf{J}_{\mathbf{S}}\leftarrow[\mathbf{J}_{\mathbf{X}_{\mathbf{S}}},\mathbf{J}_{\mathbf{w}_{\mathbf{S}}}]\)    // Jacobian matrix (indexes \(\mathbf{S}\)); \(\mathbf{J}_{\mathbf{X}}\) and \(\mathbf{J}_{\mathbf{w}}\) are defined in Eqs. 60 and 61
12          \(E_{feas}\leftarrow false\)    // \(E_{feas}=false\) while the tentative solution is not feasible
13          while \(k\leq K_{max}\) AND \(E_{feas}=false\) do
14              \(\mathbf{\hat{J}}\leftarrow[\mathbf{J}_{\mathbf{X}_{\mathbf{L}}},\mathbf{J}_{\mathbf{w}_{\mathbf{S}}}]\)    // \(\mathbf{L}\subseteq\mathbf{S}\) (\(\mathbf{L}\): indexes of unknown weights and positions)
15              \([\mathbf{U}_{J},\mathbf{S}_{J},\mathbf{G}^{T}]\leftarrow\) SVD(\(\mathbf{\hat{J}}\), \(\epsilon_{svd}\))    // \(\mathbf{\hat{J}}\approx\mathbf{U}_{J}\mathbf{S}_{J}\mathbf{G}\) with relative error \(\epsilon_{svd}=10^{-10}\)
16              \(n_{dofs}\leftarrow(d+1)\,\)length(\(\mathbf{L}\))    // number of unknowns (\(d\): number of spatial dimensions)
17              if \(n_{dofs}<\) length(\(\mathbf{S}_{J}\)) then
18                  break    // overdetermined system (no solution); exit the internal loop without convergence
19              else
20                  \(\mathbf{c}\leftarrow-\mathbf{S}_{J}^{-1}\mathbf{U}_{J}^{T}\mathbf{r}\)    // \(\mathbf{U}_{J}\mathbf{S}_{J}\mathbf{G}\mathbf{\Delta q}=-\mathbf{r}\Rightarrow\mathbf{G}\mathbf{\Delta q}=-\mathbf{S}_{J}^{-1}\mathbf{U}_{J}^{T}\mathbf{r}\)
21                  \(\mathbf{\Delta q}\leftarrow\) sparse solution of \(\mathbf{G}\mathbf{\Delta q}=\mathbf{c}\) obtained via QR with column pivoting    // in Matlab: \(\mathbf{\Delta q}=\mathbf{G}\backslash\mathbf{c}\)
22                  \(\mathbf{X}_{\mathbf{L}}\leftarrow\mathbf{X}^{old}_{\mathbf{L}}+\mathbf{\Delta q}_{x}\); \(\mathbf{w}_{\mathbf{S}}\leftarrow\mathbf{w}^{old}_{\mathbf{S}}+\mathbf{\Delta q}_{w}\)    // \(\mathbf{\Delta q}_{x}\): entries of \(\mathbf{\Delta q}\) associated to the positions; \(\mathbf{\Delta q}_{w}\): remaining entries
23                  \(m_{neg}\leftarrow\) number of negative entries of \(\mathbf{w}\)
24                  if \(m_{neg}\geq N_{neg}\) then break
25                  \(\mathbf{Y}\leftarrow\) indexes of points outside the domain (\(\mathbf{X}_{\mathbf{Y}(i)}\notin\Omega\))
26                  if \(\mathbf{Y}\neq\emptyset\) then
27                      \(\mathbf{P}\leftarrow\mathbf{P}\cup\mathbf{Y}\); \(\mathbf{L}\leftarrow\mathbf{L}\setminus\mathbf{Y}\); \(\mathbf{S}\leftarrow\mathbf{L}\cup\mathbf{P}\)    // repeat the iteration with \(\mathbf{X}_{\mathbf{P}}\) fixed at its previous position
28                  else
29                      \(\mathbf{X}^{old}\leftarrow\mathbf{X}\); \(\mathbf{w}^{old}\leftarrow\mathbf{w}\); \(E_{feas}\leftarrow true\)    // all points inside: update and exit the internal loop
                    end if
30                  \(k\leftarrow k+1\)
                end if
            end while
            if \(E_{feas}=false\) then break    // the internal loop failed to produce a feasible update
        end if
    end while
```
**Algorithm 5** Modified Newton-Raphson algorithm for solving the constrained nonlinear equation \(\mathbf{r}=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b}=\mathbf{0}\), using as initial guess \(\{\mathbf{X}^{old},\mathbf{w}^{old}\}\). The unknowns are the positions and weights of the nonzero entries of \(\mathbf{w}^{old}\), except for \(\mathbf{w}^{old}_{\text{R}}\), which is given (this function is invoked in Line 5 of Algorithm 4).
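The two blocks \(\mathbf{J}_{\mathbf{X}}\) and \(\mathbf{J}_{\mathbf{w}}\) appearing in Algorithm 5 follow from differentiating the residual \(\mathbf{r}=\mathbf{u}^{T}(\mathbf{X})\mathbf{w}-\mathbf{b}\) with respect to the positions of the points and to their weights, respectively. Assuming, for definiteness, that \(\mathbf{\nabla u}(\mathbf{x}_{g})\in\mathbb{R}^{d\times p}\) denotes the matrix collecting the spatial gradients of the \(p\) basis functions at \(\mathbf{x}_{g}\), differentiation with respect to the positions gives the block

\[\mathbf{J}_{\mathbf{X}}:=\frac{\partial\mathbf{r}}{\partial\mathbf{X}}=\begin{bmatrix}w_{1}\,\mathbf{\nabla u}^{T}(\mathbf{x}_{1})&w_{2}\,\mathbf{\nabla u}^{T}(\mathbf{x}_{2})&\cdots&w_{p}\,\mathbf{\nabla u}^{T}(\mathbf{x}_{p})\end{bmatrix}\in\mathbb{R}^{p\times dp},\]

while the block associated to the weights reads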
\[\mathbf{J}_{w}:=\frac{\partial\mathbf{r}}{\partial\mathbf{w}}=\mathbf{u}^{T}(\mathbf{X})=\begin{bmatrix} u_{1}(\mathbf{x}_{1})&u_{1}(\mathbf{x}_{2})&\dots&u_{1}(\mathbf{x}_{p})\\ u_{2}(\mathbf{x}_{1})&u_{2}(\mathbf{x}_{2})&\dots&u_{2}(\mathbf{x}_{p})\\ \vdots&\vdots&\ddots&\vdots\\ u_{p}(\mathbf{x}_{1})&u_{p}(\mathbf{x}_{2})&\dots&u_{p}(\mathbf{x}_{p})\end{bmatrix}_{p \times p}. \tag{61}\]
Recall that the gradients of the basis functions can be determined by Eq.(31), if the analytical expressions of the integrand functions are available, or by interpolation via Eq. (34) --using the values of the basis functions at the Gauss points of the element containing the corresponding point. These operations are encapsulated in the function EVALBASIS(), invoked in Line 6.
Once we have computed \(\mathbf{\Delta q}\) from Eq.(58), we update the positions of the points and the weights in Line 22. Since the basis functions are only defined inside the domain \(\Omega\) (this is one of the constraints appearing in the sparsification problem 52), it is necessary to first identify (Line 25) and then correct the positions of those points that happen to fall outside the domain. The identification is made by determining which finite elements contain the points in their new positions; for the sake of computational efficiency, the search is limited to a patch of elements centered at the element containing the point at the previous iteration, and located within a radius \(\|\Delta\mathbf{X}_{I}\|_{2}\) (\(I\in\mathbf{L}\))--the mesh connectivities, stored in the data structure \(\mathbf{\mathcal{M}}\), greatly expedite this search task. If it happens that a given point is not inside any element (\(\mathbf{X}_{I}\notin\Omega\) for some \(I\in\mathbf{L}\)), then we set \(\mathbf{X}_{I}\leftarrow\mathbf{X}_{I}^{old}\), \(\mathbf{P}\leftarrow\mathbf{P}\cup I\), and \(\mathbf{L}\leftarrow\mathbf{L}\setminus I\) (see Line 27). Notice that this amounts to "freezing" the position of this critical point at the value of the previous iteration during the remaining iterations of the current step7. This operation is to be repeated until all the points lie within the domain --ensuring this is the job of the internal _while_ loop starting in Line 13.
Footnote 7: This can be done because system (58) is underdetermined, and therefore, one can constrain some points not to move and still find a solution. It should be noticed that these constrained points are freed at the beginning of each step, see Line 2.
The other constraint defining a feasible solution in the sparsification problem 52 is the positiveness of the weights. However, we argued in Section 1.4.1 that, since the volume is exactly integrated, the tendency when one of the weights is reduced is that the remaining weights increase to compensate for the loss of volume. Furthermore, according to the sorting criterion employed in Line 4 of Algorithm 3, negative weights are the first to be zeroed in each local sparsification step, and, thus, tend to disappear as the algorithm progresses. For these reasons, the solution procedure does not incorporate any specific strategy for enforcing positiveness of the weights --rather, we limit ourselves to keeping the number of negative weights below a user-prescribed threshold \(N_{neg}\) during the iterative procedure (Line 24 in Algorithm 5). Nevertheless, as a precautionary measure, Line 7 in Algorithm 2 prevents negative weights from appearing in the final solution.
### Properties of Jacobian matrix and maximum sparsity
It only remains to address the issue of how to solve the system of linear equations 58. Solving this system of equations is worthy of special consideration for two reasons: firstly, the system is _underdetermined_ (more unknowns than equations), and, secondly, the Jacobian matrix \(\mathbf{\widehat{J}}\) may become _rank-deficient_ during the final steps of the sparsification process, especially in 2D and 3D problems.
#### 5.2.1 Rank deficiency
That the Jacobian matrix \(\mathbf{\widehat{J}}\) may become rank-deficient can be readily demonstrated by analyzing the case of the integration of polynomials in Cartesian domains. A polynomial of order \(t\) gives rise to \(p=(t+1)^{d}\) (\(d=1,2\) or \(3\)) integration conditions (as many as monomials). If one assumes that the Jacobian matrix \(\mathbf{\widehat{J}}\) remains full rank during the entire sparsification process, then it follows that the minimum number of points one can get under such an assumption, denoted henceforth by \(m_{eff}\), is reached when \(\mathbf{\widehat{J}}\) becomes square8 (or underdetermined with less than \(d\) surplus unknowns); this condition yields
Footnote 8: Because if there are fewer unknowns than equations, there is in general no solution to the equation \(\mathbf{r}=\mathbf{0}\).
\[m_{eff}=\texttt{ceil}(\frac{p}{d+1})=\texttt{ceil}(\frac{(t+1)^{d}}{d+1}) \tag{62}\]
where \(\texttt{ceil}()\) rounds its argument up to the nearest integer greater than or equal to it. For 1D polynomials (\(d=1\)), it is readily seen that \(m_{eff}\) coincides with the number of points of the well-known (optimal) Gauss quadrature rule; for instance, for \(t=3\) (cubic polynomials), the above equation gives \(m_{eff}=4/2=2\) integration points. This implies that in this 1D case, the Jacobian matrix _does_ remain full rank during the process, as presumed. However, this does not hold in the 2D and 3D cases. For instance, for 3D cubic polynomials (\(t=3\), \(d=3\)), the above equation yields \(m_{eff}=(1+3)^{3}/(1+3)=16\)
points, yet it is well known that the 8-point tensor product rule (\(2\times 2\times 2\)) integrates cubic polynomials exactly in any Cartesian domain. This means that, in this 3D case, in going from the rule with 16 nonzero weights to the cubature rule with 8 nonzero weights, the Jacobian matrix \(\widehat{\mathbf{J}}\) must necessarily remain rank-deficient.
To account for this potential rank-deficiency, we determine the truncated SVD of \(\widehat{\mathbf{J}}\) (with error threshold \(\epsilon_{SVD}=10^{-10}\) to avoid near-singular cases) in Line 15 of Algorithm 5: \(\widehat{\mathbf{J}}\approx\mathbf{U}_{J}\mathbf{S}_{J}\mathbf{G}\). Replacing \(\widehat{\mathbf{J}}\) by this decomposition in Eq.(58), and pre-multiplying both sides of the resulting equation by \(\mathbf{S}_{J}^{-1}\mathbf{U}_{J}^{T}\), we obtain, by virtue of the property \(\mathbf{U}_{J}^{T}\mathbf{U}_{J}=\mathbf{I}\):
\[\mathbf{G}\mathbf{\Delta}\mathbf{q}=\mathbf{c}, \tag{63}\]
where \(\mathbf{G}\) denotes the transpose of the orthogonal matrix of right-singular vectors of \(\widehat{\mathbf{J}}\), while \(\mathbf{c}=-\mathbf{S}_{J}^{-1}\mathbf{U}_{J}^{T}\mathbf{r}\), \(\mathbf{S}_{J}\) being the diagonal matrix of singular values, and \(\mathbf{U}_{J}\) the matrix of left singular vectors.
#### 5.2.2 Underdetermination and sparse solutions
Let us discuss now the issue of underdeterminacy. It is easy to show that the preceding system of equations remains underdetermined during the entire sparsification process, with a degree of underdeterminacy (surplus of unknowns over number of equations) decaying at each sparsification step until the optimum is reached, when \(\mathbf{G}\) becomes as square as possible. For instance, at the very first step of the process, in a problem with \(p\) basis functions, \(\widehat{\mathbf{J}}\) is by construction9 full rank (i.e., there are \(p\) linearly independent equations), while the number of unknowns is \((p-1)(1+d)\) (\(m=p-1\) points with \(d\) unknowns coordinates associated to the position of each point and one unknown associated to its weight). Thus, the solution space in this case is of dimension \((d(p-1)-1)\).
Footnote 9: On the grounds that \(\mathbf{u}(\mathbf{X}^{*})\) is also full rank because otherwise Eq.(42) would not hold.
To update the weights and the positions, we need to pick up _one_ solution from this vast space. The standard approach in Newton's method for underdetermined systems (and also the method favored in the literature on generalized Gaussian quadratures [27; 35; 4]) is to use the _least \(\ell_{2}\)-norm solution_, which is simply \(\mathbf{\Delta q}:=\mathbf{G}^{+}\mathbf{c}\), where \(\mathbf{G}^{+}=\mathbf{G}^{T}(\mathbf{G}\mathbf{G}^{T})^{-1}\) is the pseudo-inverse of \(\mathbf{G}\) (notice that in our case10\(\mathbf{G}^{+}=\mathbf{G}^{T}\)). However, we do not use this approach here because the resulting solution tends to be _dense_, and this implies that the positions of all the cubature points have to be updated at all iterations. This is a significant disadvantage in our interpolatory framework, since updating the position of one point entails an interpolation error of greater or lesser extent depending on the functions being interpolated and the distance from the FE Gauss points. Thus, it would be beneficial for the overall accuracy of the method to determine solutions that _minimize the number of positions being updated at each iteration_ --incidentally, this would also help to reduce the computational effort associated to the spatial search carried out in Line 25 of Algorithm 5. This requisite naturally calls for _solution methods that promote sparsity_. For this reason, we use here (see Line 21 in Algorithm 5) the QR factorization with column pivoting (QRP) proposed in Golub et al. [13] (page 300, Algorithm 5.6.1), which furnishes a solution with as many nonzero entries as equations11. An alternative strategy would be to determine the _least \(\ell_{1}\)-norm solution_, which, as discussed in Section 1.2, also promotes sparsity [3; 7]. However, computing this solution would involve addressing a convex, nonquadratic optimization problem at each iteration, and this would require considerably more effort and sophistication than the simple QRP method employed in Line 21.
Footnote 10: In this regard, it should be pointed out that References [27; 35; 4] calculate the pseudo-inverse of the Jacobian matrix as \(\widehat{\mathbf{J}}^{+}=\widehat{\mathbf{J}}^{T}(\widehat{\mathbf{J}}\widehat{\mathbf{J}}^{T}) ^{-1}\), thus ignoring the fact that, as we have argued in the foregoing, \(\widehat{\mathbf{J}}\) might become rank-deficient, and therefore, \(\widehat{\mathbf{J}}\widehat{\mathbf{J}}^{T}\) cannot be inverted.
Footnote 11: In Matlab, this QRP solution is the one obtained using the "backslash" operator (or \(\mathtt{mldivide}(\mathbf{G},\mathbf{c})\)).
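To make the preceding discussion concrete, the following self-contained numpy/scipy sketch (not the paper's Matlab implementation) reproduces the core of Lines 15-21 of Algorithm 5 on a small synthetic example: a truncated SVD of a rank-deficient, underdetermined Jacobian, followed by the dense least \(\ell_{2}\)-norm solution and by the sparse solution obtained through QR with column pivoting.

```
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(0)
m, n = 6, 15                          # 6 integration conditions, 15 unknowns (positions + weights)
B = rng.standard_normal((4, n))
J = np.vstack([B, B[:2]])             # 6 x 15 Jacobian of rank 4 (rank-deficient)
r = J @ rng.standard_normal(n)        # residual compatible with the range of J

# Truncated SVD (Line 15): J ~ Uj diag(s) G, discarding near-zero singular values
Uj, s, G = np.linalg.svd(J, full_matrices=False)
keep = s > 1e-10 * s[0]
Uj, s, G = Uj[:, keep], s[keep], G[keep]
c = -(Uj.T @ r) / s                                # reduced system G dq = c (Line 20)

# Dense least 2-norm solution (G has orthonormal rows, so pinv(G) = G.T)
dq_dense = G.T @ c

# Sparse solution via QR with column pivoting (Line 21)
k = G.shape[0]
Q, R, piv = qr(G, pivoting=True)
dq_sparse = np.zeros(n)
dq_sparse[piv[:k]] = solve_triangular(R[:, :k], Q.T @ c)

for name, dq in [("dense", dq_dense), ("sparse", dq_sparse)]:
    print(name, "nonzeros:", np.count_nonzero(np.abs(dq) > 1e-12),
          "residual:", np.linalg.norm(J @ dq + r))
```

The sparse solution touches only as many unknowns as there are independent equations, which is precisely why it limits the number of points whose position must be re-interpolated at each iteration.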
### Summary
By way of conclusion, we summarize in Box 5.1 all the operations required to determine an optimal cubature rule using as initial data the location of the FE Gauss points, their corresponding weights, and the values at such Gauss points of the functions we wish to efficiently integrate.
## 6 Numerical assessment
A repository containing both the Continuous Empirical Cubature Method (CECM) and the Sequential Randomized SVD (SRSVD), which allows the following examples to be reproduced, is publicly available at [https://github.com/Rbravo555/CECM-continuous-empirical-cubature-method](https://github.com/Rbravo555/CECM-continuous-empirical-cubature-method)
### Univariate polynomials
We begin the assessment of the proposed methodology by examining the example used for motivating the proposal: the integration of univariate polynomials in the domain \(\Omega=[-1,1]\). The employed finite element mesh features \(N_{el}=200\) equally-sized elements and \(r=4\) Gauss points per element, resulting in a total of \(M=800\) Gauss points.
1. Given the coordinates of the nodes of the finite element mesh, the array of element connectivities, and the position of the Gauss points for each element in the parent domain, determine the location of such points in the physical domain: \(\mathbf{X}_{FE}=\{\mathbf{\bar{x}}_{1}^{1},\mathbf{\bar{x}}_{2}^{1},\ldots\mathbf{\bar{x}}_{i}^{ e}\ldots\}\). Likewise, compute the vector of (positive) finite element weights (\(\mathbf{W}_{FE}\in\mathbb{R}^{M}\)) for each of these points as the product of the corresponding Gauss weights and the Jacobian of the transformation from the parent domain to the physical domain.
2. Determine the values at all Gauss points \(\mathbf{X}_{FE}\) of the parameterized function \(\mathbf{a}:\Omega\times\mathcal{D}\rightarrow\mathbb{R}^{n}\) we wish to efficiently integrate for the chosen parameters \(\{\mathbf{\mu}_{1},\mathbf{\mu}_{2}\ldots\mathbf{\mu}_{P}\}\subset\mathcal{D}\), and store the result in matrix \(\mathbf{A}_{FE}\in\mathbb{R}^{M\times Pn}\) (see Eq. 4). In the case of hyperreduced-order models, the analytical expression of the integrand functions is normally not available as an explicit function of the input parameters, and constructing matrix \(\mathbf{A}_{FE}\) entails solving the corresponding governing equations for the chosen input parameters. If the matrix proves to be too large to fit into main memory, it should be partitioned into column blocks \(\mathbf{A}_{FE1}\), \(\mathbf{A}_{FE2}\), \(\ldots\,\mathbf{A}_{FEp}\). Such blocks need not be loaded into main memory all at once.
3. Compute the weighted matrix \(\mathbf{\bar{A}}:=\text{diag}(\sqrt{\mathbf{W}_{FE}})\,\mathbf{A}_{FE}\) (see Eq. 15) (alternatively, one may directly store in Step 2, rather than \(\mathbf{A}_{FE}\), \(\mathbf{\bar{A}}\) itself; this is especially convenient when the matrix is treated in a partitioned fashion, because it avoids loading the submatrices twice).
4. Determine the SVD of \(\mathbf{\bar{A}}\) (\(\mathbf{\bar{A}}\approx\mathbf{\bar{U}}\mathbf{SV}\)), with relative truncation tolerance \(\epsilon_{svd}\) equal to the desired error threshold for the integration (see Eq. 17). If the matrix is relatively small, one can use directly standard SVD implementations (see function SVDT in Algorithm 7 of Appendix A). If the matrix is large but still fits into main memory without compromising the machine performance, the incremental randomized SVD proposed in Appendix A (function RSVDinc, described in Algorithm 9) may be used instead. Lastly, if the matrix does not fit into main memory, and is therefore provided in a partitioned format, the Sequential Randomized SVD described also in Appendix A (function SRSVD() in Algorithm 6) is to be used.
5. Determine a \(\mathbf{W}_{FE}\)-orthogonal basis matrix for the range of \(\mathbf{A}_{FE}\) by making \(\mathbf{U}=\,\text{diag}(\sqrt{\mathbf{W}_{FE}})^{-1}\mathbf{\bar{U}}\). Following the guidelines outlined in Section 2.2, augment \(\mathbf{U}\) with one additional column if necessary so that the column space of \(\mathbf{U}\in\mathbb{R}^{M\times p}\) contains the constant function.
6. Apply the Discrete Empirical Cubature Method (DECM, see Section 3) to compute a set of indexes \(\mathbf{z}\) and positive weights \(\mathbf{w}^{*}\) such that \(\mathbf{U}(\mathbf{z},:)^{T}\mathbf{w}^{*}=\mathbf{U}^{T}\mathbf{W}_{FE}\) (see Eq. 41).
7. Using as initial solution the weights \(\mathbf{w}^{*}\) obtained by the DECM, as well as the corresponding positions \(\mathbf{X}^{*}\), solve the _sparsification_ problem 52 by means of function SPARSIFglo in Algorithm 1: \[[\mathbf{X},\mathbf{w},\mathbf{\mathcal{E}}]\leftarrow\texttt{SPARSIFglo}(\mathbf{X}^{*},\mathbf{w}^{*},N_{steps},\mathbf{\mathcal{A}},\mathbf{\mathcal{M}})\] The desired cubature rule is given by \(\mathbf{w}^{ccm}=[w_{g_{1}},w_{g_{2}},\ldots w_{g_{m}}]\) (\(w_{g_{i}}>0\)) and \(\mathbf{X}^{ccm}=\{\mathbf{x}_{g_{1}},\mathbf{x}_{g_{2}},\ldots\mathbf{x}_{g_{m}}\}\), where \(g_{1},g_{2}\ldots g_{m}\) denote the indexes of the nonzero entries of the (sparse) output weight vector \(\mathbf{w}\).
**Box 5.1:** Algorithmic steps involved in the proposed _Continuous Empirical Cubature Method_ (CECM).
Figure 3: Integration of univariate Lagrange polynomials of degree \(p=5\). a) Original integrand functions \(a_{1},a_{2}\ldots a_{6}\). b) Orthogonal basis functions \(u_{1}\), \(u_{2}\ldots u_{6}\) derived from the weighted SVD (described in step 5 of Box 5.1). The CECM operates on these basis functions rather than on the original integrand functions in Figure 3(a).
Given a degree \(p>0\), and a set of \(P=p+1\) equally spaced nodes \(x_{1},x_{2}\ldots x_{P}\), we seek the optimal integration rule for the Lagrange polynomials
\[a_{i}(x)=\prod_{j=1,i\neq j}^{P}\frac{x-x_{j}}{x_{i}-x_{j}},\hskip 28.452756pti=1,2 \ldots P \tag{64}\]
(graphically represented in Figure 3(a) for degree up to \(p=5\)).
The values of these Lagrange polynomials at the \(M\) Gauss points are stored in the matrix \(\boldsymbol{A}_{FE}\in\mathbb{R}^{M\times P}\), which is then subjected to the weighted SVD (step 5 in Box 5.1) for determining \(L_{2}\)-orthogonal basis functions \(u_{1}\), \(u_{2}\,\ldots u_{P}\) --plotted in Figure 3(b) for the case \(p=1,2\ldots 5\). The truncation tolerance in the SVD of expression (17) is set in this case to \(\epsilon_{svd}=0\), for we seek quadrature rules that _exactly_ integrate any polynomial up to the specified degree. The resulting basis matrix \(\boldsymbol{U}\) (obtained by Eq. 20 from the left singular vectors of the above mentioned SVD), along with the full-order weight vector \(\boldsymbol{W}_{FE}\in\mathbb{R}^{M}\), are then used to determine an interpolatory quadrature rule by means of the DECM (step 6 in Box 5.1). As commented in Section 3, this algorithm selects one point at each iteration, until arriving at as many points as basis functions. By way of illustration, we show in Figure 4 the iterative sequence leading to the interpolatory quadrature rule with 6 points (i.e., for polynomials up to degree 5).
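The construction just described (Steps 1-5 of Box 5.1) can be reproduced with a few lines of numpy; the sketch below (illustrative, not the repository code) builds the FE Gauss points and weights for the mesh of 200 uniform elements with 4 Gauss points each, evaluates the degree-5 Lagrange polynomials of Eq. (64), and extracts the \(\mathbf{W}_{FE}\)-orthogonal basis from the weighted SVD:

```
import numpy as np

n_el, p = 200, 5
edges = np.linspace(-1.0, 1.0, n_el + 1)
xi, wi = np.polynomial.legendre.leggauss(4)                  # parent-domain Gauss rule
h = np.diff(edges)[:, None]
X_fe = (edges[:-1, None] + 0.5 * h * (xi + 1.0)).ravel()     # physical Gauss points
W_fe = (0.5 * h * wi).ravel()                                # FE weights (Jacobian x Gauss weight)

nodes = np.linspace(-1.0, 1.0, p + 1)                        # equally spaced Lagrange nodes
def lagrange(i, x):
    out = np.ones_like(x)
    for j in range(p + 1):
        if j != i:
            out *= (x - nodes[j]) / (nodes[i] - nodes[j])
    return out
A_fe = np.column_stack([lagrange(i, X_fe) for i in range(p + 1)])   # M x (p+1) integrand matrix

A_bar = np.sqrt(W_fe)[:, None] * A_fe                        # weighted snapshot matrix
U_bar, S, Vt = np.linalg.svd(A_bar, full_matrices=False)     # eps_svd = 0: keep all modes
U = U_bar / np.sqrt(W_fe)[:, None]                           # W-orthogonal basis

print(np.allclose(U.T @ (W_fe[:, None] * U), np.eye(p + 1)))        # U^T diag(W) U = I
print(np.allclose(A_fe.T @ W_fe, (Vt.T * S) @ (U.T @ W_fe)))        # exact integrals recovered from the basis
```

The two printed checks confirm that \(\mathbf{U}\) is \(\mathbf{W}_{FE}\)-orthogonal and that the exact integrals \(\mathbf{A}_{FE}^{T}\mathbf{W}_{FE}\) are recovered from the basis, so the DECM and the subsequent sparsification can operate on \(\mathbf{U}\) alone (since, for Lagrange polynomials, the constant function is already contained in its column space, no augmentation is needed in this case).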
The final step in the process is the sparsification of the vector of DECM weights to produce the final CECM rule (step 7 in Box 5.1). Table 1 shows the location and weights obtained in this sparsification for polynomials up to degree \(p=12\). The parameters used in this process are \(K_{max}=40\), \(\epsilon_{NR}=10^{-8}\), \(N_{neg}=5\) and \(N_{steps}=20\) (the definition of these parameters is given in Algorithm 5), and we use analytical evaluation of the integrands and their derivatives through formulas (30) and (31), respectively.
It can be inferred from the information displayed in Table 1 that, for polynomials of even degree, the CECM provides rules whose number of points is equal to \((p+2)/2\), whereas for polynomials of odd order, the number of points is equal to \((p+1)/2\) in all cases. Thus, for instance, both CECM rules derived from polynomials of degree 4 and 5 possess 3 points; notice that the rule for the polynomials of degree 4 is asymmetric, whereas the one for polynomials of degree 5 is symmetrical. Furthermore, comparison of this symmetrical rule with the corresponding Gauss rule with the same number of points12 reveals that they are identical (relative error below \(10^{-15}\)). The same trend is observed for the remaining CECM rules for polynomials of odd degree. To further corroborate this finding, we extended the study to cover the cases of polynomials from \(p=13\) to \(p=25\), and the result was invariably the same. Thus, we can assert that, at least for
Figure 4: Location (\(x\)) and corresponding weights (\(w\)) selected by the DECM (see Box 5.1, step 6), at each iteration, for the case of polynomials of degree \(p=5\). The final integration rule (iteration 6, graph 4(f)) is the starting point for the subsequent sparsification problem (illustrated, in turn, in Figure 5).
univariate polynomials, _the proposed CECM is able to arrive at the optimal cubature rule_, that is, the rule with the _minimal number of points_. To gain further insight into the performance of the method, we present in Figure 5 the sequence of rules produced during the sparsification process (from the 6-points DECM rule (Figure 5(a)) to the optimal 3-points (Gauss) rule of Figure 5(d)).
\begin{table}
\begin{tabular}{|c|c|c|c|} degree & positions & weights & error.(Gs) \\ \hline
1 & -7.31662253127291E-17 & 2 & 2.2504e-16 \\ \hline
2 & -0.718298153787432 & 0.784973499347808 & \\ \hline & 0.460598497653638 & 1.21502650065219 & \\ \hline
3 & -0.577350269196926 & 1 & 1.6653e-16 \\ \hline & -0.96687924131567 & 0.25396505608136 & \\ \hline
4 & -0.26681724537718 & 1.00671682129073 & \\ \hline & 0.69545518887597 & 0.733918122627913 & \\ \hline & -0.774596669241483 & 0.55555555555556 & \\ \hline & -0.5191995959298667ee-17 & 0.88888888888888 & \\ \hline & 0.747566669241483 & 0.5555555555555556 & \\ \hline & -0.837102793435635 & 0.405516777593141 & \\ \hline & -0.245843261565188 & 0.71713167588906 & \\ \hline & 0.49503056155797 & 0.62534059948469 & \\ \hline & 0.90683693901626 & 0.2520105870834362 & \\ \hline & -0.861136311594053 & 0.347854845137454 & \\ \hline
7 & -0.339981043548565 & 0.6521451862546254 & 5.8993e-16 \\ \hline & 0.861136311594053 & 0.347854845137454 & \\ \hline & -0.998731268512389 & 0.0812276852880355 & \\ \hline & -0.7193801374319 & 0.445922526369483 & \\ \hline & -0.1664345160342323 & 0.623254258303059 & \\ \hline & 0.44666802602019 & 0.562352205951177 & \\ \hline & 0.885864638161693 & 0.287243326766745 & \\ \hline \end{tabular}
\end{table}
Table 1: Quadrature rules computed by the CECM for univariate polynomials of degree up to \(p=12\). The rightmost column represents the relative deviations with respect to the optimal Gaussian rules (for polynomials of odd degree), calculated as \(e^{2}=(\|\mathbf{X}_{\mathit{cccm}}-\mathbf{X}_{\mathit{guass}}\|_{2}^{2}+\|\mathbf{w}_{ \mathit{gauss}}\|_{2}^{2})/(\|\mathbf{X}_{\mathit{guass}}\|_{2}^{2}+\|\mathbf{w}_{ \mathit{guass}}\|_{2}^{2})\).
Figure 5: Location and weights of the quadrature rules generated during the sparsification process for the case of univariate polynomials of degree \(p=5\) in \(\Omega=[-1,1]\). Variable \(t\) represents the number of passes over the loop that selects the weights to be zeroed in Algorithm 3 (see line 7), whereas \(k\) indicates the total number of iterations of the modified Newton-Raphson scheme in Algorithm 5. The initial integration rule, displayed in Figure 5(a), is the one corresponding to the last iteration of the DECM, displayed previously in Figure 4(f). The red point in each graph indicates the point whose weight is to be zeroed in the following step. The final quadrature rule is the one shown in graph 5(d). The values of the coordinates and weights are given in Table 1, in which one can see that this quadrature rule is indeed the optimal 3-points Gauss rule.
### Multivariate polynomials
Let us now extend the preceding assessment to the integration of _multivariate Lagrange polynomials_ in 2D and 3D cartesian domains -- for which it is known that the optimal rules are tensor product of univariate Gauss rules [12]. More specifically, we shall focus here on bivariate and trivariate Lagrange polynomials on biunit squares (\(\Omega=[-1,1]\times[-1,1]\)) and cubes (\(\Omega=[-1,1]\times[-1,1]\times[-1,1]\)), respectively.
Given a degree \(p\), and a set of \(p+1\) equally spaced nodes for each direction, let us define the monomials:
\[\Gamma_{i}(x)=\prod_{h=1,i\neq h}^{P}\frac{x-x_{h}}{x_{i}-x_{h}},\qquad\quad i= 1,2\ldots(p+1) \tag{65}\]
\[\Gamma_{j}(y)=\prod_{h=1,j\neq h}^{P}\frac{y-y_{h}}{y_{j}-y_{h}},\qquad\quad j= 1,2\ldots(p+1) \tag{66}\]
\[\Gamma_{k}(z)=\prod_{h=1,k\neq h}^{P}\frac{z-z_{h}}{z_{k}-z_{h}},\qquad\quad k= 1,2\ldots(p+1). \tag{67}\]
The expression for the \(P=(p+1)^{2}\) integrand functions for the case of bivariate polynomials is given by
\[a_{l}(x,y)=\Gamma_{i}(x)\Gamma_{j}(y),\qquad\quad l=(j-1)(p+1)+i,\quad i,j=1, 2\ldots(p+1), \tag{68}\]
whereas for trivariate polynomials, the \(P=(p+1)^{3}\) integrand functions adopt the expression
\[a_{l}(x,y,z)=\Gamma_{i}(x)\Gamma_{j}(y)\Gamma_{k}(z),\qquad\quad l=(k-1)(p+1 )^{2}+(j-1)(p+1)+i,\quad i,j,k=1,2\ldots(p+1). \tag{69}\]
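For completeness, the assembly of the integrand matrix from these tensor-product definitions can be sketched as follows (illustrative Python; the random points are only a stand-in for the actual FE Gauss points, and the helper `gamma` implements the univariate factors of Eqs. (65)-(67)):

```
import numpy as np

p = 3
nodes = np.linspace(-1.0, 1.0, p + 1)

def gamma(i, r):                                   # univariate Lagrange factor, Eqs. (65)-(67)
    out = np.ones_like(r)
    for h in range(p + 1):
        if h != i:
            out *= (r - nodes[h]) / (nodes[i] - nodes[h])
    return out

pts = np.random.default_rng(0).uniform(-1, 1, size=(1000, 3))   # stand-in for the Gauss points
x, y, z = pts.T
A = np.empty((len(pts), (p + 1) ** 3))
for k in range(1, p + 2):
    for j in range(1, p + 2):
        for i in range(1, p + 2):
            l = (k - 1) * (p + 1) ** 2 + (j - 1) * (p + 1) + i   # flattened index of Eq. (69), 1-based
            A[:, l - 1] = gamma(i - 1, x) * gamma(j - 1, y) * gamma(k - 1, z)
print(A.shape)   # (1000, 64) integrand matrix for the trivariate case with p = 3
```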
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{d} & \multicolumn{2}{c|}{positions} & \multirow{2}{*}{weights} & \multirow{2}{*}{error (Gs.)} \\ \cline{2-3} \cline{5-5} & x & y & & \\ \hline
1 & 0.0000000 & -0.0000000 & 4.00000000 & 1.1104e-15 \\ \hline \multirow{3}{*}{2} & 0.706474840 & 0.648819610 & 0.707815861 & \\ & 0.50596954 & -0.513753481 & 1.262661689 & \\ \cline{2-3} & -0.471826192 & 0.648819610 & 1.059826914 & \\ \cline{2-3} & -0.658817575 & -0.513753481 & 0.99695535 & \\ \hline \multirow{3}{*}{3} & 0.577350269 & 0.577350269 & 1.000000000 & \multirow{3}{*}{2.0914e-15} \\ & 0.577350269 & 0.577350269 & 1.00000000 & \\ \cline{2-3} & -0.577350269 & -0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.577350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.577350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.577350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.577350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.00000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.577350269 & 1.0000000000 & \\ \cline{2-3} & -0.57350269 & 0.
We use structured meshes of \(20\times 20\) quadrilateral elements for the square, and \(20\times 20\times 20\) hexahedra elements for the cube, with Gauss integration rules for each element of \(2\times 2\) and \(2\times 2\times 2\) points, respectively. The parameters governing the performance of the CECM are the same employed in the univariate case. We examined the rules computed by the
Figure 6: Locations and weights of the cubature rules generated during the sparsification process for the case of bivariate polynomials of degree \(p=3\) in \(\Omega=[-1,1]\times[-1,1]\). The red circle in each graph indicates the point whose weight is to be zeroed in the following step. Variables \(t\) and \(k\), on the other hand, have the same interpretation as in Figure 5. The initial DECM rule has \((p+1)^{2}=16\) points (see Figure 6(a)), while the final rule features \((p+1)^{2}/2^{2}=4\) points, see graph 6(l). The exact values of the coordinates and weights are given in Table 2 ( it can be seen that the CECM rule coincides with the standard \(2\times 2\) product Gauss rule).
CECM for degrees up to \(p=12\) for both 2D and 3D cases (the same degree for all variables). For reasons of space limitation, we only display the coordinates and weights up to \(p=7\) for the 2D case (see Table 2), and in Table 3 up to \(p=4\) for the 3D case. The comparison with the product Gauss rules contained in both tables reveals the very same pattern observed in the case of univariate polynomials: for even degrees, the CECM produces asymmetrical rules, whereas for odd degrees, the CECM produces symmetrical rules identical to the corresponding product Gauss rules (featuring in both cases \((p+1)^{d}/2^{d}\) points, where \(d=2,3\)). Although not shown here, the same trend was observed for the remaining polynomial degrees.
These results provide further confirmation of the ability of the proposed sparsification algorithm to arrive at the _integration rules with minimal number of points_. Figures 6 and 7 depict the sparsification process for the case \(p=3\) in 2D and 3D, respectively. As done previously in Figure 5 for the univariate case, we show in each graph the number of trials required to find the point whose weight is to be zeroed (i.e., the number of times the method passes over the loop in line 7 of Algorithm 3), as well as the number of iterations required for zeroing the chosen weight (in the modified Newton-Raphson scheme of Algorithm 5). Whereas in the case of univariate polynomials, displayed previously in Figure 5, the method successfully determines the weights to be zeroed on the first trial, in the multivariate case several trials are necessary in some cases, especially when the algorithm approaches the optimum. For instance, to produce the rule
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline d & \multicolumn{4}{c|}{positions} & \multicolumn{1}{c|}{} \\ & x & y & z & weights & error (Gs.) \\ \hline
1 & 8.3851909525508E-17 & -1.37013188830199E-16 & 2.54794197642291E-16 & 8.000000000000022 & 2.7534e-14 \\ \hline & 0.743706246140823 & 0.588904961511756 & 0.484154086744496 & 0.86563029280071 \\ \hline & 0.743706246140821 & -0.5606022287327392 & 0.83139926412485 & 0.499063612121599 \\ \hline & -0.44820563907193 & 0.741835057873505 & 0.830405419614278 & 0.6139614921383322 \\ \hline
2 & 0.743706246140824 & 0.588904961511757 & -0.6884867047024208 & 0.6087424763042735 \\ \hline & 0.743706246140825 & -0.56602228732739 & -0.409030513715059 & 1.03489533941434 \\ \hline & -0.44820563907193 & -0.449336183017346 & 0.830045419614395 & 1.01362448933892 \\ \hline & -0.448205639071929 & 0.739838192080586 & -0.40158445007214 & 1.274239461537 \\ \hline & -0.448205639071928 & -0.40582176382297 & -0.401584450020670 & 0.28986063960634 \\ \hline & 0.577350269189625 & 0.577350269189626 & 0.577350269189626 & 0.9999999999999 \\ \hline & 0.577350269189626 & 0.577350269189626 & -0.577350269189626 & 0.99999999999999 \\ \hline & 0.577350269189626 & 0.577350269189626 & -0.577350269189626 & 0.9999999999999999 \\ \hline & 0.577350269189626 & 0.577350269189626 & -0.577350269189626 & 1 \\ \hline & 0.577350269189626 & -0.577350269189626 & -0.577350269189626 & 1 \\ \hline & 0.577350269189626 & -0.577350269189626 & -0.577350269189626 & 1 \\ \hline & 0.577350269189626 & -0.577350269189626 & -0.577350269189626 & 1 \\ \hline & 0.577350269189626 & -0.577350269189626 & -0.577350269189626 & 1 \\ \hline & 0.691575606960029 & 0.160521574170478 & -0.261954268935127 & 0.701290700192769 \\ \hline & -0.282918324688049 & -0.217978724118607 & -0.27678081615893 & 1.00337289978755 \\ \hline & 0.691575606960092 & 0.2882053603748311 & 0.69660317728333 & 0.561750825289478 \\ \hline & -0.2829183246805 & -0.219984754054984 & 0.693054363094 & 0.7337637935922678 \\ \hline & -0.282918324688049 & 0.7073403491792361 & -0.27678081615893 & 0.734672939267592 \\ \hline & 0.691575606960029 & -0.722360355870564 & -0.261954268935126 & 0.50651426374409 \\ \hline & 0.691575606960028 & 0.691783591238766 & 0.69603177283332 & 0.4124586126342616 \\ \hline & 0.691575606960028 & 0.8615354240177974 & -0.26195426895126 & 0.294025738847668 \\ \hline & -0.28291832468805 & 0.708932471043466 & 0.693052436009393 & 0.536381048244394 \\ \hline & 0.691575606960028 & 0.067249317677481 & -0.690515811309941 & 0.715016341839065 \\ \hline & -0.282918324688048 & -0.910810772209135 & -0.27678081615893 & 0.330495727456156 \\ \hline & -0.282918324688049 & 0.9649876140782 & -0.90849849419782 & 0.2462327317412189 \\ \hline & -0.28291832468805 & -0.949608730336 & 0.693052436300938 & 0.248651857974508 \\ \hline & 0.691575606960028 & -0.988158764971828 & 0.696630178723831 & 0.128498060521655 \\ \hline & -0.989432805330447 & -0.0453872654634397 & -0.2326228786521 & 0.20319509703459 \\ \hline & 0.691575606960029 & 0.80470722830073 & -0.9069515811309941 & 0.096280480679767 \\ \hline & 0.691575606960028 & -0.75016835265182 & -0.960515811309941 & 0.118896976349644 \\ \hline & -0.989432805330449 & -0.053487256448396 & 0.703880237838895 & 0.148998888281412 \\ \hline & -0.28291832468805 & -0.695806658208385 & -0.980584894419782 & 0.14274873687 \\ \hline & -0.989432805330447 & 0.784042890
with 8 points in the bivariate case, see Figure 6(i), the algorithm tries \(t=5\) different points until finding the appropriate combination. Closer examination of the causes for this increase of iterative effort indicates that the most common cause is the violation of the constraint that the points must remain within the domain.
Figure 7: Locations and weights of the cubature rules generated during the sparsification process for the case of trivariate polynomials of degree \(p=3\) in \(\Omega=[-1,1]\times[-1,1]\times[-1,1]\). Variables \(t\) and \(k\) have the same interpretation as in Figures 5 and 6. The initial DECM rule has \((p+1)^{3}=64\) points (see Figure 7(a)), and the final rule \((p+1)^{3}/2^{3}=8\) points, see graph 7(m). The exact values of the coordinates and weights are given in Table 3, wherein it can be seen that the computed rule does coincide with the standard \(2\times 2\times 2\) product Gauss rule.
### Exponential-sinusoidal function
We next study the derivation of a cubature rule for the following parameterized, vector-valued function:
\[\mathbf{a}: \Omega\times\mathcal{D}\rightarrow\mathbb{R}^{6}, \tag{70}\] \[(\mathbf{x},\mathbf{\mu})\mapsto(a_{1},a_{2},\ldots a_{6})\]
where
\[\Omega=[-1,1]\times[-1,1]\times[-1,1],\hskip 28.452756pt\text{(spatial domain)} \tag{71}\] \[\mathcal{D}=[1,\pi]\times[1,\pi],\hskip 56.905512pt\text{( parametric domain)}\] (72) \[a_{1}=B(x_{1})C(x_{1},\mu_{1})E(x_{1},\mu_{1})+1\] \[a_{2}=B(x_{2})C(x_{2},\mu_{1})E(x_{2},\mu_{1})+1\] \[a_{3}=B(x_{1})C(x_{1},\mu_{1})E(x_{2},\mu_{1})+1\] (73) \[a_{4}=B(x_{2})C(x_{2},\mu_{1})E(x_{1},\mu_{1})+1\] \[a_{5}=B(x_{1})C(x_{1},\mu_{1})E(x_{3},\mu_{2})+1\] \[a_{6}=B(x_{3})C(x_{3},\mu_{2})E(x_{2},\mu_{1})+1\]
and
\[B(r)=1-r,\hskip 28.452756ptC(r,s)=\cos 3\pi s(r+1),\hskip 28.452756ptE(r,s)=e^{ (-1+r)s}. \tag{74}\]
We use a structured spatial mesh of \(30\times 30\times 30\) hexahedral elements, each element being equipped with a product Gauss rule of \(3\times 3\times 3\) points. Unlike the case of polynomials discussed in the foregoing, where we knew beforehand which was the space of functions to be integrated --the SVD only played a secondary, orthogonalizing role therein--, in this problem we have to delineate first the space in which the integrand lives. This task naturally confronts us with the question of how dense the sampling of the parametric space should be so that the column space of the corresponding integrand matrix \(\mathbf{A}_{FE}\) becomes representative of this linear space. We address this question here by gradually increasing the number of sampled points in parametric space, applying the SVD with a fixed user-prescribed truncation tolerance to the corresponding integrand matrix (here we use \(\epsilon_{SVD}=10^{-4}\)), and then examining when the rank of the approximation (number of retained singular values) appears to converge to a maximum value. Since there are only two parameters here, it is computationally affordable13 to conduct this exploration by uniformly sampling the parametric space.
Footnote 13: Higher parameter dimensions may require more sophisticated sampling strategies, such as the greedy adaptive procedure advocated in Ref. [5] for reduced-order modeling purposes.
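The sampling study just described can be mimicked with the following self-contained sketch (Python/numpy; it uses a coarse \(20\times 20\times 20\) grid of sample points rather than the paper's FE Gauss points, and a plain singular-value threshold in place of the truncation criterion of Eq. (17), so the rank figures will not coincide exactly with those of Table 4):

```
import numpy as np

def integrand(x, mu):                       # Eqs. (70)-(74); x: (M, 3) array, mu = (mu1, mu2)
    B = lambda r: 1.0 - r
    C = lambda r, s: np.cos(3.0 * np.pi * s * (r + 1.0))
    E = lambda r, s: np.exp((-1.0 + r) * s)
    x1, x2, x3 = x.T
    mu1, mu2 = mu
    return np.column_stack([
        B(x1) * C(x1, mu1) * E(x1, mu1) + 1.0,
        B(x2) * C(x2, mu1) * E(x2, mu1) + 1.0,
        B(x1) * C(x1, mu1) * E(x2, mu1) + 1.0,
        B(x2) * C(x2, mu1) * E(x1, mu1) + 1.0,
        B(x1) * C(x1, mu1) * E(x3, mu2) + 1.0,
        B(x3) * C(x3, mu2) * E(x2, mu1) + 1.0,
    ])

g = np.linspace(-1.0, 1.0, 20)              # coarse stand-in for the spatial integration points
X = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T    # M x 3

for n_samp in (4, 6, 8, 11):
    mus = np.linspace(1.0, np.pi, n_samp)
    cols = [integrand(X, (m1, m2)) for m1 in mus for m2 in mus]
    A = np.hstack(cols)                     # M x (6 * n_samp^2) integrand matrix
    s = np.linalg.svd(A, compute_uv=False)
    rank = int(np.sum(s > 1e-4 * s[0]))     # simplified relative truncation criterion
    print(f"n_samp = {n_samp:2d}  columns = {A.shape[1]:4d}  rank = {rank}")
```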
We show in Table 4 the result of this convergence study using both the standard SVD and the proposed Sequential Randomized SVD (described in Appendix A). The study has been devised so that the size of the integrand matrix doubles
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \cline{3-10} \multicolumn{1}{c}{} & & \multicolumn{2}{c|}{SVD} & \multicolumn{3}{c|}{SRSVD} & ERROR SING. VAL. \\ \hline \(n_{samp}\) & \(n_{col}\) & Size (GB) & Time (s) & Rank & \(N_{part}\) & \(N_{iter}^{avg}\) & Time (s) & Rank & \\ \hline
4 & 96 & 0.56 & 2.1 & 36 & 1 & 3 & 2.5 & 36 & 5.05E-15 \\ \hline
6 & 216 & 1.26 & 5.4 & 53 & 1 & 3 & 5.9 & 53 & 5.26E-15 \\ \hline
8 & 384 & 2.24 & 11.4 & 70 & 1 & 2 & 6.6 & 70 & 4.86E-15 \\ \hline
11 & 726 & 4.23 & 28.2 & 90 & 3 & 1.67 & 11.9 & 90 & 5.93E-14 \\ \hline
16 & 1536 & 8.96 & 84.7 & 122 & 4 & 1.67 & 21.3 & 122 & 3.66E-14 \\ \hline
22 & 2904 & 16.94 & 234.0 & 131 & 9 & 1.22 & 35.3 & 131 & 2.62E-13 \\ \hline
31 & 5766 & 33.63 & * & * & 16 & 1.06 & 65.2 & 133 & * \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of the performance of the proposed Sequential Randomized SVD (see Algorithm 6 in Appendix A) with respect to the standard SVD in determining an approximate orthogonal basis matrix (truncation tolerance \(\epsilon_{svd}=10^{-4}\)) for the column space of the integrand matrix of the vector-valued function (6 components) defined in Eq. (70). The number of spatial integration points is \(M=(30\cdot 3)^{3}=729000\), and the function is sampled at the points of uniform grids in parameter space of varying size \(n_{samp}\times n_{samp}\) (see first column). The number of columns of the integrand matrix is therefore \(n_{col}=6n_{samp}^{2}\), and its size (in gigabytes) equal to \(8\cdot 10^{-9}n_{col}M\). For the matrix of 33.63 GB (last row) there is no information on either the computing time or the rank (number of basis vectors) for the standard truncated SVD (using the built-in Matlab function \(\mathsf{svd}\), see Algorithm 7 in Appendix A), because the computation exhausted the memory capabilities of the employed 64 GB RAM computer. \(N_{part}\) denotes the number of partitions of the integrand matrix in the case of the SRSVD, and \(N_{iter}^{avg}\) the average number of iterations employed by the incremental randomized orthogonalization (see Algorithm 10 in Appendix A) for all the partitions. The rightmost column represents the relative difference between the singular values computed by both methods (\(\|\mathbf{S}_{svd}-\mathbf{S}_{srsvd}\|_{2}/\|\mathbf{S}_{srsvd}\|_{2}\)).
at each refinement step. Likewise, the block partition of the integrand matrix in the case of the SRSVD has been taken so that the size of each block matrix is approximately 2 GB. This convergence study reveals that the dimension of the linear space in which the integrand lies (for the prescribed tolerance) is around 130. The study also serves to highlight the advantages of the proposed SRSVD in terms of both computing time and memory requirements: for the matrix of size 16.94 GB, the SRSVD is almost 7 times faster than the SVD, and for the largest matrix of 33.63 GB, the standard SVD cannot handle the operation because it exhausts the memory capabilities of the employed computer (which has 64 GB RAM14); the SRSVD, by contrast, returns the result in approximately 1 minute. The reasons for this clear outperformance of the SRSVD over the SVD are further discussed in Section A.4 of Appendix A.
Footnote 14: The code is implemented in Matlab, and executed on an Intel(R) Core(TM) i7-8700 CPU, 3.20 GHz, with 64 GB RAM (Linux platform)
Figure 8(a) shows the DECM rule determined using the 133 left singular vectors provided by the SRSVD for the parametric grid of \(31\times 31\) points (see last row of Table 4), while Figure 8(c) displays the final 38-points CECM rule obtained after the sparsification process --the reduction factor is approximately 3.4. The variables controlling the sparsification are the same as those employed in the polynomial case. Further information about the sparsification process is displayed in Figures 8(b) and 8(d). The number of trials taken by the algorithm to find the weight to be zeroed (versus the percentage of zeroed weights) is shown in Figure 8(b), whereas Figure 8(d) represents the total number of accumulated nonlinear iterations (also versus the percentage of zeroed weights). It can be seen in Figure 8(b) that approximately 90 % of the weights are zeroed on the first trial; it is only in the last 10 % that the number of trials increases considerably. The same behavior is observed in terms of accumulated nonlinear iterations in Figure 8(d): while the first 90 % of the points are zeroed in 5 iterations on average, for the last 10 % of points, the number of iterations required for this very task rises sharply (close to 200 iterations in some cases). This is not only due to the increase in the number of attempts to zero the weights reflected in Figure 8(b), but also because, at this juncture of the sparsification process, the weights of the points are relatively large and, to ensure convergence, the problem of driving the integration residual to zero is solved in more
Figure 8: Cubature of the parameterized function defined in Eq. 70 in \(\Omega=[-1,1]\times[-1,1]\times[-1,1]\). The orthogonal basis vectors are determined by the SRSVD using a parametric grid of \(31\times 31\) points (see last row of Table 4). a) Initial interpolatory DECM rule. b) Number of passes over the loop that selects the weights to be zeroed in Algorithm 3 (see line 7) as a function of the percentage of zeroed weights. c) Final CECM rule. d) Total number of iterations of the modified Newton-Raphson scheme in Algorithm 5 as a function of the percentage of zeroed weights.
than one step15 (we take here \(N_{steps}=20\)).
Footnote 15: The sparsification process enters the second stage in line 5 of Algorithm 1
Obviously, this uneven distribution of iterations during the sparsification process translated into an equally uneven computing time distribution: zeroing the first 90 % of weights took less than 1 minute, whilst the remaining 10 % required about 8 minutes.
### Hyperreduction of multiscale finite element models
We conclude the validation of the proposed CECM by illustrating its use in the hyperreduction of finite element models. More specifically, attention is concentrated on the derivation of low-dimensional surrogate models in the context of multiscale finite element methods. The employed multiscale methodology is the _Empirical Interscale FE_ (EIFE) method developed by the first author and co-worker in Refs. [11] (for beam elements) and Ref. [18] (for solid elements). The chosen example is the modeling of the periodic structure displayed in Figure 9.a, made of tiling copies of the unit cell displayed in turn in Figure 9.b. The FE mesh of this unit cell, also depicted in Figure 9.b, is formed by 19356 nodes and \(N_{el}=4719\) quadratic quadrilateral elements, with \(r=9\) Gauss points per element (hence \(M=N_{el}r=42471\)). The areas in which stress concentrations are likely to appear are densely meshed to avoid both highly distorted elements and pronounced interelemental jumps --both issues are detrimental, not only to the accuracy of the FE analysis per se, but also to the accuracy of the final cubature rule.
The goal in the EIFE method is to replace the fine-scale representation of the unit cell in Figure 9.b by a surrogate coarse-scale element such as the one shown in Figure 9.c. The coarsening process involves two sequential stages; the first stage involves the reduction of the number of degrees of freedom (DOFs), and the second stage involves an additional reduction in complexity in terms of the number of integration points for each term of the governing equations (this is why the second stage is called "hyperreduction"). The particular details on how the first reduction of the number of DOFs is carried out are of no concern here --the reader interested in such details is referred to Ref. [18]. It suffices to say that the process involves first running FE analyses in domains comprising several cells (such as the one in Figure 9.a) under appropriate prescribed displacements, and then extracting characteristic deformational patterns of the unit cell in the center via the
Figure 9: Assessment of the performance of the CECM in the hyperreduction of multiscale finite elements. a) Periodic structure under study. b) Fine-scale mesh of the unit cell for which the coarse-scale representation is required (\(a=0.195\) m, \(\alpha=0.5135\)). The total number of Gauss points is \(M=42471\). c) Coarse-scale representation of the unit cell, possessing 8 degrees of freedom and a number of integration points to be determined by the CECM. d) Deformational modes of the unit cell. The integral to be tackled by the CECM is the projection of the fine-scale nodal internal forces onto the span of these modes (see Eq. 75).
SVD. In the case studied here, in which the coarse model has 8 DOFs and the FE analyses are conducted, for simplicity of exposition16, in the linear elastic regime (Young's modulus \(E=70000\) MPa and Poisson's ratio \(\nu=0.3\), in plane strain), the resulting deformational patterns are the \(n=5\) modes whose deformed shapes are displayed in Figure 9.d.
Footnote 16: The procedure can be applied to any constitutive stress-strain law.
Our interest lies in the hyperreduction stage, in which one has to devise efficient cubature rules for each one of the integrals appearing in the (reduced) governing equations. We focus here on the internal force term, but a similar procedure can be followed for the integrals associated with body forces and surface tractions. The reduced internal forces are given by the projection onto the space spanned by the deformational modes \(\mathbf{\Phi}\) of the FE nodal internal forces \(\mathbf{F_{int}}\):
\[\mathbf{\Phi}^{T}\mathbf{F_{int}}=\int_{\Omega}\,\mathbf{\Phi}^{T}\mathbf{B}_{FE}^{T}\mathbf{ \sigma}\ d\Omega. \tag{75}\]
Here, \(\mathbf{B}_{FE}\) denotes the standard strain-displacement FE matrix (in its globally supported format), whereas \(\mathbf{\sigma}\) is the stress vector. Our parametric integrand is therefore equal to \(\mathbf{a}=(\mathbf{B}_{FE}\mathbf{\Phi})^{T}\mathbf{\sigma}\), the work per unit volume done over virtual strains of the form \(\mathbf{B}_{FE}\mathbf{\Phi}_{i}\) (\(i=1,2\ldots n\)) by the stresses \(\mathbf{\sigma}\) caused in turn by strains also of the form \(\mathbf{\varepsilon}=(\mathbf{B}_{FE}\mathbf{\Phi})\mathbf{q}\) (Galerkin projection). It follows then that \(\mathbf{\mu}=\mathbf{q}\), that is, in this problem, the amplitude of the deformational modes in the expression for the stresses plays the role of input parameters. Since we are assuming linear elastic behavior, the number of possible stress states is equal to \(n=5\) as well. This implies that the integrand matrix \(\mathbf{A}_{FE}\) is formed by \(n^{2}=25\) columns, which are all the possible combinations of stress modes times virtual strain modes. The SVD (step 4 in Box 5.1 ) of this matrix ( using a fairly low truncation tolerance, \(\epsilon_{svd}=10^{-10}\), to eliminate numerical errors) reveals that there are redundant work modes: out of the \(n^{2}=25\) work modes, only 15 are linearly independent17. By augmenting the integrand basis with an additional vector for accounting for the integration of the volume (step 5 in Box 5.1 ), we end up with 16 basis functions for the integrand; the contour lines of these 16 functions are depicted in Figure 10.
Footnote 17: It can be readily seen that the existence of these 10 redundant work patterns is nothing but the consequence of the symmetry of the elastic problem --Betti's reciprocity theorem [31]. In general, if there are \(n\) deformational modes, the number of independent work modes will be equal to \((n+1)n/2\).
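The counting argument of the previous paragraph (and of Footnote 17) is easy to check numerically. The following sketch is ours and uses random strain-mode fields and a generic symmetric constitutive matrix instead of the actual FE data; it confirms that only \(n(n+1)/2=15\) of the \(n^{2}=25\) work-mode columns are linearly independent.

```
import numpy as np

rng = np.random.default_rng(0)
n, M = 5, 2000                                        # number of modes and of sampling points
eps_modes = rng.standard_normal((M, 3, n))            # surrogate strain fields B_FE @ Phi_i
Cmat = np.array([[2.0, 0.5, 0.0],                     # any *symmetric* elasticity-like matrix
                 [0.5, 2.0, 0.0],
                 [0.0, 0.0, 1.0]])

# Column (i, j): pointwise work density eps_i^T C eps_j
A = np.column_stack([np.einsum('pc,cd,pd->p', eps_modes[:, :, i], Cmat, eps_modes[:, :, j])
                     for i in range(n) for j in range(n)])
print(A.shape, np.linalg.matrix_rank(A))              # (2000, 25) and rank 15 = n(n+1)/2
```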
The 16-points interpolatory DECM rule corresponding to these 16 functions is displayed in turn in Figure 11.a. The fact that the weights of 8 of the 16 points are comparatively small gives an indication that the number of points can be further reduced by the proposed sparsification process. The result of this sparsification process is displayed in Figures 11.b
Figure 10: Contour lines corresponding to the 16 basis functions for the integrand in Eq. 75, which represents the virtual work per unit volume (for the case in which the reduced basis \(\mathbf{\Phi}\) is formed by the \(n=5\) modes shown previously in Figure 9.d)
to 11.k. The number of trials and accumulated nonlinear iterations are also shown in the captions of these Figures. As observed with the 3D analytical function in Section 6.3, zeroing the last 10 % of the weights (which in this case is just the last step) requires significantly more iterations than the previous 90 %. Nevertheless, the total computing time is not significant --less than 6 seconds for the entire procedure, including the computation of the DECM rule. To assess the final integration error, we compare the reduced stiffness matrix
\[\mathbf{\Phi}^{T}\mathbf{K}\mathbf{\Phi}=\int_{\Omega}\,\mathbf{\Phi}^{T}(\mathbf{B}_{FE}^{T}\mathbf{C}\mathbf{B}_{FE})\mathbf{\Phi}\ d\Omega. \tag{76}\]
(here \(\mathbf{K}\) is the nodal stiffness matrix of the unit cell, and \(\mathbf{C}\) the corresponding elasticity matrix) computed by both the original 42471-points Gauss element rule and the 6-points CECM rule. The difference turns out to be below 0.005 %. This result demonstrates, on the one hand, that the computed 6-points CECM rule effectively encodes the physics of the
Figure 11: Locations and weights of the cubature rules generated during the sparsification process for the case of the virtual internal work per unit volume in Eq. 75 (integrand functions shown previously in Figure 10). The red circle in each graph indicates the point whose weight is to be zeroed in the following step. Variables \(t\) and \(k\), on the other hand, have the same interpretation as in Figure 5. The initial DECM rule has 16 points (see Figure 11(a)), while the final rule features \(m=6\) points, see Figure 11(k). The total reduction in number of integration points is \(M/m=42471/6=7078.5\).
coarsened unit cell, and on the other hand, that the proposed cubature algorithm is able to deal with complex domains in which the integrand function is only defined at some points of such domain --the FE Gauss points. In fact, it should be remarked that, since the tolerance employed in the SVD is negligible, this small error is to be exclusively attributed to the fitting procedure outlined in Section 2.3.2 for constructing approximations of the integrand at element level.
## 7 Conclusions
In this paper we presented the Continuous Empirical Cubature Method (CECM), a novel algorithm designed to enhance the efficiency of numerical integration rules. The CECM is a two-stage algorithm whose first stage consists in the application of a point selection strategy for obtaining an interpolatory rule --featuring as many points as functions to integrate. We have used for this purpose the Discrete Empirical Cubature Method (DECM) [17; 16]. Then, for the second stage, we have applied a sparsification strategy whose aim is to render zero the associated weights of as many points as possible. To this end, the locations of the initially selected points were changed following a modified Newton-Raphson algorithm whose details have been outlined within the text (the code has also been made available at [https://github.com/Rbravo555/CECM-continuous-empirical-cubature-method](https://github.com/Rbravo555/CECM-continuous-empirical-cubature-method)).
The versatility of the method was highlighted in the numerical assessment section, showcasing its effectiveness across a diverse range of problems. For the case of univariate and multivariate Lagrange polynomials, the CECM was able to recover the optimal Gaussian rule whenever the number of functions to integrate was odd. For an even number of functions to integrate, the number of points still coincides with that of the Gauss rule, although their locations differ. Reductions in required points were observed, with 1D domains requiring half, 2D domains requiring one fourth, and 3D domains requiring one eighth of the initial interpolatory rule.
Example 6.3 showcases an exponential-sinusoidal function in a 3D domain. For this case, the CECM reduced the number of points from the 133 of the original DECM rule to 38. This example also showcased a scenario where the size of the matrices involved would render infeasible the computation of the SVD on a regular desktop computer. The SVD was computed using the sequential randomized SVD (SRSVD) algorithm also presented in this paper. The SRSVD allowed the required orthogonal basis to be computed efficiently for matrices of up to 33 GB in size.
Finally, the CECM was applied to an empirical interscale finite element (EIFE) example, for which the number of points was reduced from the original fine scale 42471 points to a 6-point rule, while incurring a negligible integration error of 0.005%.
Overall, the CECM algorithm presents an innovative approach to generating optimal integration rules. Its incorporation into frameworks that allow input of quadrature points' positions leads to substantial improvements in integral evaluation performance compared to standard interpolatory rules.
## Acknowledgements
This work is sponsored in part by the Spanish Ministry of Economy and Competitiveness, through the _Severo Ochoa Programme for Centres of Excellence in R&D_ (CEX2018-000797-S). The authors also acknowledge the support of the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955558 (the JU receives, in turn, support from the European Union's Horizon 2020 research and innovation programme and Spain, Germany, France, Italy, Poland, Switzerland, Norway). J.A. Hernandez expresses gratitude for the support of, on the one hand, the "MCIN/AEI/10.13039/501100011033/_y por FEDER una manera de hacer Europa_" (PID2021-122518OB-I00), and, on the other hand, the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 952966 (project FIBREGY). Lastly, both J.R. Bravo and S. Ares de Parge acknowledge the _Departament de Recerca i Universitats de la Generalitat de Catalunya_ for the financial support through doctoral grants FI-SDUR 2020 and FI-SDUR 2021, respectively.
## Appendix A Sequential Randomized SVD
### Overview
In this Appendix we explain and provide the implementational details (see Algorithm 6 ) of the proposed procedure for computing the SVD of a partitioned matrix \(\mathbf{A}=[\mathbf{A}_{1},\mathbf{A}_{2}\ldots\mathbf{A}_{p}]\) (\(\mathbf{A}_{i}\in\mathbb{R}^{n\times m_{i}}\), with \(m=\sum_{i=1}^{p}m_{i}\)), alluded to in Remark 2.1. The method is based on the same idea behind other _randomized_ algorithms [14], namely, that the SVD of a matrix \(\mathbf{A}\) (\(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^{T}\)) can be alternatively computed from the matrices of the SVD of \(\mathbf{L}=\mathbf{Q}^{T}\mathbf{A}\), \(\mathbf{Q}\in\mathbb{R}^{n\times r}\) being an arbitrary orthogonal basis matrix for the column space of \(\mathbf{A}\) (here \(r\leq\min\left(n,m\right)\) denotes the rank of the matrix). More
specifically, given \(\mathbf{L}=\mathbf{\bar{U}}\mathbf{\bar{S}}\mathbf{\bar{V}}^{T}\), then \(\mathbf{U}=\mathbf{Q}\mathbf{\bar{U}}\), \(\mathbf{S}=\mathbf{\bar{S}}\) and \(\mathbf{\bar{V}}=\mathbf{V}\). The proof of this property follows from expressing \(\mathbf{A}\) as \(\mathbf{A}=\mathbf{Q}(\mathbf{Q}^{T}\mathbf{A})\) and replacing \(\mathbf{Q}^{T}\mathbf{A}\) by its SVD:
\[\mathbf{A}=\mathbf{Q}(\mathbf{Q}^{T}\mathbf{A})=(\mathbf{Q}\mathbf{\bar{U}})\mathbf{\bar{S}}\mathbf{\bar{V}}^{T}. \tag{77}\]
Both \(\mathbf{\bar{S}}\) and \(\mathbf{\bar{V}}\) arise from an SVD, and therefore, are diagonal with positive entries and orthogonal, respectively --and \(\mathbf{\bar{V}}\) is a basis matrix for the row space of \(\mathbf{L}=\mathbf{Q}^{T}\mathbf{A}\), which is the same as the row space of \(\mathbf{A}\). Furthermore, \(\mathbf{U}\) is also an orthogonal matrix:
\[\mathbf{U}^{T}\mathbf{U}=(\mathbf{Q}\mathbf{\bar{U}})^{T}(\mathbf{Q}\mathbf{\bar{U}})=\mathbf{\bar{U}}^{T} (\mathbf{Q}^{T}\mathbf{Q})\mathbf{\bar{U}}=\mathbf{I}. \tag{78}\]
Therefore, it follows from the uniqueness of the SVD (up to the signs of the left- and right- singular vectors) that the factorization \(\mathbf{U}\mathbf{\bar{S}}\mathbf{\bar{V}}^{T}\) is the SVD of \(\mathbf{A}\), as asserted.
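The property is straightforward to verify numerically; the following NumPy check (ours) reproduces it for a random low-rank matrix.

```
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 500, 80, 12
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))       # a rank-r matrix

# Any orthogonal basis of col(A) works; generically its first r columns already span col(A).
Q, _ = np.linalg.qr(A[:, :r])
L = Q.T @ A
Ubar, S, Vt = np.linalg.svd(L, full_matrices=False)
U = Q @ Ubar                                                         # left singular vectors of A

print(np.linalg.norm(A - U @ np.diag(S) @ Vt) / np.linalg.norm(A))  # ~ machine precision
print(np.allclose(U.T @ U, np.eye(r)))                              # True: U has orthonormal columns
```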
It can be readily shown that this property also holds for the case in which _truncation_ is introduced. Besides, since \(\mathbf{S}=\mathbf{\bar{S}}\), the truncation threshold for the SVD of \(\mathbf{L}\) is the same as the input truncation threshold18\(\epsilon\), that is,
Footnote 18: This is not exactly true in the limiting case \(\epsilon\to 0\), when the truncation criterion is established in terms of a machine-dependent precision parameter, which also depends on the size of the matrix being factorized (see Algorithm 6, Line 3)
\[\|\mathbf{L}-\mathbf{\bar{U}}\mathbf{\bar{S}}\mathbf{\bar{V}}^{T}\|_{F}\leq\epsilon\|\mathbf{L} \|_{F}=\epsilon\|\mathbf{A}\|_{F}. \tag{79}\]
This general strategy for computing the SVD of a matrix proves advantageous only when the following two conditions are met. Firstly, the rank of the matrix \(r\) should be significantly smaller19 than the number of columns and rows of the matrix --because otherwise the SVD of \(\mathbf{L}=\mathbf{Q}^{T}\mathbf{A}\) could become as costly as the original SVD. In the context of reduced-order models, this property is expected to hold --and if it does not, it means that the parameterized boundary value problem we intend to solve might not be amenable to dimensionality reduction. The other condition is that the computation of the orthogonal basis matrix \(\mathbf{Q}\) should be efficient, in the sense that it should be carried out by an algorithm in which the asymptotic count of floating point operations (flops) is less than that required by the standard SVD itself.
Footnote 19: This first condition may be relaxed by making \(\mathbf{A}\approx\mathbf{Q}\mathbf{Q}^{T}\mathbf{A}\), that is, by determining an approximated basis matrix for the column space of \(\mathbf{A}\). However, it should be noticed that, in this case, the right-singular vector matrix \(\mathbf{V}\) ceases to be a basis matrix for the row space of \(\mathbf{A}\), and therefore, Eq. 30 would not hold.
```
1Function\([\mathbf{U},\mathbf{S},\mathbf{V}]\leftarrow\texttt{SRSVD}([\mathbf{A}_{1},\mathbf{A}_{2},\dots\mathbf{ A}_{p}],\,\epsilon)\) Data:\([\mathbf{A}_{1},\mathbf{A}_{2},\dots\mathbf{A}_{q}]\): Partitioned matrix (with \(\mathbf{A}_{i}\in\mathbb{R}^{n\times m_{i}}\), and \(m=\sum_{i=1}^{p}m_{i}\) ). \(\epsilon\in[0,1]\) : relative error threshold Result: Truncated Singular Value Decomposition (with relative truncation threshold \(\epsilon\)) of \(\mathbf{A}\in\mathbb{R}^{n\times m}\), i.e.: \(\mathbf{A}\approx\mathbf{U}\mathbf{S}\mathbf{V}^{T}\), where \(\mathbf{U}\in\mathbb{R}^{n\times k}\) is the orthogonal matrix of left singular vectors, \(\mathbf{S}\in\mathbb{R}^{k\times k}\) is the diagonal matrix of positive singular values, and \(\mathbf{V}\in\mathbb{R}^{m\times k}\) is the orthogonal matrix of right singular vectors. Here \(k\leq\min{(n,m)}\) denotes the number of retained singular vectors upon truncation, which is, by definition of SVD, the lowest number of vectors such that \(\|\mathbf{A}-\mathbf{U}\mathbf{S}\mathbf{V}^{T}\|_{F}\leq\epsilon\|\mathbf{A}\|_{F}\).
2\([\mathbf{Q},\,\mathbf{L}]\leftarrow\texttt{SRORTH}([\mathbf{A}_{1},\mathbf{A}_{2},\dots\mathbf{A}_{p}])\) // Factorization\(\mathbf{A}=\mathbf{Q}\mathbf{L}\), where \(\mathbf{Q}^{T}\mathbf{Q}=\mathbf{I}\) ( see Algorithm 8)
3\(\epsilon_{L}\leftarrow\epsilon\,\|\mathbf{L}\|_{F}\); \(\mu_{mach}=\max{(n,m)}\texttt{eps}(\|\mathbf{L}\|_{F})\) // Truncation thresholds (see definition of eps in Algorithm 7).
4\([\mathbf{\bar{U}},\mathbf{S},\mathbf{V}]\leftarrow\texttt{SVD}\) (\(\mathbf{L}\), \(\epsilon_{L}\),\(\mu_{mach}\)) // Truncated SVD (see Algorithm 7)..
5\(\mathbf{U}\leftarrow\mathbf{Q}\mathbf{\bar{U}}\) // Matrix of left singular vectors
```
**Algorithm 6** Sequential Randomized Singular Value Decomposition (SRSVD) of a partitioned matrix \(\mathbf{A}=[\mathbf{A}_{1},\mathbf{A}_{2},\dots\mathbf{A}_{p}]\).
### Sequential Randomized Orthogonalization
#### a.2.1 Infinite-precision arithmetic
To meet this latter condition, we propose to determine \(\mathbf{Q}\) by the _Sequential Randomized Orthogonalization_ (SRORTH) invoked in Line 2 of Algorithm 6, and with pseudo-code outlined in Algorithm 8. The qualifier _sequential_ refers to the fact that the method only processes _one block matrix at a time_, thus alleviating potential memory bottlenecks. On the other hand, we call it _randomized_ because one of the factorizations employed by the algorithm is carried out by a modified version of the _incremental randomized SVD_ proposed in Ref. [25].
Let us describe first the overall structure of this orthogonalization procedure, without delving into the randomized part of the algorithm, which will be treated later on, in Section A.3. In essence, the procedure is a Gram-Schmidt orthogonalization which operates, rather than on single vectors, on block matrices. Accordingly, \(\mathbf{Q}\) is constructed as the concatenation of basis matrices (one for each block matrix \(\mathbf{A}_{i}\)):
\[\mathbf{Q}=\begin{bmatrix}\mathbf{\Delta}\mathbf{Q}_{1}&\mathbf{\Delta}\mathbf{Q}_{2}&\cdots&\mathbf{ \Delta}\mathbf{Q}_{p}\end{bmatrix} \tag{80}\]
where \(\mathbf{\Delta}\mathbf{Q}_{i}^{T}\mathbf{\Delta}\mathbf{Q}_{j}=\mathbf{0}\) if \(i\neq j\), and \(\mathbf{\Delta}\mathbf{Q}_{i}^{T}\mathbf{\Delta}\mathbf{Q}_{i}=\mathbf{I}\) (\(i=1,2\ldots p\)). In turn, these orthogonal submatrices are computed by the recursion
\[\mathbf{\Delta}\mathbf{Q}_{1} =\texttt{0RTH}(\mathbf{A}_{1}); \mathbf{Q}^{(1)} =\mathbf{\Delta}\mathbf{Q}_{1}; \mathbf{P}_{1} =\mathbf{Q}^{(1)}{}^{T}\mathbf{A}_{1} \tag{81}\] \[\mathbf{\Delta}\mathbf{Q}_{2} =\texttt{0RTH}(\mathbf{A}_{2}-\mathbf{\Delta}\mathbf{Q}_{1}\mathbf{\Delta}\mathbf{Q} _{1}{}^{T}\mathbf{A}_{2}); \mathbf{Q}^{(2)} =[\mathbf{Q}^{(1)},\mathbf{\Delta}\mathbf{Q}_{2}]; \mathbf{P}_{2} =\mathbf{Q}^{(2)}{}^{T}\mathbf{A}_{2}\] \[\vdots \vdots \mathbf{\Delta}\mathbf{Q}_{i} =\texttt{0RTH}(\mathbf{A}_{i}-\mathbf{Q}^{(i-1)}\mathbf{Q}^{(i-1)^{T}}\mathbf{A}_ {i}); \mathbf{Q}^{(i)} =[\mathbf{Q}^{(i-1)},\mathbf{\Delta}\mathbf{Q}_{i}]; \mathbf{P}_{i} =\mathbf{Q}^{(i)}{}^{T}\mathbf{A}_{i}\] \[\vdots \vdots \mathbf{\Delta}\mathbf{Q}_{p} =\texttt{0RTH}(\mathbf{A}_{p}-\mathbf{Q}^{(p-1)}\mathbf{Q}^{(p-1)^{T}}\mathbf{A}_ {p}); \mathbf{Q} =[\mathbf{Q},\mathbf{\Delta}\mathbf{Q}_{p}]; \mathbf{P}_{p} =\mathbf{Q}^{T}\mathbf{A}_{p}\]
(here \(\texttt{0RTH}(\bullet)\) symbolizes the function that determines an orthogonal basis matrix for the column space of its input). Notice that this procedure for determining \(\mathbf{Q}\) need not store in the computer's main memory the entire matrix, but just one block matrix at a time, as asserted earlier. The other matrix in the factorization, \(\mathbf{L}=\mathbf{Q}^{T}\mathbf{A}\), can be also constructed incrementally as the algorithm progresses20. Indeed, by exploiting that \(\mathbf{\Delta}\mathbf{Q}_{j}^{T}\mathbf{A}_{i}=\mathbf{0}\) if \(j>i\), we have that
Footnote 20: Note that this statement would cease to be true in the case of approximated basis matrices for the range of \(\mathbf{A}\) (\(\mathbf{A}\approx\mathbf{Q}\mathbf{Q}^{T}\mathbf{A}\)).
\[\mathbf{L}=\begin{bmatrix}\mathbf{\Delta}\mathbf{Q}_{1}^{T}\mathbf{A}_{1}&\mathbf{\Delta}\mathbf{Q}_{ 1}^{T}\mathbf{A}_{2}&\cdots&\mathbf{\Delta}\mathbf{Q}_{1}^{T}\mathbf{A}_{p}\\ \mathbf{\Delta}\mathbf{Q}_{2}^{T}\mathbf{A}_{1}&\mathbf{\Delta}\mathbf{Q}_{2}^{T}\mathbf{A}_{2}&\cdots &\mathbf{\Delta}\mathbf{Q}_{2}^{T}\mathbf{A}_{p}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{\Delta}\mathbf{Q}_{p}^{T}\mathbf{A}_{1}&\mathbf{\Delta}\mathbf{Q}_{p}^{T}\mathbf{A}_{1}&\cdots &\mathbf{\Delta}\mathbf{Q}_{p}^{T}\mathbf{A}_{p}\end{bmatrix}=\begin{bmatrix}\mathbf{\Delta} \mathbf{Q}_{1}^{T}\mathbf{A}_{1}&\mathbf{\Delta}\mathbf{Q}_{1}^{T}\mathbf{A}_{2}&\cdots&\mathbf{ \Delta}\mathbf{Q}_{1}^{T}\mathbf{A}_{p}\\ \mathbf{0}&\mathbf{\Delta}\mathbf{Q}_{2}^{T}\mathbf{A}_{2}&\cdots&\mathbf{\Delta}\mathbf{Q}_{2}^{T}\mathbf{ A}_{p}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\cdots&\mathbf{\Delta}\mathbf{Q}_{p}^{T}\mathbf{A}_{p}\end{bmatrix}. \tag{82}\]
Inspection of the nonzero entries in each column of the right-most matrix of the above equation shows that these entries are expressible in terms of the matrices \(\mathbf{P}_{i}\) (\(i=1,2\ldots p\)) appearing in the recursion formulas 81 as follows:
\[\mathbf{\Delta}\mathbf{Q}_{1}^{T}\mathbf{A}_{1}=\mathbf{P}_{1};\quad\begin{bmatrix}\mathbf{\Delta} \mathbf{Q}_{1}^{T}\\ \mathbf{\Delta}\mathbf{Q}_{2}^{T}\\ \vdots\\ \mathbf{\Delta}\mathbf{Q}_{i}^{T}\end{bmatrix}\mathbf{A}_{2}=\mathbf{P}_{2};\quad\ldots\quad \begin{bmatrix}\mathbf{\Delta}\mathbf{Q}_{1}^{T}\\ \mathbf{\Delta}\mathbf{Q}_{2}^{T}\\ \vdots\\ \mathbf{\Delta}\mathbf{Q}_{i}^{T}\end{bmatrix}\mathbf{A}_{i}=\mathbf{P}_{i}\quad\ldots\quad \begin{bmatrix}\mathbf{\Delta}\mathbf{Q}_{1}^{T}\\ \mathbf{\Delta}\mathbf{Q}_{2}^{T}\\ \vdots\\ \mathbf{\Delta}\mathbf{Q}_{p}^{T}\end{bmatrix}\mathbf{A}_{p}=\mathbf{P}_{p}, \tag{83}\]
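In exact arithmetic, the recursion (81)-(83) translates almost verbatim into code. The following NumPy sketch (ours) is intended only to illustrate the data flow; it omits the re-orthogonalization and the randomized range finder discussed below, and uses a plain truncated SVD as the ORTH operation.

```
import numpy as np

def srorth_naive(blocks, tol=1e-12):
    """Return Q with orthonormal columns spanning col([A_1 ... A_p]) and L = Q^T A."""
    Q, P = None, []
    for Ai in blocks:
        dA = Ai if Q is None else Ai - Q @ (Q.T @ Ai)        # component orthogonal to current Q
        U, s, _ = np.linalg.svd(dA, full_matrices=False)
        dQ = U[:, s > tol * np.linalg.norm(Ai)]              # truncation relative to ||A_i||_F
        Q = dQ if Q is None else np.hstack([Q, dQ])
        P.append(Q.T @ Ai)                                    # P_i = Q^(i)T A_i, as in Eq. (81)
    r = Q.shape[1]
    # Pad each P_i with zero rows so that the block columns line up as in Eq. (82)
    L = np.hstack([np.vstack([Pi, np.zeros((r - Pi.shape[0], Pi.shape[1]))]) for Pi in P])
    return Q, L

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 20)) @ rng.standard_normal((20, 90))   # rank-20 matrix
Q, L = srorth_naive(np.array_split(A, 3, axis=1))
print(Q.shape, np.linalg.norm(A - Q @ L) / np.linalg.norm(A))         # (300, 20), error ~ 1e-15
```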
#### a.2.2 Finite-precision arithmetic
The preceding recursive scheme would in principle work seamlessly in an ideal infinite-precision arithmetic scenario. Yet the devil is in the details, and when moving to the real case of finite-precision arithmetic, its performance is seriously afflicted by sensitivity to rounding errors and loss of orthogonality over multiple steps --as it occurs with the classical Gram-Schmidt orthogonalization [13]. The computational implementation described in Algorithm 8 incorporates ingredients that mitigate these deleterious effects. Re-orthogonalization is carried out in Line 8 by determining the component of \(\mathbf{\Delta Q}\) which is orthogonal to the current basis \(\mathbf{Q}\), and then applying the SVD again, setting the corrected \(\mathbf{\Delta Q}\) equal to the matrix of left-singular vectors of such a decomposition. The effect of rounding errors, on the other hand, is treated by computing the orthogonal basis matrix \(\mathbf{\Delta Q}\) as the left-singular vectors of the truncated SVD of \(\mathbf{\Delta A}=\mathbf{A}_{i}-\mathbf{Q}\mathbf{Q}^{T}\mathbf{A}_{i}\) (see Line 12), but using as truncation tolerance a machine-dependent precision parameter based on the norm of \(\mathbf{A}_{i}\):
\[\mu_{mach}=\max(n,m_{i})\,\mathsf{eps}(\|\mathbf{A}_{i}\|_{F}) \tag{84}\]
(see Line 10) rather than the default option, which would be in terms of the norm of the input matrix \(\mathbf{\Delta A}\) ( the definition of \(\mathsf{eps}(x)\) is given in the description of Algorithm 7).
```
1Function\([\mathbf{Q},\,\mathbf{L}]\leftarrow\texttt{SRORTH}([\mathbf{A}_{1},\mathbf{A}_{2},\ldots\mathbf{A}_{p}])\) Data:\([\mathbf{A}_{1},\mathbf{A}_{2},\ldots\mathbf{A}_{p}]\): Partitioned matrix (with \(\mathbf{A}_{i}\in\mathbb{R}^{n\times m_{i}}\)), and \(m=\sum_{i=1}^{p}m_{i}\)) Result: Factorization \(\mathbf{A}=\mathbf{Q}\mathbf{L}\), where \(\mathbf{Q}\in\mathbb{R}^{n\times r}\) is an orthogonal basis matrix for the column space of \(\mathbf{A}\) (\(r\leq\min(n,m)\) is the numerical rank of \(\mathbf{A}\)), while \(\mathbf{L}=\mathbf{Q}^{T}\mathbf{A}\). \(\mathbf{Q}\leftarrow\emptyset,\;\;\mathbf{P}_{i}\leftarrow\emptyset,\;\;i=1,2\ldots p\)// Initializations for\(i=1\)to\(p\)do
2\(\mathbf{\Delta Q}\leftarrow\emptyset\)// Incremental basis matrix if\(i=1\)then
3\(\mathbf{\Delta A}\leftarrow\mathbf{A}_{i};\;\;r\leftarrow\texttt{ceil}(0.01\min(n,m_{i}))\)// \(r\): Estimation rank\(\mathbf{A}_{i}\)
4else
5\(\mathbf{\Delta A}\leftarrow\mathbf{A}_{i}-\mathbf{Q}\mathbf{Q}^{T}\mathbf{A}_{i}\)// Component of \(\mathbf{A}_{i}\) orthogonal to the column space of \(\mathbf{Q}\)
6 end if
7\(\mu_{mach}\leftarrow\max(n,m_{i})\,\mathsf{eps}(\|\mathbf{A}_{i}\|_{F})\)// Machine-dependent precision parameter if\(\|\mathbf{\Delta A}\|_{F}>\mu_{mach}\)then
8\([\mathbf{\Delta Q},\bullet,\bullet]\leftarrow\texttt{RSVDinc}(\mathbf{\Delta A},0,\mu_{ mach},r)\)// Incremental randomized SVD, see Algorithm 9 if\(i>1\)then\([\mathbf{\Delta Q},\bullet]\leftarrow\texttt{SVD}((\mathbf{\Delta Q}-\mathbf{Q}\mathbf{Q}^{T}\mathbf{\Delta Q}),0)\)// Re-orthogonalization \(\mathbf{Q}\leftarrow[\mathbf{Q},\mathbf{\Delta Q}]\)// Basis matrix augmented with the columns of \(\mathbf{\Delta Q}\)
9 end if
10\(\mathbf{P}_{i}\leftarrow\mathbf{Q}^{T}\mathbf{A}_{i}\)\(r\leftarrow\texttt{ncol}(\mathbf{\Delta Q})\)// Number of columns of \(\mathbf{\Delta Q}\) (estimation for the rank of the next submatrix in Line 12)
11 end for\(\mathbf{L}\leftarrow\) Use \(\mathbf{P}_{1},\mathbf{P}_{2}\ldots\mathbf{P}_{p}\) to construct \(\mathbf{L}\) according to expressions (82) and (83)
```
**Algorithm 8**Sequential Randomized Orthogonalization of a partitioned matrix \(\mathbf{A}=[\mathbf{A}_{1},\mathbf{A}_{2},\ldots\mathbf{A}_{p}]\) (employed in Line 2 of Algorithm 6).
### Incremental randomized SVD
Let us focus now on the randomized ingredient of the method, which is the _Incremental Randomized SVD_ (RSVDinc) invoked in Line 12 of Algorithm 8 for determining the basis matrix for \(\mathbf{\Delta A}\). As commented previously, this randomized SVD is partially based on the adaptive randomized algorithm proposed in Ref. [25], and it is the actual ingredient that renders the proposed scheme faster than the standard "deterministic" SVD when the rank of the input matrix is significantly smaller than the minor dimension of the matrix. The reason is that, as argued in Ref. [25] (see also [24]), the asymptotic cost of this type of randomization algorithms is \(\mathcal{O}(nmr)\), \(r\) being the rank of the matrix --as opposed to the standard SVD, whose cost is independent of the rank and scales quadratically with its minor dimension21.
The pseudo-code of this randomized SVD is described in Algorithm 9. The basic steps are identical to the ones employed in the SRSVD of Algorithm 6, namely, 1) determination of an orthogonal basis matrix \(\mathbf{H}\) for the column space (range) of the input matrix \(\mathbf{\Delta A}\) in Line 2; and 2) truncated SVD of \(\mathbf{B}=\mathbf{H}^{T}\mathbf{\Delta A}\) in Line 4. The computation of the basis matrix in the first step also shares common features with the one employed in the SRSVD of Algorithm 6. Indeed, as can be inferred from the pseudo-code of Algorithm 10 of the function devoted to this task (RORTHinc), the desired orthogonal basis matrix \(\mathbf{\Delta H}\) is built iteratively in a Gram-Schmidt orthogonalization fashion:
\[\mathbf{\Delta H}=\begin{bmatrix}\mathbf{\Delta H}_{1}&\mathbf{\Delta H}_{2}&\cdots&\mathbf{ \Delta H}_{s}\end{bmatrix},\quad\text{where}\quad\mathbf{\Delta H}_{i}^{T}\mathbf{ \Delta H}_{j}=\mathbf{0}\;(i\neq j),\quad\;\mathbf{\Delta H}_{i}^{T}\mathbf{\Delta H}_{i}= \mathbf{I}. \tag{85}\]
The actual difference with SRORTH in Algorithm 8 is that, at a given iteration \(i\), the corresponding orthogonal matrix \(\mathbf{\Delta H}_{i}\) is not determined from a column partition of the input matrix, but rather as an orthogonal basis matrix for the range of the matrix defined by
\[\mathbf{Y}_{i}=\frac{1}{\sqrt{n\,\Delta R_{i}}}\mathbf{\Omega}_{i}\mathbf{C}_{i},\qquad i=1,2\ldots s \tag{86}\]
(see Lines 4 and 5 in Algorithm 10). Here, \(\mathbf{C}_{i}\in\mathbb{R}^{n\times m}\) is the residual matrix at iteration \(i\), whereas \(\mathbf{\Omega}_{i}\) is an \(n\times\Delta R_{i}\) standard Gaussian test matrix (a _random_ matrix whose entries are independent standard normal variables). The distinguishing feature of our algorithm with respect to the original scheme put forward in Ref. [25] is that, in our case, the number of columns \(\Delta R_{i}\) of this random matrix changes during the iterations. In the first iteration (\(i=1\)), we set \(\Delta R_{1}=R\), \(R\) being the number of columns of the incremental basis matrix of the previous block matrix (see Line 18 in Algorithm 8). As argued in Ref. [14], if this initial guess is well above the rank of the input matrix \(\mathbf{\Delta A}\), it is highly probable that the basis matrix for the range of \(\mathbf{Y}_{1}\) in Eq. 86 is the required orthogonal matrix \(\mathbf{\Delta H}\). Numerical experience shows that when the submatrices \(\mathbf{A}_{k-1}\) and \(\mathbf{A}_{k}\) (\(k=2,3,\ldots,p\)) correspond to input parameters that are close in parameter space, then this estimation is normally a reliable upper bound, and therefore, only one iteration is required.
If this first iteration is not sufficient to reduce the norm of the residual matrix \(\mathbf{C}_{i}\) below the prescribed error threshold \(\mu\) (see Line 3 in Algorithm 10), then it is necessary to calculate a guess for the number of columns of the random matrix in the next iteration. Our proposal in this respect is to use the logarithmic estimation displayed in Line 12 of Algorithm 10. This estimation is based on the observation that, in most physical problems amenable to dimensionality reduction, the singular values of the integrand matrix decay in an exponential manner. Nevertheless, to avoid situations in which the estimated increments \(\Delta R_{i}\) are either too large or too small, the minimum and maximum sizes of the increment, \(\Delta R_{m}\) and \(\Delta R_{M}\) respectively, can also be specified as optional arguments.
Lastly, it should be noted that this randomized factorization is also subject to the vagaries of finite-precision arithmetic. To address this, Algorithm 10 includes a re-orthogonalization step in Line 8.
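To fix ideas, the following NumPy sketch (ours) illustrates the overall logic of the incremental randomized range finder. It is a simplification of Algorithm 10: it uses a fixed column increment instead of the logarithmic estimate, and a plain QR factorization for the re-orthogonalization.

```
import numpy as np

def rorth_inc_naive(dA, mu, R0=10, dR=10, max_it=50):
    """Grow an orthonormal basis H of the range of dA until ||dA - H H^T dA|| <= mu."""
    rng = np.random.default_rng(0)
    n, m = dA.shape
    H = np.zeros((n, 0))
    C = dA.copy()                                     # residual matrix C_i
    k = R0
    for _ in range(max_it):
        if np.linalg.norm(C) <= mu:
            break
        Y = C @ rng.standard_normal((m, k))           # Gaussian sketch of the residual's range
        U, s, _ = np.linalg.svd(Y, full_matrices=False)
        dH = U[:, s > 1e-10 * s[0]]                   # keep only numerically significant directions
        dH = np.linalg.qr(dH - H @ (H.T @ dH))[0]     # re-orthogonalize against the current basis
        H = np.hstack([H, dH])
        C = dA - H @ (H.T @ dA)                       # update the residual
        k = dR                                        # fixed increment (the paper uses a log estimate)
    return H

rng = np.random.default_rng(1)
dA = rng.standard_normal((400, 15)) @ rng.standard_normal((15, 200))
H = rorth_inc_naive(dA, mu=1e-8 * np.linalg.norm(dA))
print(H.shape[1], np.linalg.norm(dA - H @ (H.T @ dA)) / np.linalg.norm(dA))   # 15 and ~ 1e-15
```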
### Numerical study
To compare the performance of the standard SVD and the proposed SRSVD, we use the convergence study presented in Table 4 of Section 6.3 for determining orthogonal basis functions for the parameterized, vector-valued function of Eq. (70). It can be gleaned from this table that the proposed SRSVD clearly outperforms the standard SVD, both in terms of computing time and memory requirements. For instance, for the 16 GB matrix, the SRSVD turns out to be almost 7 times faster than the standard SVD, and for the largest matrix of 33 GB, the standard SVD simply cannot handle the operation because it exhausts the memory capabilities of the employed 64 GB RAM computer. Furthermore, we can see that, in passing from the matrix of 16 GB to the matrix of 33 GB, the computing time of the SRSVD increases by a factor slightly below 2, a fact that indicates that the cost scales approximately linearly with the number of columns --as opposed to the standard SVD, whose asymptotic cost scales quadratically with its minor dimension[32]. It is noteworthy also that, as the number of partitions increases, the number of iterations required by the incremental randomized orthogonalization
of Algorithm 10 tends to one. This indicates that, as conjectured in Section A.3 of Appendix A, the rank of a given block matrix is a reliable upper bound for the rank of the orthogonal complement of the next block matrix in the sequence. Incidentally, this may explain in part why the asymptotic cost of the SRSVD appears to scale linearly with its minor dimension, for the standard randomized SVD itself exhibits this desirable feature [25]. Last but not least, we show in Table 4 the relative difference between the singular values computed by both approaches. The results reveal that the difference is negligible, a fact that supports the theoretical claim made at the outset of this Appendix, according to which _the proposed SRSVD is not an approximate method_ for computing the truncated SVD of a matrix, but rather an _alternative method to compute the exact factorization_ --one that exploits linear correlations existing between blocks of the matrix.
|
2301.00875 | Classical prime subhypermodules and related classes | In this paper, we extend the notion of prime subhypermodules to n-ary
classical prime, n-ary weakly classical prime and n-ary phi-classical prime
subhypermodules of an (m,n)-hypermodule over a commutative Krasner
(m,n)-hyperring. Many properties and characterizations of them are introduced.
Moreover, we investigate the behavior of these structures under hypermodule
homomorphisms, quotient hypermodules and cartesian product. | M. Anbarloei | 2023-01-02T21:08:07Z | http://arxiv.org/abs/2301.00875v1 | # Classical prime subhypermodules and related classes
###### Abstract.
In this paper, we extend the notion of prime subhypermodules to \(n\)-ary classical prime, \(n\)-ary weakly classical prime and \(n\)-ary \(\phi\)-classical prime subhypermodules of an \((m,n)\)-hypermodule over a commutative Krasner \((m,n)\)-hyperring. Many properties and characterizations of them are introduced. Moreover, we investigate the behavior of these structures under hypermodule homomorphisms, quotient hypermodules and cartesian product. We think the knowledge gained in this setting provides a significant step in the general investigation of subhypermodules.
Key words and phrases:\(n\)-ary classical prime subhypermodule, \(n\)-ary weakly classical prime subhypermodule, \(n\)-ary \(\phi\)-classical prime subhypermodule, \((m,n)\)-hypermodule 2010 Mathematics Subject Classification: 16Y99, 20N20
## 1. Introduction
The desire to extend the notion of prime ideals from the category of rings to the category of modules has motivated several researchers to show that many, but not all, of the results in the theory of rings are also valid for modules. The concept of classical prime submodules as an extension of prime submodules was introduced by Behboodi and Koohy in [7]. A proper submodule \(Q\) of \(M\) is said to be a classical prime submodule, if for each \(r,s\in R\) and \(a\in M\), \(rsa\in Q\) implies that \(ra\in Q\) or \(sa\in Q\). Moreover, the notion of weakly classical prime submodules, which is a generalization of classical prime submodules, was studied in [23].
The theory of algebraic hyperstructures, which plays an important role in classical algebraic theory, was initiated in 1934 by the French mathematician F. Marty at the \(8^{th}\) Congress of Scandinavian Mathematicians. A comprehensive review of the theory of hyperstructures appears in [21, 22, 9, 10].
The concept of \(n\)-ary algebras was introduced by Kasner in a lecture at an annual meeting in 1904 [15]. The first paper on the theory of \(n\)-ary groups was written by DΓΆrnte in 1928 [13]. Moreover, the notion of Krasner hyperrings was first introduced by Krasner in [16]. Some properties of these hyperrings can be found in [20, 24]. The concept of \(n\)-ary hypergroups was defined in [12] as an extension of hypergroups in the sense of Marty. After the introduction of the concept of \((m,n)\)-hyperrings in [18], Davvaz et al. extended \((m,n)\)-rings to Krasner \((m,n)\)-hyperrings and studied some results in this context in [19]. Several classes of hyperideals, namely maximal hyperideals, \(n\)-ary prime hyperideals, \(n\)-ary primary hyperideals and the radical of a hyperideal in a Krasner \((m,n)\)-hyperring, were introduced in [1].
Recall from [19] that a commutative Krasner \((m,n)\)-hyperring with a scalar identity \(1\) is an algebraic hyperstructure \((R,f^{\prime},g^{\prime})\) for which the following hold: (1) \((R,f^{\prime})\) is a canonical \(m\)-ary hypergroup, (2) \((R,g^{\prime})\) is a commutative \(n\)-ary semigroup, (3) the
\(n\)-ary operation \(g^{\prime}\) is distributive with respect to the \(m\)-ary hyperoperation \(f^{\prime}\), i.e., \(g^{\prime}(a_{1}^{i-1},f^{\prime}(x_{1}^{m}),a_{i+1}^{n})=f^{\prime}(g^{\prime}(a_{1}^{i-1},x_{1},a_{i+1}^{n}),...,g^{\prime}(a_{1}^{i-1},x_{m},a_{i+1}^{n}))\), for each \(a_{1}^{i-1},a_{i+1}^{n},x_{1}^{m}\in R\), and \(1\leq i\leq n\), (4) \(0\) is a zero element of the \(n\)-ary operation \(g^{\prime}\), i.e., for every \(x_{2}^{n}\in R\) we have \(g^{\prime}(0,x_{2}^{n})=g^{\prime}(x_{2},0,x_{3}^{n})=...=g^{\prime}(x_{2}^{n},0)=0\), (5) for all \(x\in R\), \(g^{\prime}(x,1^{(n-1)})=x\).
The sequence \(x_{i},x_{i+1},...,x_{j}\) is denoted by \(x_{i}^{j}\). For \(j<i\), \(x_{i}^{j}\) is the empty symbol. In this convention \(f^{\prime}(x_{1},...,x_{i},y_{i+1},...,y_{j},z_{j+1},...,z_{n})\) will be written as \(f^{\prime}(x_{1}^{i},y_{i+1}^{j},z_{j+1}^{n})\). In the case when \(y_{i+1}=...=y_{j}=y\) the last expression will be written in the form \(f^{\prime}(x_{1}^{i},y^{(j-i)},z_{j+1}^{n})\). For non-empty subsets \(A_{1},...,A_{n}\) of \(R\) we define \(f^{\prime}(A_{1}^{n})=f^{\prime}(A_{1},...,A_{n})=\bigcup\{f^{\prime}(x_{1}^{n})\mid x_{i}\in A_{i},i=1,...,n\}\). A non-empty subset \(S\) of \(R\) is called a subhyperring of \(R\) if \((S,f^{\prime},g^{\prime})\) is a Krasner \((m,n)\)-hyperring. Let \(I\) be a non-empty subset of \(R\); we say that \(I\) is a hyperideal of \((R,f^{\prime},g^{\prime})\) if \((I,f^{\prime})\) is an \(m\)-ary subhypergroup of \((R,f^{\prime})\) and \(g^{\prime}(x_{1}^{i-1},I,x_{i+1}^{n})\subseteq I\), for every \(x_{1}^{n}\in R\) and \(1\leq i\leq n\). For each element \(x\in R\), the hyperideal generated by \(x\) is denoted by \(\langle x\rangle\) and defined as \(\langle x\rangle=g^{\prime}(R,x,1^{(n-2)})=\{g^{\prime}(r,x,1^{(n-2)})\mid r\in R\}\). Recall from [1] that a proper hyperideal \(P\) of a Krasner \((m,n)\)-hyperring \((R,f^{\prime},g^{\prime})\) is an \(n\)-ary prime hyperideal if for hyperideals \(I_{1},...,I_{n}\) of \(R\), \(g^{\prime}(I_{1}^{n})\subseteq P\) implies that \(I_{1}\subseteq P\) or \(I_{2}\subseteq P\) or...or \(I_{n}\subseteq P\). Also, Lemma 4.5 in [1] shows that a proper hyperideal \(P\) of a Krasner \((m,n)\)-hyperring \((R,f^{\prime},g^{\prime})\) is an \(n\)-ary prime hyperideal if for all \(x_{1}^{n}\in R\), \(g^{\prime}(x_{1}^{n})\in P\) implies that \(x_{i}\in P\) for some \(1\leq i\leq n\).
Hypermodules over a hyperring is a generalization of the classical modules over a ring. Several types of hypermodules were introduced by many authors. The notion of \((m,n)\)-hypermodules over \((m,n)\)-hyperrings was defined in [6]. After, some classes of the hypermodules were studied in [5, 8, 2]. Prime and primary subhypermodules of an \((m,n)\)-hypermodule were discussed in [4].
Motivated and inspired by the above papers, the purpose of this research work is to introduce and study generalizations of prime subhupermodules. We define the notions of classical prime, weakly classical prime and \(\phi\)-classical prime subhypermodules of an \((m,n)\)-hypermodule over a commutative Krasner \((m,n)\)-hyerring with a scalar identity \(1\). Then a number of major conclusions are given to explain the general framework of these structures. Moreover, we give some characterizations of these concepts on cartesian product of \((m,n)\)-hypermodules.
## 2. Preliminaries
In this section, we recall some basic terms and definitions concerning \(n\)-ary hyperstructures which we need to develop our paper.
**Definition 2.1**.: ([6]) Let \(M\) be a nonempty set. Then \((M,f,g)\) is an \((m,n)\)-hypermodule over an \((m,n)\)- hyperring \((R,f^{\prime},g^{\prime})\), simply \(R\), if \((M,f)\) is a canonical \(m\)-ary hypergroup and the map
\(g:\underbrace{R\times...\times R}_{n-1}\times M\longrightarrow P^{*}(M)\)
satisfied the following conditions:
\((i)\)\(g(r_{1}^{n-1},f(x_{1}^{m}))=f(g(r_{1}^{n-1},x_{1}),...,g(r_{1}^{n-1},x_{m}))\)
\((ii)\)\(g(r_{1}^{i-1},f^{\prime}(s_{1}^{m}),r_{i+1}^{n-1},x)=f(g(r_{1}^{i-1},s_{1},r_{i+1}^{n-1},x),...,g(r_{1}^{i-1},s_{m},r_{i+1}^{n-1},x))\)
\((iii)\)\(g(r_{1}^{i-1},g^{\prime}(r_{i}^{i+n-1}),r_{i+n}^{2n-2},x)=g(r_{1}^{n-1},g(r_{n}^{2n-2},x))\)
\((iv)\)\(\{0\}=g(r_{1}^{i-1},0,r_{i+1}^{n-1},x)\).
If \(g\) is an \(n\)-ary hyperoperation, \(A_{1},...,A_{n-1}\) are subsets of \(R\) and \(M^{\prime}\subseteq M\), we set
\(g(A_{1}^{n-1},M^{\prime})=\bigcup\{g(r_{1}^{n-1},m)\ |\ r_{i}\in A_{i},1\leq i \leq n-1,m\in M^{\prime}\}\).
Let \(1\) be a scalar identity in \(R\). For every \(a\in M\) and \(r_{1}^{n-1}\in R\) we have
\(g(1^{(n-1)},a)=\{a\},\ \ \ \ \ \ \ \ g(0^{(n-1)},a)=\{0\},\ \ \ \ \ \ \ \ g(r_{1}^{n-1},0)=\{0\}\).
Let \((M,f,g)\) be an \((m,n)\)-hypermodule over \(R\). A non-empty subset \(N\) of \(M\) is said to be an \((m,n)\)-subhypermodule of \(M\) if \((N,f)\) is an \(m\)-ary subhypergroup of \((M,f)\) and \(g(R^{(n-1)},N)\in P^{*}(N)\).
Recall from [2] that if \((M,f,g)\) is an \((m,n)\)-hypermodule, \(N\) a subhypermodule of \(M\) and \(a\) an element of \(M\), then the hyperideals \(S_{N}\) and \(N_{a}\) are defined as follows:
\(S_{N}=\{r\in R\ |\ g(r,1^{(n-2)},M)\subseteq N\}\)
\(N_{a}=\{r\in R\ |\ g(r,1^{(n-2)},a)\subseteq N\}\)
**Definition 2.2**.: ([4]) Let \(M\) be an \((m,n)\)-hypermodule over \(R\). A proper subhypermodule \(K\) of \(M\) is said to be maximal, if for \(N\leq M\) with \(K\subseteq N\subseteq M\), we have either \(K=N\) or \(N=M\).
**Definition 2.3**.: ([4]) Let \(M\) be an \((m,n)\)-hypermodule over \(R\). A proper subhypermodule \(N\) of \(M\) is said to be \(n\)-ary prime, if \(g(r_{1}^{n-1},a)\subseteq N\) with \(r_{1}^{n-1}\in R\) and \(a\in M-N\), implies that \(g(r_{1}^{n-1},M)\subseteq N\).
In [2], there exists another definition of \(n\)-ary prime subhypermodules which is equivalent to the above definition. A proper subhypermodule \(N\) of \(M\) is called \(n\)-ary prime, if \(g(r_{1}^{n-1},a)\subseteq N\) with \(r_{1}^{n-1}\in R\) and \(a\in M\) implies that \(a\in N\) or \(r_{i}\in S_{N}\) for some \(1\leq i\leq n-1\).
**Definition 2.4**.: ([4]) Let \(N\) be a subhypermodule of an \((m,n)\)-hypermodule \((M,f,g)\) over \(R\). Then the set
\(M/N=\{f(x_{1}^{i-1},N,x_{i+1}^{m})\ |\ x_{1}^{i-1},x_{i+1}^{m}\in M\}\)
endowed with \(m\)-ary hyperoperation \(f\) which for all \(x_{11}^{1m},...,x_{m1}^{mm}\in M\)
\(F(f(x_{11}^{1(i-1)},N,x_{1(i+1)}^{1m}),...,f(x_{m1}^{m(i-1)},N,x_{m(i+1)}^{ mm}))\)
\(=\{f(t_{1}^{i-1},N,t_{i+1}^{m})\ |\ t_{1}\in f(x_{11}^{m1}),...,t_{m}\in f(x_{1m}^{mm})\}\)
and with \(n\)-ary hyperoperation \(G:\underbrace{R\times...\times R}_{n-1}\times M/N\longrightarrow P^{*}(M/N)\) which for all
\(x_{1}^{i-1},x_{i+1}^{m}\in M\) and \(r_{1}^{n-1}\in R\)
\(G(r_{1}^{n-1},f(x_{1}^{i-1},N,x_{i+1}^{m}))\)
\(=\{f(z_{1}^{i-1},N,z_{i+1}^{m})\ |\ z_{1}\in g(r_{1}^{n-1},x_{1}),...,z_{m}\in g(r_{1} ^{n-1},x_{m})\}\)
is an \((m,n)\)-hypermodule over \(R\), and \((M/N,F,G)\) is called the quotient \((m,n)\)-hypermodule of \(M\) by \(N\).
**Definition 2.5**.: ([2]) For every nonzero element \(m\) of \((m,n)\)-hypermodule \((M,f,g)\) over \(R\), we define
\(F_{m}=\{r\in R\ |\ 0\in g(r,1^{(n-2)},m);r\neq 0\}\).
It is clear that \(F_{m}\) is a hyperideal of \((R,f^{\prime},g^{\prime})\). The \((m,n)\)-hypermodule \((M,f,g)\) is said to be faithful, if \(F_{m}=\{0\}\) for all nonzero elements \(m\in M\), that is \(0\in g(r,1^{(n-2)},m)\) implies that \(r=0\), for \(r\in R\).
**Definition 2.6**.: ([4]) Assume that \((M_{1},f_{1},g_{1})\) and \((M_{2},f_{2},g_{2})\) are two \((m,n)\)-hypermodules over \(R\). A mapping \(h:M_{1}\longrightarrow M_{2}\) is a homomorphism of \((m,n)\)-hypermodules if for all \(a_{1}^{m},a\in M_{1}\) and \(r_{1}^{n-1}\in R\):
\[h(f_{1}(a_{1}^{m}))=f_{2}(h(a_{1}),\cdots,h(a_{m})),\]
\[h(g_{1}(r_{1}^{n-1},a))=g_{2}(r_{1}^{n-1},h(a)).\]
## 3. \(n\)-ary classical prime subhypermodules
In this section, we want to consider the concept of an \(n\)-ary classical prime subhypermodule which is a generalization of the concept of prime submodules.
**Definition 3.1**.: Let \(Q\) be a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\). \(Q\) is called an \(n\)-ary classical prime subhypermodule if for \(r_{1}^{n-1}\in R\) and \(a\in M\), \(g(r_{1}^{n-1},a)\subseteq Q\) implies that \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\).
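For orientation only (this observation is ours and is not needed in the sequel), note that Definition 3.1 recovers the classical notion recalled in the introduction: if \(M\) is an ordinary module over a commutative ring \(R\) with identity, regarded as a \((2,3)\)-hypermodule via the singleton hyperoperations \(f(a,b)=\{a+b\}\) and \(g(r_{1},r_{2},a)=\{r_{1}r_{2}a\}\), then the condition of Definition 3.1 reads

\[r_{1}r_{2}a\in Q\ \Longrightarrow\ r_{1}a\in Q\ \text{ or }\ r_{2}a\in Q,\]

which is precisely the classical prime submodule condition of [7].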
**Example 3.2**.: Suppose that \(R=\{0,1,2\}\) and define \(3\)-ary hyperoperation \(f^{\prime}\) and \(3\)-ary operation \(g^{\prime}\) on \(R\) as follows:
\(f^{\prime}(0,0,0)=0,\)\(f^{\prime}(0,0,2)=2,\)\(f^{\prime}(1,1,2)=f^{\prime}(1,2,2)=\{1,2\},\)
\(f^{\prime}(0,0,1)=1,\)\(f^{\prime}(2,2,2)=2,\)\(f^{\prime}(0,1,1)=\{0,1\},\)
\(f^{\prime}(1,1,1)=1,\)\(f^{\prime}(0,1,2)=R,\)\(f^{\prime}(0,2,2)=\{0,2\},\)
and
\(g^{\prime}(r_{1},r_{2},r_{3})=0;\)\(r_{i}=0,\)\(1\leq i\leq 3\)
\(g^{\prime}(1,1,1)=1,\)\(g^{\prime}(1,2,2)=g^{\prime}(1,1,2)=g^{\prime}(2,2,2)=2\)
Then \((R,f^{\prime},g^{\prime})\) is a commutative \((3,3)\)-hyperring. Now, consider the set \(M=\{0,1,2,3\}\). \((M,f,g)\) is a \((3,3)\)-hypermodule over \(R\) with \(3\)-ary hyperoperation \(f\) and \(3\)-ary external hyperoperation \(g\) defined by:
\(f(0,0,0)=0,\)\(f(0,0,1)=1,\)\(f(0,0,2)=2,\)
\(f(0,0,3)=3,\)\(f(1,1,1)=1,\)\(f(2,2,2)=2,\)
\(f(3,3,3)=3,\)\(f(0,1,1)=\{0,1\},\)\(f(0,2,2)=\{0,2\},\)
\(f(0,3,3)=\{0,3\},\)\(f(1,2,3)=\{1,2,3\},\)\(f(0,1,2)=\{0,1,2\},\)
\(f(0,1,3)=\{0,1,3\},\)\(f(0,2,3)=\{0,2,3\},\)\(f(2,2,3)=f(2,3,3)=\{2,3\},\)
\(f(1,1,2)=f(1,2,2)=\{1,2\},\)\(f(1,1,3)=f(1,3,3)=\{1,3\},\)
and for \(r_{1}^{2}\in R\) and \(a\in M\),
\[g(r_{1}^{2},a)=\begin{cases}\{0\}&\text{if $r_{1}=0$ or $r_{2}=0$ or $a=0$},\\ \{2\}&\text{if $r_{1},r_{2}\neq 0$ and $a\neq 0$},\\ \{a\}&\text{if $r_{1}=r_{2}=0$}.\end{cases}\]
Let \(Q=\{0,2\}\). Then \(Q\) is a \(3\)-ary classical prime subhypermodule of \(M\).
**Theorem 3.3**.: _Let \(Q\) be a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\). Then \(Q\) is an \(n\)-ary classical prime subhypermodule if and only if for hyperideals \(I_{1}^{n-1}\) of \(R\) and subhypermodule \(N\) of \(M\), if \(g(I_{1}^{n-1},N)\subseteq Q\), then \(g(I_{i},1^{(n-2)},N)\subseteq Q\) for some \(1\leq i\leq n-1\)._
Proof.: This can be proved by using an argument similar to that in the proof of Theorem 2.14 in [11].
**Theorem 3.4**.: _Let \(Q\) be a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and let \(S=M-Q\). Then \(Q\) is an \(n\)-ary classical prime subhypermodule of \(M\) if and only if for hyperideals \(I_{1}^{n-1}\) of \(R\) and for subhypermodules \(N_{1},N_{2}\) of
\(M\), \(f(N_{1},g(I_{i},1^{(n-2)},N_{2}),0^{(m-2)})\cap S\neq\varnothing\) for all \(1\leq i\leq n-1\) implies that \(f(N_{1},g(I_{1}^{n-1},N_{2}),0^{(m-2)})\cap S\neq\varnothing\)._
Proof.: \((\Longrightarrow)\) Let \(I_{1}^{n-1}\) be hyperideals of \(R\) and let \(N_{1}\) and \(N_{2}\) be two subhypermodules of an \((m,n)\)-hypermodule \(M\) over \(R\) with \(f(N_{1},g(I_{i},1^{(n-2)},N_{2}),0^{(m-2)})\cap S\neq\varnothing\) for all \(1\leq i\leq n-1\). Suppose that \(f(N_{1},g(I_{1}^{n-1},N_{2}),0^{(m-2)})\cap S=\varnothing\). This implies \(g(I_{1}^{n-1},N_{2})\subseteq Q\). Then we get \(g(I_{i},1^{(n-2)},N_{2})\subseteq Q\) for some \(1\leq i\leq n-1\) since \(Q\) is an \(n\)-ary classical prime subhypermodule of \(M\). Thus we obtain \(f(N_{1},g(I_{i},1^{(n-2)},N_{2}),0^{(m-2)})\cap S=\varnothing\) which is a contradiction.
\((\Longleftarrow)\) Let \(g(I_{1}^{n-1},N)\subseteq Q\) for hyperideals \(I_{1}^{n-1}\) of \(R\) and for a subhypermodule \(N\) of an \((m,n)\)-hypermodule \(M\) over \(R\) but \(g(I_{i},1^{(n-2)},N)\nsubseteq Q\) for all \(1\leq i\leq n-1\). Then we conclude that \(g(I_{i},1^{(n-2)},N)\cap S\neq\varnothing\) for all \(1\leq i\leq n-1\) which means \(g(I_{1}^{n-1},N)\cap S\neq\varnothing\) which is a contradiction. Thus \(Q\) is an \(n\)-ary classical prime subhypermodule of \(M\).
**Theorem 3.5**.: _Let \(Q\) be a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\). Let \(S\) be a nonempty subset of \(M-\{0\}\) such that for hyperideals \(I_{1}^{n-1}\) of \(R\) and for subhypermodules \(N_{1},N_{2}\) of \(M\), \(f(N_{1},g(I_{i},1^{(n-2)},N_{2}),0^{(m-2)})\cap S\neq\varnothing\) for all \(1\leq i\leq n-1\) implies that \(f(N_{1},g(I_{1}^{n-1},N_{2}),0^{(m-2)})\cap S\neq\varnothing\). If \(Q\) is maximal with respect to the property that \(Q\cap S=\varnothing\), then \(Q\) is an \(n\)-ary classical prime subhypermodule of \(M\)._
Proof.: Assume that \(g(I_{1}^{n-1},N)\subseteq Q\) for some hyperideals \(I_{1}^{n-1}\) of \(R\) and for a subhypermodule \(N\) of \(M\). Let \(g(I_{i},1^{(n-2)},N)\nsubseteq Q\) for all \(1\leq i\leq n-1\). Then \(f(Q,g(I_{i},1^{(n-2)},N),0^{(m-2)})\cap S\neq\varnothing\) for all \(1\leq i\leq n-1\) by the maximality of \(Q\). This implies that \(f(Q,g(I_{1}^{n-1},N),0^{(m-2)})\cap S\neq\varnothing\) which means \(Q\cap S\neq\varnothing\) which is a contradiction. Consequently, \(Q\) is an \(n\)-ary classical prime subhypermodule of \(M\).
Recall from [8] that if \(N\) is a subhypermodule of \((M,f,g)\) over \(R\), then we consider the set \(M/N\) as follows:
\[M/N=\{f(a,N,0^{(m-2)})\ |\ a\in M\}.\]
Moreover, recall from [8] that an element \(a\) of an \((m,n)\)-hypermodule \(M\) over \(R\) is called torsion free if \(g(r_{1}^{n-1},a)=0\) implies that \(r_{i}=0\) for some \(1\leq i\leq n-1\). If all elements of \(M\) are torsion free, then \(M\) is called torsion free.
**Theorem 3.6**.: _Suppose that \(M\) is an \((m,n)\)-hypermodule over \(R\) such that every classical prime subhypermodule of \(M\) is an intersection of maximal subhypermodules of \(M\) and \(N\) is a subhypermodule of \(M\). If \(M/N\) is a torsion free \((m,n)\)-hypermodule over \(R\), then every classical prime subhypermodule of \(N\) is an intersection of maximal subhypermodules of \(N\)._
Proof.: Assume that \(Q\) is a classical prime subhypermodule of \(N\). Let \(g(r_{1}^{n-1},m)\subseteq Q\) for some \(r_{1}^{n-1}\in R\) and \(m\in M\). If \(m\in N\), then \(Q\) is a classical prime subhypermodule of \(M\). So suppose that \(m\notin N\). Then we have \(g(r_{1}^{n-1},m)\subseteq Q\subseteq N\). Since \(m\notin N\) and \(M/N\) is a torsion free \((m,n)\)-hypermodule over \(R\), we obtain \(r_{i}=0\) for some \(1\leq i\leq n-1\). Therefore we get \(g(r_{i},1^{(n-2)},m)\subseteq Q\). This means that \(Q\) is a classical prime subhypermodule of \(M\). By the hypothesis, we infer that \(Q\) is an intersection of maximal subhypermodules of \(M\). Put \(Q=\cap_{i\in I}K_{i}\) for the maximal subhypermodules \(K_{i}\) of \(M\). Consider \(Q_{i}=K_{i}\cap N\) for each \(i\in I\). Clearly \(Q=\cap_{i\in I}Q_{i}\), because \(Q\subseteq N\). We assume that \(Q_{i}\subset N\) for every \(i\in I\). Let \(x\in N-Q_{i}\) for some \(i\in I\). This means \(x\notin K_{i}\). By the maximality of \(K_{i}\) in \(M\), we conclude that \(f(K_{i},\langle x\rangle,0^{(m-2)})=M\). Assume that \(a\in N\). Then there exist some \(a_{i}\in K_{i}\) and \(r_{1}^{n-1}\in R\) such that \(a\in f(a_{i},g(r_{1}^{n-1},x),0^{(m-2)})\). Thus we have \(a_{i}\in f(a,-g(r_{1}^{n-1},x),0^{(m-2)})\subseteq N\) which implies \(a_{i}\in Q_{i}\). So \(a\in f(a_{i},\langle x\rangle,0^{(m-2)})\subseteq f(Q_{i},\langle x\rangle,0^{(m-2)})\) which means \(f(Q_{i},\langle x\rangle,0^{(m-2)})=N\). Hence \(Q_{i}\) is a maximal subhypermodule of \(N\), as needed.
## 4. \(n\)-ary weakly classical prime subhypermodules
**Definition 4.1**.: Let \(Q\) be a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\). \(Q\) is called an \(n\)-ary weakly classical prime subhypermodule if \(0\notin g(r_{1}^{n-1},a)\subseteq Q\) for \(r_{1}^{n-1}\in R\) and \(a\in M\), then \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\).
**Example 4.2**.: Consider the commutative group \((H=\{0,x,y,z\},\oplus)\), where \(\oplus\) is defined by
\[\begin{array}{c|cccc}\oplus&0&x&y&z\\ \hline 0&0&x&y&z\\ x&x&0&z&y\\ y&y&z&0&x\\ z&z&y&x&0\end{array}\]
It is clear that \(H\) is a \(\mathbb{Z}\)-module. Also, the ring of integers \(\mathbb{Z}\) is a Krasner \((3,3)\)-hyperring with 3-ary hyperoperation \(f^{\prime}(r_{1}^{3})=\{r_{1}+r_{2}+r_{3}\}\) and 3-ary operation \(g^{\prime}(r_{1}^{3})=r_{1}\cdot r_{2}\cdot r_{3}\) for all \(r_{1}^{3}\in\mathbb{Z}\). Now, we have the canonical \((3,3)\)-hypermodule \((H,f,g)\) over \((\mathbb{Z},f^{\prime},g^{\prime})\) where 3-ary hyperoperation \(f\) and 3-ary external hyperoperation \(g\) on \(H\) are defined as follows:
\(f(a,a,a)=\{a\}\), for \(a\in H\)
\(f(0,a,a)=\{0\}\), for \(a\in H\)
\(f(a,a,b)=\{b\}\), for \(a,b\in H\)
\(f(a,b,c)=\{d\}\), for \(a\neq b\neq c\neq d\in H\)
and
\[g(r_{1}^{2},a)=\{\underbrace{a\oplus\cdots\oplus a}_{r_{1}\cdot r_{2}}\},\quad \text{for $r_{1}^{2}\in\mathbb{Z}$ and $a\in H$.}\]
The subhypermodule \(Q=\{0,y\}\) is a 3-ary weakly classical prime subhypermodule of \(H\).
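Indeed, a quick check using only the data of the example shows this: since every element of \(H\) satisfies \(a\oplus a=0\),

\[g(r_{1}^{2},a)=\begin{cases}\{0\},&r_{1}\cdot r_{2}\ \text{even},\\ \{a\},&r_{1}\cdot r_{2}\ \text{odd}.\end{cases}\]

Hence \(0\notin g(r_{1}^{2},a)\subseteq Q\) forces \(r_{1}\cdot r_{2}\) to be odd and \(a=y\); then both \(r_{1}\) and \(r_{2}\) are odd, so \(g(r_{i},1,y)=\{y\}\subseteq Q\) for \(i\in\{1,2\}\), which is the condition required in Definition 4.1.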
**Theorem 4.3**.: _Let \(Q\) be an \(n\)-ary weakly classical prime subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(a\in M-Q\) such that \(F_{a}=\{0\}\). If \(0\neq g^{\prime}(r_{1}^{n})\in Q_{a}\) for some \(r_{1}^{n}\in R\), then \(r_{i}\in Q_{a}\) for some \(1\leq i\leq n\)._
Proof.: Assume that \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(a\in M-Q\) such that \(F_{a}=\{0\}\). Suppose that \(0\neq g^{\prime}(r_{1}^{n})\in Q_{a}\) for some \(r_{1}^{n}\in R\) such that \(r_{2}^{n}\notin Q_{a}\). We must show that \(r_{1}\in Q_{a}\). By \(r_{2}^{n}\notin Q_{a}\) we conclude that \(g(r_{i},1^{(n-2)},a)\nsubseteq Q\) for all \(2\leq i\leq n\). From \(g^{\prime}(r_{1}^{n})\in Q_{a}\) it follows that \(0\notin g(g^{\prime}(r_{1}^{n}),1^{(n-2)},a)\subseteq Q\) because \(F_{a}=\{0\}\). This means \(0\notin g(g^{\prime}(r_{1}^{n-2},g^{\prime}(r_{n-1}^{n},1^{(n-2)})),1^{(n-2)},a)=g(1^{(n-1)},g(r_{1}^{n-2},g^{\prime}(r_{n-1}^{n},1^{(n-2)}),a))=g(r_{1}^{n-2},g^{\prime}(r_{n-1}^{n},1^{(n-2)}),a)\subseteq Q\). Since \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of \(M\), we get \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-2\) or \(g(g^{\prime}(r_{n-1}^{n},1^{(n-2)}),1^{(n-2)},a)=g(r_{n-1}^{n},1^{(n-3)},a)\subseteq Q\). In the second possibility, we obtain \(r_{n-1}\in Q_{a}\) or \(r_{n}\in Q_{a}\) as \(1\notin Q_{a}\), and the proof is completed.
**Theorem 4.4**.: _Let \(P\) and \(Q\) be two subhypermodules of an \((m,n)\)-hypermodule \(M\) over \(R\) such that \(P\subset Q\). If \(P\) is an \(n\)-ary weakly classical prime subhypermodule of \(M\) and \(Q/P\) is an \(n\)-ary weakly classical prime subhypermodule of \(M/P\), then \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of \(M\)._
Proof.: Assume that \(0\notin g(r_{1}^{n-1},a)\subseteq Q\) for \(r_{1}^{n-1}\in R\) and \(a\in M\). If \(g(r_{1}^{n-1},a)\subseteq P\), then we are done. Suppose that \(g(r_{1}^{n-1},a)\nsubseteq P\). So \(0\neq G(r_{1}^{n-1},f(a,P,0^{(m-2)}))=\{f(g(r_{1}^{n-1},a),P,0^{(m-2)})\} \subseteq Q/P\). Since \(Q/P\) is an \(n\)-ary weakly classical prime subhypermodule of \(M/P\), then we conclude that \(G(r_{i},1^{(n-2)},f(a,P,0^{(m-2)}))=\{f(g(r_{i},1^{(n-2)},a),P,0^{(m-2)})\} \subseteq Q/P\) for some \(1\leq i\leq n-1\) which implies \(g(r_{i},1^{(n-2)},a)\subseteq Q\), as needed.
Next, we observe that weakly classical prime subhypermodules behave naturally under a homomorphism.
**Theorem 4.5**.: _Let \((M_{1},f_{1},g_{1})\) and \((M_{2},f_{2},g_{2})\) be two \((m,n)\)-hypermodules over \((R,f^{\prime},g^{\prime})\) and let \(Q_{1},Q_{2}\) be \(n\)-ary weakly classical prime subhypermodules of \(M_{1},M_{2}\), respectively. If \(h:M_{1}\longrightarrow M_{2}\) is a homomorphism, then:_
1. _If_ \(h\) _is an epimorphism and_ \(Ker(h)\subseteq Q_{1}\)_, then_ \(h(Q_{1})\) _is an_ \(n\)_-ary weakly classical prime subhypermodule of_ \(M_{2}\)_._
2. _If_ \(h\) _is a monomorphism with_ \(h^{-1}(Q_{2})\neq M_{1}\)_, then_ \(h^{-1}(Q_{2})\) _is an_ \(n\)_-ary weakly classical prime subhypermodule of_ \(M_{1}\)_._
Proof.: (1) Let \(0\notin g_{2}(r_{1}^{n-1},a_{2})\subseteq h(Q_{1})\) for \(r_{1}^{n-1}\in R\) and \(a_{2}\in M_{2}\). Since \(h\) is an epimorphism, then there exists \(a_{1}\in M_{1}\) such that \(h(a_{1})=a_{2}\). Hence we get
\[h(g_{1}(r_{1}^{n-1},a_{1}))=g_{2}(r_{1}^{n-1},h(a_{1}))=g_{2}(r_{1}^{n-1},a_{2 })\subseteq h(Q_{1})\]
which means \(g_{1}(r_{1}^{n-1},a_{1})\subseteq Q_{1}\). Since \(Q_{1}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\) and \(0\notin g_{1}(r_{1}^{n-1},a_{1})\), it follows that \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\) for some \(1\leq i\leq n-1\). Therefore
\[g_{2}(r_{i},1^{(n-2)},a_{2})=g_{2}(r_{i},1^{(n-2)},h(a_{1}))=h(g_{1}(r_{i},1^ {(n-2)},a_{1}))\subseteq h(Q_{1}).\]
Thus \(h(Q_{1})\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{2}\).
(2) Let \(Q_{2}\) be an \(n\)-ary weakly classical prime subhypermodule of \(M_{2}\). Let \(0\notin g_{1}(r_{1}^{n-1},a_{1})\subseteq h^{-1}(Q_{2})\) for \(r_{1}^{n-1}\in R\) and \(a_{1}\in M_{1}\). Since \(h\) is a monomorphism, we conclude that \(0\notin h(g_{1}(r_{1}^{n-1},a_{1}))=g_{2}(r_{1}^{n-1},h(a_{1}))\subseteq Q_{2}\). Since \(Q_{2}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{2}\), we have \(g_{2}(r_{i},1^{(n-2)},h(a_{1}))\subseteq Q_{2}\) for some \(1\leq i\leq n-1\) and so \(h(g_{1}(r_{i},1^{(n-2)},a_{1}))\subseteq Q_{2}\). Hence \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq h^{-1}(Q_{2})\) for some \(1\leq i\leq n-1\). Therefore \(h^{-1}(Q_{2})\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\).
As an immediate consequence of the previous theorem, we have the following result.
**Corollary 4.6**.: Let \(P\) and \(Q\) be two subhypermodules of an \((m,n)\)-hypermodule \(M\) over \(R\) such that \(P\subset Q\). If \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of \(M\), then \(Q/P\) is an \(n\)-ary weakly classical prime subhypermodule of \(M/P\).
Proof.: Consider the mapping \(\pi:M\longrightarrow M/P\) defined by \(a\longrightarrow f(a,P,0^{(m-2)})\). Then \(\pi\) is an epimorphism by Theorem 3.2 in [4]. Suppose that \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of \(M\). Since \(Ker(\pi)=P\subset Q\) and \(\pi\) is onto, we conclude that \(\pi(Q)=Q/P\) is an \(n\)-ary weakly classical prime subhypermodule of \(M/P\) by Theorem 4.5 (1).
Assume that \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\). Then \((r_{1}^{n-1},X)\) for \(r_{1}^{n-1}\in R\) and some nonempty subset \(X\) of \(M\) is called a classical \((m,n)\)-zero of \(Q\) if \(0\in g(r_{1}^{n-1},X)\subseteq Q\) and \(g(r_{i},1^{(n-2)},X)\nsubseteq Q\) for all \(1\leq i\leq n-1\).
**Theorem 4.7**.: _Let \(Q\) be a \(3\)-ary weakly classical prime subhypermodule of a \((3,3)\)-hypermodule \(M\) over \(R\) and let \(g(r_{1}^{2},P)\subseteq Q\) for some subhypermodule \(P\) of \(M\) and \(r_{1}^{2}\in R\). If \((r_{1}^{2},X)\) is not a classical \((3,3)\)-zero of \(Q\) for every nonempty subset \(X\) of \(P\), then \(g(r_{i},1,P)\subseteq Q\) for some \(i\in\{1,2\}\)._
Proof.: Let \(g(r_{1}^{2},P)\subseteq Q\) but \(g(r_{i},1,P)\nsubseteq Q\) for each \(i\in\{1,2\}\). This implies that for each \(i\in\{1,2\}\) there exists \(p_{i}\in P\) such that \(g(r_{i},1,p_{i})\nsubseteq Q\). If \(0\notin g(r_{1}^{2},p_{1})\subseteq Q\), then \(g(r_{2},1,p_{1})\subseteq Q\) since \(Q\) is a \(3\)-ary weakly classical prime subhypermodule of \(M\) and \(g(r_{1},1,p_{1})\nsubseteq Q\). If \(0\in g(r_{1}^{2},p_{1})\subseteq Q\), then \(g(r_{2},1,p_{1})\subseteq Q\) since \((r_{1}^{2},p_{1})\) is not a classical \((3,3)\)-zero of \(Q\). Similarly, we can conclude that \(g(r_{1},1,p_{2})\subseteq Q\). Therefore we have \(g(r_{1}^{2},f(p_{1}^{2},0))\subseteq Q\). This implies that \(g(r_{i},1,f(p_{1}^{2},0))\subseteq Q\) for some \(i\in\{1,2\}\) which means \(f(g(r_{i},1,p_{1}),g(r_{i},1,p_{2}),0)\subseteq Q\). If \(i=1\), then we get \(g(r_{1},1,p_{1})\subseteq Q\) which is a contradiction. If \(i=2\), then we obtain \(g(r_{2},1,p_{2})\subseteq Q\), a contradiction. Hence \(g(r_{i},1,P)\subseteq Q\) for some \(i\in\{1,2\}\).
Suppose that \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\). Let \(g(I_{1}^{n-1},P)\subseteq Q\) for some hyperideals \(I_{1}^{n-1}\) of \(R\) and some subhypermodule \(P\) of \(M\). \(Q\) is called a free classical \((m,n)\)-zero with respect to \(g(I_{1}^{n-1},P)\) if \(g(r_{1}^{n-1},X)\) is not classical \((m,n)\)-zero of \(Q\) for every \(r_{i}\in I_{i}\) and for every non empty subset \(X\) of \(P\).
**Corollary 4.8**.: Let \(Q\) be a \(3\)-ary weakly classical prime subhypermodule of a \((3,3)\)-hypermodule \(M\) over \(R\) and let \(g(I_{1}^{2},P)\subseteq Q\) for some hyperideals \(I_{1}^{2}\) of \(R\) and some subhypermodule \(P\) of \(M\). If \(Q\) is a free classical \((3,3)\)-zero with respect to \(g(I_{1}^{2},P)\), then \(g(I_{i},1,P)\subseteq Q\) for some \(i\in\{1,2\}\).
Proof.: Let \(g(I_{i},1,P)\nsubseteq Q\) for each \(i\in\{1,2\}\). Then there exists \(r_{i}\in I_{i}\) for each \(i\in\{1,2\}\) such that \(g(r_{i},1,P)\nsubseteq Q\). So we have \(g(r_{1}^{2},P)\subseteq Q\). By Theorem 4.7, we get \(g(r_{i},1,P)\subseteq Q\) for some \(i\in\{1,2\}\) since \(Q\) is a free classical \((3,3)\)-zero with respect to \(g(I_{1}^{2},P)\). This is a contradiction. Thus \(g(I_{i},1,P)\subseteq Q\) for some \(i\in\{1,2\}\).
**Theorem 4.9**.: _Let \(Q\) be an \(n\)-ary weakly classical prime subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\). Then \(Q_{g(r_{1}^{n-1},a)}\subseteq F_{g(r_{1}^{n-1},a)}\cup Q_{g(r_{1},1^{(n-2)},a)} \cup\cdots\cup Q_{g(r_{n-1},1^{(n-2)},a)}\) for all \(r_{1}^{n-1}\in R\) and \(a\in M\)._
Proof.: Suppose that \(a\in M\) and \(r_{1}^{n-1}\in R\). Assume that \(x\in Q_{g(r_{1}^{n-1},a)}\). This means that \(g(x,1^{(n-2)},g(r_{1}^{n-1},a))\subseteq Q\). If \(0\in g(x,1^{(n-2)},g(r_{1}^{n-1},a))\), then \(x\in F_{g(r_{1}^{n-1},a)}\). If \(0\notin g(x,1^{(n-2)},g(r_{1}^{n-1},a))=g(r_{1}^{n-1},g(x,1^{(n-2)},a))\), then we conclude that \(g(r_{i},1^{(n-2)},g(x,1^{(n-2)},a))=g(x,1^{(n-2)},g(r_{i},1^{(n-2)},a))\subseteq Q\) for some \(1\leq i\leq n-1\) since \(Q\) is an \(n\)-ary weakly classical prime subhypermodule of \(M\). This implies that \(x\in Q_{g(r_{i},1^{(n-2)},a)}\) which means \(Q_{g(r_{1}^{n-1},a)}\subseteq F_{g(r_{1}^{n-1},a)}\cup Q_{g(r_{1},1^{(n-2)},a)}\cup\cdots\cup Q_{g(r_{n-1},1^{(n-2)},a)}\) and the proof is completed.
Recall from [2] that if \((M_{1},f_{1},g_{1})\) and \((M_{2},f_{2},g_{2})\) are two \((m,n)\)-hypermodules over \(R\), then the \((m,n)\)-hypermodule \((M_{1}\times M_{2},f_{1}\times f_{2},g_{1}\times g_{2})\) over \(R\) is defined
by \(m\)-ary hyperoperation \(f_{1}\times f_{2}\) and \(n\)-ary external hyperoperation \(g_{1}\times g_{2}\), as follows:
\[\begin{array}{l}f_{1}\times f_{2}((a_{1},b_{1}),\cdots,(a_{m},b_{m}))=\{(x_{1 },x_{2})\ |\ x_{1}\in f_{1}(a_{1}^{m}),x_{2}\in f_{2}(b_{1}^{m})\}\\ g_{1}\times g_{2}(r_{1}^{n-1},(a,b))=\{(y_{1},y_{2})\ |\ y_{1}\in g_{1}(r_{1}^{n-1},a),y_{ 2}\in g_{2}(r_{1}^{n-1},b)\}\end{array}\]
**Theorem 4.10**.: _Let \((M_{1},f_{1},g_{1})\) and \((M_{2},f_{2},g_{2})\) be two \((m,n)\)-hypermodules over \(R\) and \(Q_{1}\) be a proper subhypermodule of \(M_{1}\). Then \(Q_{1}\times M_{2}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\times M_{2}\) if and only if \(Q_{1}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\) and \(0\in g_{1}(r_{1}^{n-1},a_{1})\) for \(r_{1}^{n-1}\in R\), \(a_{1}\in M_{1}\) such that \(g(r_{i},1^{(n-2)},a_{1})\nsubseteq Q_{1}\) for all \(1\leq i\leq n-1\) imply that \(g^{\prime}(r_{1}^{n-1},1)\in F_{a_{2}}\) for all \(a_{2}\in M_{2}\)._
Proof.: \((\Longrightarrow)\) Let \(Q_{1}\times M_{2}\) be an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\times M_{2}\). Suppose that \(0\notin g_{1}(r_{1}^{n-1},a_{1})\subseteq Q_{1}\) for some \(r_{1}^{n-1}\in R\) and for some \(a_{1}\in M_{1}\). Then we have \((0,0)\notin g_{1}\times g_{2}(r_{1}^{n-1},(a_{1},0))\subseteq Q_{1}\times M_{2}\). Therefore \(g_{1}\times g_{2}(r_{i},1^{(n-1)},(a_{1},0))=\{(y_{1},y_{2})\ |\ y_{1}\in g_{1}(r_{i},1^{(n-2)},a_{1}),y_{ 2}\in g_{2}(r_{i},1^{(n-2)},0)\}\subseteq Q_{1}\times M_{2}\) for some \(1\leq i\leq n-1\) which means \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\). Thus \(Q_{1}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\). Suppose that \(0\in g_{1}(r_{1}^{n-1},a_{1})\) for \(r_{1}^{n-1}\in R\), \(a_{1}\in M_{1}\) with \(g(r_{i},1^{(n-2)},a_{1})\nsubseteq Q_{1}\) for all \(1\leq i\leq n-1\). Assume on the contrary that \(g^{\prime}(r_{1}^{n-1},1)\notin F_{a_{2}}\) for some \(a_{2}\in M_{2}\). This implies that \(0\notin g_{2}(g^{\prime}(r_{1}^{n-1},1),1^{(n-2)},a_{2})\). It follows that \((0,0)\notin g_{1}\times g_{2}(r_{1}^{n-1},(a_{1},a_{2}))\subseteq Q_{1}\times M _{2}\). Since \(Q_{1}\times M_{2}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\times M_{2}\), we obtain \(g_{1}\times g_{2}(r_{i},1^{(n-2)},(a_{1},a_{2}))=\{(y_{1},y_{2})|\ y_{1}\in g_ {1}(r_{i},1^{(n-2)},a_{1}),y_{2}\in g_{2}(r_{i},1^{(n-2)},a_{2})\}\subseteq Q_{1}\times M _{2}\) which implies \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\), a contradiction. Hence \(g^{\prime}(r_{1}^{n-1},1)\in F_{a_{2}}\) for all \(a_{2}\in M_{2}\).
\((\Longleftarrow)\) Let \((0,0)\notin g_{1}\times g_{2}(r_{1}^{n-1},(a_{1},a_{2}))=\{(y_{1},y_{2})|\ y_{1}\in g_ {1}(r_{1}^{n-1},a_{1}),y_{2}\in g_{2}(r_{1}^{n-1},a_{2})\}\subseteq Q_{1}\times M _{2}\) for some \(r_{1}^{n-1}\in R\) and \((a_{1},a_{2})\in Q_{1}\times M_{2}\). If \(0\notin g_{1}(r_{1}^{n-1},a_{1})\), then we get \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\) for some \(1\leq i\leq n-1\) which implies \(g_{1}\times g_{2}(r_{i},1^{(n-2)},(a_{1},a_{2}))\subseteq Q_{1}\times M_{2}\) for some \(1\leq i\leq n-1\), as needed. If \(0\in g_{1}(r_{1}^{n-1},a_{1})\), we get \(0\notin g_{2}(r_{1}^{n-1},a_{2})\) which means \(g^{\prime}(r_{1}^{n-1},1)\notin F_{a_{2}}\). Then we conclude that \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\) for some \(1\leq i\leq n-1\) which implies \(g_{1}\times g_{2}(r_{i},1^{(n-2)},(a_{1},a_{2}))\subseteq Q_{1}\times M_{2}\). Thus \(Q_{1}\times M_{2}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\times M_{2}\).
Let \((M_{1},f_{1},g_{1})\) and \((M_{2},f_{2},g_{2})\) be two \((m,n)\)-hypermodules over \((R_{1},f_{1}^{\prime},g_{1}^{\prime})\) and \((R_{2},f_{2}^{\prime},g_{2}^{\prime})\), respectively. Then the \((m,n)\)-hypermodule \((M_{1}\times M_{2},f_{1}\times f_{2},g_{1}\times g_{2})\) over \((R_{1}\times R_{2},f_{1}^{\prime}\times f_{2}^{\prime},g_{1}^{\prime}\times g_{2}^{\prime})\) is defined by \(m\)-ary hyperoperation \(f_{1}\times f_{2}\) and \(n\)-ary external hyperoperation \(g_{1}\times g_{2}\), as follows:
\(f_{1}\times f_{2}((a_{1},b_{1}),\cdots,(a_{m},b_{m}))=\{(x_{1},x_{2})\ |\ x_{1}\in f_{1}(a_{1}^{m}),x_{2}\in f_{2}(b_{1}^{m})\}\)
\(g_{1}\times g_{2}((r_{1},s_{1}),\cdots,(r_{n-1},s_{n-1}),(a,b))=\{(y_{1},y_{2}) \ |\ y_{1}\in g_{1}(r_{1}^{n-1},a),y_{2}\in g_{2}(s_{1}^{n-1},b)\}\)
for all \(a_{1}^{m},a\in M_{1}\), \(b_{1}^{m},b\in M_{2}\), \(r_{1}^{n-1}\in R_{1}\) and \(s_{1}^{n-1}\in R_{2}\).
**Theorem 4.11**.: _Let \((M_{1}\times M_{2},f_{1}\times f_{2},g_{1}\times g_{2})\) be an \((m,n)\)-hypermodule over \((R_{1}\times R_{2},f_{1}^{\prime}\times f_{2}^{\prime},g_{1}^{\prime}\times g_{2} ^{\prime})\) such that \((M_{1},f_{1},g_{1})\) is an \((m,n)\)-hypermodule over \((R_{1},f_{1}^{\prime},g_{1}^{\prime})\) and \((M_{2},f_{2},g_{2})\) is an \((m,n)\)-hypermodule over \((R_{2},f_{2}^{\prime},g_{2}^{\prime})\). Let \(Q_{1}\times M_{2}\) be a proper subhypermodule of \(M_{1}\times M_{2}\). Then the followings are equivalent:_
1. \(Q_{1}\) _is an_ \(n\)_-ary classical prime subhypermodule of_ \(M_{1}\)_._
2. \(Q_{1}\times M_{2}\) _is an_ \(n\)_-ary classical prime subhypermodule of_ \(M_{1}\times M_{2}\)_._
3. \(Q_{1}\times M_{2}\) _is an_ \(n\)_-ary weakly classical prime subhypermodule of_ \(M_{1}\times M_{2}\)_._
Proof.: \((1)\Longrightarrow(2)\) Let \(g_{1}\times g_{2}((r_{1},s_{1}),\cdots(r_{n-1},s_{n-1}),(a,b))=\{(y_{1},y_{2})\mid y _{1}\in g_{1}(r_{1}^{n-1},a),y_{2}\in g_{2}(s_{1}^{n-1},b)\}\subseteq Q_{1} \times M_{2}\) for some \((r_{1},s_{1}),\cdots,(r_{n-1},s_{n-1})\in R_{1}\times R_{2}\), \((a,b)\in M_{1}\times M_{2}\). Therefore \(g_{1}(r_{1}^{n-1},a)\subseteq Q_{1}\). Since \(Q_{1}\) is an \(n\)-ary classical prime subhypermodule of \(M_{1}\), we conclude that \(g_{1}(r_{i},1^{(n-2)},a)\subseteq Q_{1}\) for some \(1\leq i\leq n-1\) which implies \(g_{1}\times g_{2}((r_{i},s_{i}),(1,1)^{n-2},(a,b))\subseteq Q_{1}\times M_{2}\). This shows that \(Q_{1}\times M_{2}\) is an \(n\)-ary classical prime subhypermodule of \(M_{1}\times M_{2}\).
\((2)\Longrightarrow(3)\) It is obvious.
\((3)\Longrightarrow(1)\) Assume that \(g_{1}(r_{1}^{n-1},a)\subseteq Q_{1}\) for some \(r_{1}^{n-1}\in R_{1}\) and \(a\in M_{1}\). Let us pick \(0\neq b\in M_{2}\). Then \((0,0)\notin g_{1}\times g_{2}((r_{1},s_{1}),\cdots,(r_{n-1},s_{n-1}),(a,b)) \subseteq Q_{1}\times M_{2}\). Since \(Q_{1}\times M_{2}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\times M_{2}\), we get \(g_{1}\times g_{2}((r_{i},s_{i}),(1,1)^{(n-2)},(a,b))\subseteq Q_{1}\times M_{2}\) for some \(1\leq i\leq n-1\) which shows \(g_{1}(r_{i},1^{(n-2)},a)\subseteq Q_{1}\). Consequently, \(Q_{1}\) is an \(n\)-ary classical prime subhypermodule of \(M_{1}\).
## 5. \(n\)-ary \(\phi\)-classical prime subhypermodule
In this section, the concept of \(n\)-ary \(\phi\)-classical prime subhypermodules of an \((m,n)\)-hypermodule over \(R\) is introduced. The results obtained in the theorems seem to play an important role to study \(n\)-ary \(\phi\)-classical prime subhypermodules.
**Definition 5.1**.: Let \(\mathcal{SH}(M)\) be the set of all subhypermodules of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) be a function. A proper subhypermodule \(Q\) of \(M\) is said to be an \(n\)-ary \(\phi\)-classical prime subhypermodule if \(r_{1}^{n-1}\in R\) and \(a\in M\), \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\) implies that \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\).
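Two choices of \(\phi\) are worth keeping in mind, since they show how this notion unifies the previous ones. Taking \(\phi(Q)=\varnothing\) for every \(Q\), the defining condition reads \(g(r_{1}^{n-1},a)\subseteq Q\) implies \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\), i.e. \(Q\) is an \(n\)-ary classical prime subhypermodule, while taking \(\phi(Q)=\{0\}\) gives

\[g(r_{1}^{n-1},a)\subseteq Q-\{0\}\iff 0\notin g(r_{1}^{n-1},a)\subseteq Q,\]

which is exactly the condition of Definition 4.1, so in this case \(Q\) is an \(n\)-ary weakly classical prime subhypermodule.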
**Example 5.2**.: Assume that \(\mathbb{Z}\) is the ring of integers and \((\mathbb{Z},f,g)\) is the \((m,n)\)-hypermodule over \((\mathbb{Z},h,k)\) defined in Example 3.5 of [2]. For every subhypermodule \(N\) of \(\mathbb{Z}\), let \(S_{N}=\{r\in\mathbb{Z}\mid g(r,1^{(n-2)},\mathbb{Z})\subseteq N\}\). Consider the function \(\phi:\mathcal{SH}(\mathbb{Z})\longrightarrow\mathcal{SH}(\mathbb{Z})\cup\{\varnothing\}\) defined by \(\phi(N)=g(S_{N},1^{(n-2)},N)\) for every subhypermodule \(N\) of \(\mathbb{Z}\). Then the subhypermodule \(g(\mathbb{Z}^{n-1},p)\) of \(\mathbb{Z}\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule.
Suppose that \(N\) is a subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) is a function. Define \(\phi_{N}\) from \(\mathcal{SH}(M/N)\) into \(\mathcal{SH}(M/N)\cup\{\varnothing\}\) by \(\phi_{N}(K/N)=f(\phi(K),N,0^{(m-2)})/N\) for all \(K\in\mathcal{SH}(M)\) such that \(N\subseteq K\). If \(\phi_{N}(K)=\varnothing\), then we consider \(\phi_{N}(K/N)=\varnothing\).
**Theorem 5.3**.: _Let \(N\subseteq Q\) be proper subhypermodules of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) be a function. If \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\), then \(Q/N\) is a \(\phi_{N}\)-classical prime subhypermodule of \(M/N\)._
Proof.: Let \(G(r_{1}^{n-1},f(a,N,0^{(m-2)}))\subseteq Q/N-\phi_{N}(Q/N)\). Therefore \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\) which implies \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\) since \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\). Thus \(G(r_{i},1^{(n-2)},f(a,N,0^{(m-2)}))\subseteq Q/N\). This shows that \(Q/N\) is a \(\phi_{N}\)-classical prime subhypermodule of \(M/N\).
**Theorem 5.4**.: _Let \(N\) and \(Q\) be proper subhypermodules of an \((m,n)\)-hypermodule \(M\) over \(R\) such that \(N\subseteq Q\). Suppose that \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) is a function. Then the followings hold:_
1. _If_ \(Q\) _is an_ \(n\)_-ary_ \(\phi\)_-classical prime subhypermodule of_ \(M\) _such that_ \(\phi(Q)\subseteq N\)_, then_ \(Q/N\) _is an_ \(n\)_-ary weakly classical prime subhypermodule of_ \(M/N\)_._
2. _If_ \(Q/N\) _is an_ \(n\)_-ary_ \(\phi_{N}\)_-classical prime subhypermodule of_ \(M/N\) _such that_ \(N\subseteq\phi(Q)\)_, then_ \(Q\) _is an_ \(n\)_-ary_ \(\phi\)_-classical prime subhypermodule of_ \(M\)_._
3. _If_ \(N\) _is an_ \(n\)_-ary_ \(\phi\)_-classical prime subhypermodule of_ \(M\) _such that_ \(\phi(N)\subseteq\phi(Q)\) _and_ \(Q/N\) _is an_ \(n\)_-ary weakly classical prime subhypermodule of_ \(M/N\)_, then_ \(Q\) _is an_ \(n\)_-ary_ \(\phi\)_-classical prime subhypermodule of_ \(M\)_._
Proof.: (1) Let \(0\notin G(r_{1}^{n-1},f(a,N,0^{(m-2)}))\subseteq Q/N\) for some \(r_{1}^{n-1}\in R\) and \(a\in M\). Since \(\phi(Q)\subseteq N\), we conclude that \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\). Since \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\), we get \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\). This gives \(G(r_{i},1^{(n-2)},f(a,N,0^{(m-2)}))\subseteq Q/N\). Thus \(Q/N\) is an \(n\)-ary weakly classical prime subhypermodule of \(M/N\).
(2) Let \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some \(r_{1}^{n-1}\in R\) and \(a\in M\). Then we conclude that \(G(r_{1}^{n-1},f(a,N,0^{(m-2)}))\subseteq Q/N-\phi_{N}(Q/N)=Q/N-(\phi(Q)/N)\). Since \(Q/N\) is an \(n\)-ary \(\phi_{N}\)-classical prime subhypermodule of \(M/N\), we obtain \(G(r_{i},1^{(n-2)},f(a,N,0^{(m-2)}))\subseteq Q/N\) for some \(1\leq i\leq n-1\). It follows that \(g(r_{i},1^{(n-2)},a)\subseteq Q\). Consequently, \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\).
(3) Suppose that \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some \(r_{1}^{n-1}\in R\) and \(a\in M\). From \(\phi(N)\subseteq\phi(Q)\), it follows that \(g(r_{1}^{n-1},a)\nsubseteq\phi(N)\). Let \(g(r_{1}^{n-1},a)\subseteq N\). Since \(N\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\), we get \(g(r_{i},1^{(n-2)},a)\subseteq N\subseteq Q\) for some \(1\leq i\leq n-1\). Now, let \(g(r_{1}^{n-1},a)\nsubseteq N\). It implies that \(0\notin G(r_{1}^{n-1},f(a,N,0^{(m-2)}))\subseteq Q/N\) and so \(G(r_{i},1^{(n-2)},f(a,N,0^{(m-2)}))\subseteq Q/N\) for some \(1\leq i\leq n-1\) since \(Q/N\) is an \(n\)-ary weakly classical prime subhypermodule of \(M/N\). It shows that \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\). Hence \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\).
In view of Theorem 5.4, the following result is obtained.
**Corollary 5.5**.: Assume that \(Q\) is a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) is a function. Then the following conditions are equivalent:
1. \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\).
2. \(Q/\phi(Q)\) is an \(n\)-ary weakly classical prime subhypermodule of \(M/\phi(Q)\).
**Theorem 5.6**.: _Suppose that \(Q\) is a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) and \(\phi^{\prime}:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R) \cup\{\varnothing\}\) are two functions such that \(\mathcal{H}\mathcal{I}(R)\) is the set of all hyperideals of \(R\). Then the followings hold:_
1. _Let_ \(Q\) _be an_ \(n\)_-ary_ \(\phi\)_-classical prime subhypermodule of_ \(M\)_. Then_ \(g^{\prime}(r_{1}^{n})\in Q_{a}-\phi^{\prime}(Q_{a})\) _for_ \(r_{1}^{n}\in R\) _and for all_ \(a\in M-Q\) _with_ \(\phi(Q)_{a}\subseteq\phi^{\prime}(Q_{a})\) _implies that_ \(r_{i}\in Q_{a}\) _for some_ \(1\leq i\leq n\)_._
2. _If_ \(g^{\prime}(r_{1}^{n})\in Q_{a}-\phi(Q_{a})\) _for some_ \(r_{1}^{n}\in R\) _and for every_ \(a\in M-Q\) _with_ \(\phi^{\prime}(Q_{a})\subseteq\phi(Q)_{a}\) _implies that_ \(r_{i}\in Q_{a}\) _for some_ \(1\leq i\leq n\)_, then_ \(Q\) _is an_ \(n\)_-ary_ \(\phi\)_-classical prime subhypermodule of_ \(M\)_._
Proof.: (1) Let \(Q\) be an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\). Pick \(a\in M-Q\) with \(\phi(Q)_{a}\subseteq\phi^{\prime}(Q_{a})\). Assume that \(g^{\prime}(r_{1}^{n})\in Q_{a}-\phi^{\prime}(Q_{a})\) for some \(r_{1}^{n}\in R\). This means \(g(g^{\prime}(r_{1}^{n}),1^{(n-2)},a)=g(r_{1}^{n-2},g^{\prime}(r_{n-1}^{n},1^{ (n-2)}),a)\subseteq Q-\phi(Q)\). Since \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\), then \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for
some \(1\leq i\leq n-2\) or \(g(g^{\prime}(r_{n-1}^{n},1^{(n-2)}),1^{(n-2)},a)=g(r_{n-1}^{n},1^{(n-3)},a)\subseteq Q\). In the second possibility, we have \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(i\in\{n-1,n\}\). Then we conclude that \(r_{i}\in Q_{a}\) for some \(1\leq i\leq n\), as needed.
(2) Suppose that \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some \(r_{1}^{n-1}\in R\) and \(a\in M\). If \(a\in Q\), then the claim follows. If \(a\notin Q\), then from \(g^{\prime}(r_{1}^{n-1},1)\in Q_{a}-\phi^{\prime}(Q_{a})\) it follows that \(r_{i}\in Q_{a}\) for some \(1\leq i\leq n-1\). Hence \(g(r_{i},1^{(n-2)},a)\subseteq Q\). Consequently, \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\).
**Theorem 5.7**.: _Let \((M_{1},f_{1},g_{1})\) and \((M_{2},f_{2},g_{2})\) be two \((m,n)\)-hypermodules over \(R\) and \(h:M_{1}\longrightarrow M_{2}\) be an epimorphism. Let \(\phi_{1}:\mathcal{SH}(M_{1})\longrightarrow\mathcal{SH}(M_{1})\cup\{\varnothing\}\) and \(\phi_{2}:\mathcal{SH}(M_{2})\longrightarrow\mathcal{SH}(M_{2})\cup\{\varnothing\}\) be two functions._
1. _If_ \(Q_{2}\) _is an_ \(n\)_-ary_ \(\phi_{2}\)_-classical prime subhypermodule of_ \(M_{2}\) _such that_ \(\phi_{1}(h^{-1}(Q_{2}))=h^{-1}(\phi_{2}(Q_{2}))\)_, then_ \(h^{-1}(Q_{2})\) _is an_ \(n\)_-ary_ \(\phi_{1}\)_-classical prime subhypermodule of_ \(M_{1}\)_._
2. _If_ \(Q_{1}\) _is an_ \(n\)_-ary_ \(\phi_{1}\)_-classical prime subhypermodule of_ \(M_{1}\) _such that_ \(Ker(h)\subseteq Q_{1}\) _and_ \(\phi_{2}(h(Q_{1}))=h(\phi_{1}(Q_{1}))\)_, then_ \(h(Q_{1})\) _is an_ \(n\)_-ary_ \(\phi_{2}\)_-classical prime subhypermodule of_ \(M_{2}\)_._
Proof.: (1) Assume that \(g_{1}(r_{1}^{n-1},a_{1})\subseteq h^{-1}(Q_{2})-\phi_{1}(h^{-1}(Q_{2}))\) for some \(r_{1}^{n-1}\in R\) and \(a_{1}\in M_{1}\). This means \(h(g_{1}(r_{1}^{n-1},a_{1}))=g_{2}(r_{1}^{n-1},h(a_{1}))\subseteq Q_{2}\). From \(\phi_{1}(h^{-1}(Q_{2}))=h^{-1}(\phi_{2}(Q_{2}))\), it follows that \(g_{2}(r_{1}^{n-1},h(a_{1}))\nsubseteq\phi_{2}(Q_{2})\). Since \(Q_{2}\) is an \(n\)-ary \(\phi_{2}\)-classical prime subhypermodule of \(M_{2}\) and \(g_{2}(r_{1}^{n-1},h(a_{1}))\subseteq Q_{2}-\phi_{2}(Q_{2})\), we get \(g_{2}(r_{i},1^{(n-2)},h(a_{1}))\subseteq Q_{2}\) for some \(1\leq i\leq n-1\). Then \(h(g_{1}(r_{i},1^{(n-2)},a_{1}))\subseteq Q_{2}\) and so \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq h^{-1}(Q_{2})\). Thus \(h^{-1}(Q_{2})\) is an \(n\)-ary \(\phi_{1}\)-classical prime subhypermodule of \(M_{1}\).
(2) Suppose that \(g_{2}(r_{1}^{n-1},a_{2})\subseteq h(Q_{1})-\phi_{2}(h(Q_{1}))\) for some \(r_{1}^{n-1}\in R\) and \(a_{2}\in M_{2}\). Since \(h\) is an epimorphism, we have \(h(a_{1})=a_{2}\) for some \(a_{1}\in M_{1}\). Hence \(h(g_{1}(r_{1}^{n-1},a_{1}))=g_{2}(r_{1}^{n-1},h(a_{1}))=g_{2}(r_{1}^{n-1},a_{ 2})\subseteq h(Q_{1})\) and so \(g_{1}(r_{1}^{n-1},a_{1})\subseteq Q_{1}\). From \(\phi_{2}(h(Q_{1}))=h(\phi_{1}(Q_{1}))\), it follows that \(g_{1}(r_{1}^{n-1},a_{1})\subseteq Q_{1}-\phi(Q_{1})\). Since \(Q_{1}\) is an \(n\)-ary \(\phi_{1}\)-classical prime subhypermodule of \(M_{1}\), we conclude that \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\) for some \(1\leq i\leq n-1\). Thus we get \(h(g_{1}(r_{i},1^{(n-2)},a_{1}))=g_{2}(r_{i},1^{(n-2)},h(a_{1}))=g_{2}(r_{i},1^{ (n-2)},a_{2})\subseteq h(Q_{1})\). Consequently, \(h(Q_{1})\) is an \(n\)-ary \(\phi_{2}\)-classical prime subhypermodule of \(M_{2}\).
**Theorem 5.8**.: _Let \(Q\) be a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) be a function. If \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\), then \(Q_{g(r_{1}^{n-1},a)}\subseteq\phi(Q)_{g(r_{1}^{n-1},a)}\cup Q_{g(r_{1},1^{(n-2)},a)}\cup\cdots\cup Q_{g(r_{n-1},1^{(n-2)},a)}\) for all \(r_{1}^{n-1}\in R\) and \(a\in M\)._
Proof.: Let \(x\in Q_{g(r_{1}^{n-1},a)}\). This means that \(g(x,1^{(n-2)},g(r_{1}^{n-1},a))\subseteq Q\). Let \(g(x,1^{(n-2)},g(r_{1}^{n-1},a))\subseteq\phi(Q)\). It implies that \(x\in\phi(Q)_{g(r_{1}^{n-1},a)}\), as needed. So we consider \(g(x,1^{(n-2)},g(r_{1}^{n-1},a))\nsubseteq\phi(Q)\). Since \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\) and \(g(x,1^{(n-2)},g(r_{1}^{n-1},a))=g(r_{1}^{n-2},g^{\prime}(r_{n-1},x,1^{(n-2)}),a) \subseteq Q-\phi(Q)\), we conclude that \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-2\) or \(g(g^{\prime}(r_{n-1},x,1^{(n-2)}),1^{(n-2)},a)=g(x,1^{(n-2)},g(r_{n-1},1^{(n-2)},a))\subseteq Q\). In the former case, we get \(g(x,1^{(n-2)},g(r_{i},1^{(n-1)},a))\subseteq Q\) which means \(x\in Q_{g(r_{i},1^{(n-2)},a)}\) for some \(1\leq i\leq n-2\). In the second case, we obtain \(x\in Q_{g(r_{n-1},1^{(n-2)},a)}\).
The following theorem offers a characterization of \(n\)-ary \(\phi\)-classical prime subhypermodules of \(M\).
**Theorem 5.9**.: _Let \(Q\) be a proper subhypermodule of an \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) be a function. Then \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\) if and only if for every hyperideals \(I_{1}^{n-1}\) of \(R\) and \(a\in M\), \(g(I_{1}^{n-1},a)\subseteq Q-\phi(Q)\) implies that \(g(I_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\)._
Proof.: (\(\Longrightarrow\)) Assume that \(g(I_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some hyperideals \(I_{1}^{n-1}\) of \(R\) and \(a\in M\) but \(g(I_{i},1^{(n-2)},a)\nsubseteq Q\) for all \(1\leq i\leq n-1\). Then there exists \(r_{i}\in I_{i}\) for each \(1\leq i\leq n-1\) such that \(g(r_{i},1^{(n-2)},a)\nsubseteq Q\). Since \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\) and \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\), we conclude that \(g(r_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\) which is a contradiction.
(\(\Longleftarrow\)) Suppose that \(g(r_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some \(r_{1}^{n-1}\in R\) and \(a\in M\). Then we have \(g(\langle r_{1}\rangle,\cdots,\langle r_{n-1}\rangle,a)\subseteq Q\). Since \(g(r_{1}^{n-1},a)\nsubseteq\phi(Q)\), we conclude that \(g(\langle r_{1}\rangle,\cdots,\langle r_{n-1}\rangle,a)\nsubseteq\phi(Q)\). By the hypothesis, we have \(g(\langle r_{i}\rangle,1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\). Therefore we get \(g(r_{i},1^{(n-2)},a)\subseteq Q\) which means \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\).
Recall from [2] that an \((m,n)\)-hypermodule \(M\) over \(R\) is a multiplication \((m,n)\)-hypermodule if for every subhypermodule \(K\) of \(M\), there exists a hyperideal \(I\) of \(R\) with \(K=g(I,1^{(n-2)},M)\). Let \(K_{i}\) be a subhypermodule of a multiplication \((m,n)\)-hypermodule \(M\) for each \(1\leq i\leq n\) such that \(K_{i}=g(I_{i},1^{(n-2)},M)\) for some hyperideal \(I_{i}\) of \(R\). Then the product of \(K_{1},\cdots,K_{n}\), denoted by \(g(K_{1}^{n})\), is defined by \(g(K_{1}^{n})=g(g^{\prime}(I_{1}^{n}),1^{(n-2)},M)\). Also, we define \(g(K_{1}^{n-1},a)=g(I_{1}^{n-1},a)\) and \(g(K_{i},M^{(n-2)},a)=g(I_{i},1^{(n-2)},a)\) for each \(1\leq i\leq n-1\) and for any \(a\in M\).
**Theorem 5.10**.: _Let \(Q\) be a proper subhypermodule of a multiplication \((m,n)\)-hypermodule \(M\) over \(R\) and \(\phi:\mathcal{SH}(M)\longrightarrow\mathcal{SH}(M)\cup\{\varnothing\}\) be a function. Then \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\) if and only if \(g(Q_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some subhypermodules \(Q_{1}^{n-1}\) of \(M\) and \(a\in M\) implies that \(g(Q_{i},M^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\)._
Proof.: (\(\Longrightarrow\)) Assume that \(g(Q_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some subhypermodules \(Q_{1}^{n-1}\) of \(M\) and \(a\in M\). Since \(M\) is a multiplication \((m,n)\)-hypermodule, there exist some hyperideals \(I_{1}^{n-1}\) of \(R\) with \(Q_{i}=g(I_{i},1^{(n-2)},M)\) for each \(1\leq i\leq n-1\). Therefore we have \(g(Q_{1}^{n-1},a)=g(I_{1}^{n-1},a)\subseteq Q-\phi(Q)\). Since \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\), we get \(g(I_{i},1^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\) by Theorem 5.9. This means that \(g(Q_{i},M^{(n-2)},a)\subseteq Q\), as needed.
(\(\Longleftarrow\)) Let \(g(I_{1}^{n-1},a)\subseteq Q-\phi(Q)\) for some hyperideals \(I_{1}^{n-1}\) of \(R\) and \(a\in M\). Now, we put \(Q_{i}=g(I_{i},1^{(n-2)},M)\) for each \(1\leq i\leq n-1\). Then we have \(g(Q_{1}^{n-1},a)\subseteq Q-\phi(Q)\) which implies \(g(Q_{i},M^{(n-2)},a)\subseteq Q\) for some \(1\leq i\leq n-1\). Therefore \(g(I_{i},1^{(n-2)},a)\subseteq Q\). Thus \(Q\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M\) by Theorem 5.9.
**Theorem 5.11**.: _Assume that \((M_{1}\times M_{2},f_{1}\times f_{2},g_{1}\times g_{2})\) is an \((m,n)\)-hypermodule over \((R_{1}\times R_{2},f_{1}^{\prime}\times f_{2}^{\prime},g_{1}^{\prime}\times g_{2} ^{\prime})\) such that \((M_{1},f_{1},g_{1})\) is an \((m,n)\)-hypermodule over \((R_{1},f_{1}^{\prime},g_{1}^{\prime})\) and \((M_{2},f_{2},g_{2})\) is an \((m,n)\)-hypermodule over \((R_{2},f_{2}^{\prime},g_{2}^{\prime})\). Let \(\phi:\mathcal{SH}(M_{1}\times M_{2})\longrightarrow\mathcal{SH}(M_{1} \times M_{2})\cup\{\varnothing\}\) be a function. If \(Q_{1}\) is an \(n\)-ary weakly
classical prime subhypermodule of \(M_{1}\) with \(\{0\}\times M_{2}\subseteq\phi(Q_{1}\times M_{2})\), then \(Q_{1}\times M_{2}\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M_{1}\times M_{2}\)._
Proof.: Let \(g_{1}\times g_{2}((r_{1},s_{1}),\cdots(r_{n-1},s_{n-1}),(a,b))=\{(y_{1},y_{2}) \mid y_{1}\in g_{1}(r_{1}^{n-1},a),y_{2}\in g_{2}(s_{1}^{n-1},b)\}\subseteq Q_{ 1}\times M_{2}-\phi(Q_{1}\times M_{2})\) for some \((r_{1},s_{1}),\cdots,(r_{n-1},s_{n-1})\in R_{1}\times R_{2}\) and \((a,b)\in M_{1}\times M_{2}\). Therefore \(0\notin g_{1}(r_{1}^{n-1},a)\subseteq Q_{1}\). Since \(Q_{1}\) is an \(n\)-ary weakly classical prime subhypermodule of \(M_{1}\), we conclude that \(g_{1}(r_{i},1^{(n-2)},a)\subseteq Q_{1}\) for some \(1\leq i\leq n-1\) which implies \(g_{1}\times g_{2}((r_{i},s_{i}),(1,1)^{n-2},(a,b))\subseteq Q_{1}\times M_{2}\). This means that \(Q_{1}\times M_{2}\) is an \(n\)-ary \(\phi\)-classical prime subhypermodule of \(M_{1}\times M_{2}\).
**Theorem 5.12**.: _Suppose that \((M_{1}\times M_{2},f_{1}\times f_{2},g_{1}\times g_{2})\) is an \((m,n)\)-hypermodule over \((R_{1}\times R_{2},f_{1}^{\prime}\times f_{2}^{\prime},g_{1}^{\prime}\times g_{ 2}^{\prime})\) such that \((M_{1},f_{1},g_{1})\) is an \((m,n)\)-hypermodule over \((R_{1},f_{1}^{\prime},g_{1}^{\prime})\) and \((M_{2},f_{2},g_{2})\) is an \((m,n)\)-hypermodule over \((R_{2},f_{2}^{\prime},g_{2}^{\prime})\). Let \(\phi_{1}:\mathcal{SH}(M_{1})\longrightarrow\mathcal{SH}(M_{1})\cup\{\varnothing\}\) and \(\phi_{2}:\mathcal{SH}(M_{2})\longrightarrow\mathcal{SH}(M_{2})\cup\{\varnothing\}\) be two functions such that \(\phi_{2}(M_{2})=M_{2}\). Then \(Q_{1}\times M_{2}\) is an \(n\)-ary \(\phi_{1}\times\phi_{2}\)-classical prime subhypermodule of \(M_{1}\times M_{2}\) if and only if \(Q_{1}\) is an \(n\)-ary \(\phi_{1}\)-classical prime subhypermodule of \(M_{1}\)._
Proof.: (\(\Longrightarrow\)) Assume that \(Q_{1}\times M_{2}\) is an \(n\)-ary \(\phi_{1}\times\phi_{2}\)-classical prime subhypermodule of \(M_{1}\times M_{2}\). Let \(g_{1}(r_{1}^{n-1},a_{1})\subseteq Q_{1}-\phi_{1}(Q_{1})\) for some \(r_{1}^{n-1}\in R\) and \(a_{1}\in M_{1}\). Therefore we have \(g_{1}\times g_{2}((r_{1},1),\cdots,(r_{n-1},1)(a_{1},a_{2}))\subseteq Q_{1} \times M_{2}-\phi_{1}\times\phi_{2}(Q_{1}\times M_{2})=Q_{1}\times M_{2}-(\phi_ {1}(Q_{1})\times\phi_{2}(M_{2}))\) for all \(a_{2}\in M_{2}\). Since \(Q_{1}\times M_{2}\) is an \(n\)-ary \(\phi_{1}\times\phi_{2}\)-classical prime subhypermodule of \(M_{1}\times M_{2}\), we obtain \(g_{1}\times g_{2}((r_{i},1),(1,1)^{(n-2)},(a_{1},a_{2}))\subseteq Q_{1}\times M _{2}\) for some \(1\leq i\leq n-1\) which means \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\). This shows that \(Q_{1}\) is an \(n\)-ary \(\phi_{1}\)-classical prime subhypermodule of \(M_{1}\).
(\(\Longleftarrow\)) Let \(Q_{1}\) be an \(n\)-ary \(\phi_{1}\)-classical prime subhypermodule of \(M_{1}\). Assume that \(g_{1}\times g_{2}((r_{1},s_{1}),\cdots,(r_{n-1},s_{n-1})(a_{1},a_{2})) \subseteq Q_{1}\times M_{2}-\phi_{1}\times\phi_{2}(Q_{1}\times M_{2})\). From \(\phi_{2}(M_{2})=M_{2}\), it follows that \(g_{1}(r_{1}^{n-1},a_{1})\subseteq Q_{1}-\phi_{1}(Q_{1})\). Then we have \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\) for some \(1\leq i\leq n-1\). So we conclude that \(g_{1}\times g_{2}((r_{i},s_{i}),(1,1)^{(n-2)},(a_{1},a_{2}))\subseteq Q_{1} \times M_{2}\). Consequently, \(Q_{1}\times M_{2}\) is an \(n\)-ary \(\phi_{1}\times\phi_{2}\)-classical prime subhypermodule of \(M_{1}\times M_{2}\).
**Theorem 5.13**.: _Let \((M_{1}\times M_{2},f_{1}\times f_{2},g_{1}\times g_{2})\) be an \((m,n)\)-hypermodule over \((R_{1}\times R_{2},f_{1}^{\prime}\times f_{2}^{\prime},g_{1}^{\prime}\times g_{ 2}^{\prime})\) such that \((M_{1},f_{1},g_{1})\) is an \((m,n)\)-hypermodule over \((R_{1},f_{1}^{\prime},g_{1}^{\prime})\) and \((M_{2},f_{2},g_{2})\) is an \((m,n)\)-hypermodule over \((R_{2},f_{2}^{\prime},g_{2}^{\prime})\). Assume that \(\phi_{1}:\mathcal{SH}(M_{1})\longrightarrow\mathcal{SH}(M_{1})\cup\{\varnothing\}\) and \(\phi_{2}:\mathcal{SH}(M_{2})\longrightarrow\mathcal{SH}(M_{2})\cup\{\varnothing\}\) be two functions. If \(Q_{1}\times Q_{2}\) is an \(n\)-ary \(\phi_{1}\times\phi_{2}\)-classical prime subhypermodule of \(M_{1}\times M_{2}\), then \(Q_{1}\) is an \(n\)-ary \(\phi_{1}\)-classical prime subhypermodule of \(M_{1}\) and \(Q_{2}\) is an \(n\)-ary \(\phi_{2}\)-classical prime subhypermodule of \(M_{2}\)._
Proof.: Let \(Q_{1}\times Q_{2}\) be an \(n\)-ary \(\phi_{1}\times\phi_{2}\)-classical prime subhypermodule of \(M_{1}\times M_{2}\). Assume that \(g_{1}(r_{1}^{n-1},a_{1})\subseteq Q_{1}-\phi_{1}(Q_{1})\) for some \(r_{1}^{n-1}\in R\) and \(a\in M_{1}\). Pick \(a_{2}\in Q_{2}\). So \(g_{1}\times g_{2}((r_{1},1),\cdots,(r_{n-1},1)(a_{1},a_{2}))\subseteq Q_{1} \times Q_{2}-\phi_{1}\times\phi_{2}(Q_{1}\times Q_{2})\). Therefore we get \(g_{1}\times g_{2}((r_{i},1),(1,1)^{(n-2)},(a_{1},a_{2}))\subseteq Q_{1}\times Q _{2}\) for some \(1\leq i\leq n-1\) which implies \(g_{1}(r_{i},1^{(n-2)},a_{1})\subseteq Q_{1}\). Thus \(Q_{1}\) is an \(n\)-ary \(\phi_{1}\)-classical prime subhypermodule of \(M_{1}\). Similarly, we can show that \(Q_{2}\) is an \(n\)-ary \(\phi_{2}\)-classical prime subhypermodule of \(M_{2}\).
## 6. conclusion
The notion of prime submodules has a significant place in the theory of modules, and it is used to characterize certain classes of modules. In this paper, we studied some generalizations of this notion in the context of \((m,n)\)-hypermodules. We introduced \(n\)-ary classical prime, \(n\)-ary weakly classical prime and \(n\)-ary \(\phi\)-classical prime subhypermodules. In this direction we gave some characterizations of such subhypermodules. Future work can focus on defining the notions of classical primary, weakly classical primary and \(\phi\)-classical primary subhypermodules of an \((m,n)\)-hypermodule over a Krasner \((m,n)\)-hyperring.
|
2305.03653 | Query Expansion by Prompting Large Language Models | Query expansion is a widely used technique to improve the recall of search
systems. In this paper, we propose an approach to query expansion that
leverages the generative abilities of Large Language Models (LLMs). Unlike
traditional query expansion approaches such as Pseudo-Relevance Feedback (PRF)
that relies on retrieving a good set of pseudo-relevant documents to expand
queries, we rely on the generative and creative abilities of an LLM and
leverage the knowledge inherent in the model. We study a variety of different
prompts, including zero-shot, few-shot and Chain-of-Thought (CoT). We find that
CoT prompts are especially useful for query expansion as these prompts instruct
the model to break queries down step-by-step and can provide a large number of
terms related to the original query. Experimental results on MS-MARCO and BEIR
demonstrate that query expansions generated by LLMs can be more powerful than
traditional query expansion methods. | Rolf Jagerman, Honglei Zhuang, Zhen Qin, Xuanhui Wang, Michael Bendersky | 2023-05-05T16:16:45Z | http://arxiv.org/abs/2305.03653v1 | # Query Expansion by Prompting Large Language Models
###### Abstract.
Query expansion is a widely used technique to improve the recall of search systems. In this paper, we propose an approach to query expansion that leverages the generative abilities of Large Language Models (LLMs). Unlike traditional query expansion approaches such as Pseudo-Relevance Feedback (PRF) that relies on retrieving a good set of pseudo-relevant documents to expand queries, we rely on the generative and creative abilities of an LLM and leverage the knowledge inherent in the model. We study a variety of different prompts, including zero-shot, few-shot and Chain-of-Thought (CoT). We find that CoT prompts are especially useful for query expansion as these prompts instruct the model to break queries down step-by-step and can provide a large number of terms related to the original query. Experimental results on MS-MARCO and BEIR demonstrate that query expansions generated by LLMs can be more powerful than traditional query expansion methods.
We note that our work is similar to the recent works of (Han et al., 2017) and (Kumar et al., 2018): leveraging an LLM to expand a query. However, we differentiate our work in several important ways: First, we study a number of different prompts whereas (Kumar et al., 2018) focuses on a single few-shot prompt and (Han et al., 2017) does not study prompts. Second, unlike (Kumar et al., 2018) and (Han et al., 2017), we focus on generating _query expansion terms_ instead of entire pseudo documents. To this end, we demonstrate the performance of our prompts on a variety of _smaller_ model sizes which helps understand both the limitations and the practical capabilities of an LLM approach to query expansion. Finally, we experiment with entirely open-source models, inviting reproducibility and openness of research, while (Kumar et al., 2018) experiments with a single type of model which is only accessible through a third-party API.
## 3. Methodology
We formulate the query expansion problem as follows: given a query \(q\) we wish to generate an _expanded query_\(q^{\prime}\) that contains additional query terms that may help in retrieving relevant documents. In particular we study the use of an LLM to expand the query terms and generate a new query \(q^{\prime}\). Since the LLM output may be verbose, we repeat the original query terms 5 times to upweigh their relative importance. This is the same as the trick employed by (Kumar et al., 2018). More formally:
\[q^{\prime}=\mathrm{Concat}(q,q,q,q,q,\mathrm{LLM}(\mathit{prompt}_{q})), \tag{1}\]
where \(\mathrm{Concat}\) is the string concatenation operator, \(q\) is the original query, LLM is a Large Language Model and \(\mathit{prompt}_{q}\) is the generated prompt based on the query (and potentially side information like few-shot examples or PRF documents).
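As a minimal sketch of Equation (1), the expansion step reduces to a string operation around a generic text-generation callable; the `llm` and `build_prompt` arguments below are placeholders for whichever model and prompt are being evaluated, not a specific API.

```python
def expand_query(query: str, llm, build_prompt, num_repeats: int = 5) -> str:
    """Build q' = Concat(q, q, q, q, q, LLM(prompt_q)) as in Equation (1).

    `llm` is any callable mapping a prompt string to generated text, and
    `build_prompt` maps the query (plus optional side information such as
    few-shot examples or PRF documents) to a prompt string.
    """
    prompt_q = build_prompt(query)
    expansion = llm(prompt_q)
    # Repeating the original query terms upweighs them relative to the
    # potentially verbose LLM output.
    return " ".join([query] * num_repeats + [expansion])
```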
In this paper we study eight different prompts:
* **Q2D**: The Query2Doc (Kumar et al., 2018) few-shot prompt, asking the model to write a passage that answers the query.
* **Q2D/ZS**: A zero-shot version of **Q2D**.
* **Q2D/PRF**: A zero-shot prompt like **Q2D/ZS** but which also contains extra context in the form of the top-3 retrieved PRF documents for the query.
* **Q2E**: Similar to the Query2Doc few-shot prompt but with examples of query _expansion terms_ instead of _documents_.
* **Q2E/ZS**: A zero-shot version of **Q2E**.
* **Q2E/PRF**: A zero-shot prompt like **Q2E/ZS** but with extra context in the form of PRF documents like **Q2D/PRF**.
* **CoT**: A zero-shot Chain-of-Thought prompt which instructs the model to provide a rationale for its answer.
* **CoT/PRF**: A prompt like **CoT** but which also contains extra context in the form of the top-3 retrieved PRF documents for the query.
Zero-shot prompts (**Q2D/ZS** and **Q2E/ZS**) are the simplest as they consist of a simple plaintext instruction and the input query. Few-shot prompts (**Q2D** and **Q2E**) additionally contain several examples to support in-context learning, for example they contain queries and corresponding expansions. Chain-of-Thought (**CoT**) prompts formulate their instruction to obtain a more verbose output from the model by asking it to break its response down step-by-step. Finally, Pseudo-Relevance Feedback (-/**PRF**) variations of prompts use the top-3 retrieved documents as additional context for the model. See Appendix A for the exact prompts that are used in the experiments.
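To make the prompt families concrete, the snippet below gives illustrative paraphrases of a few of the templates; these are hypothetical stand-ins written for exposition, and the exact wordings used in the experiments are the ones listed in Appendix A.

```python
# Illustrative (hypothetical) prompt templates; see Appendix A for the exact prompts.
PROMPTS = {
    "Q2D/ZS": "Write a passage that answers the following query: {query}",
    "Q2E/ZS": "Write a list of keywords that expand the following query: {query}",
    "CoT": (
        "Answer the following query, giving the rationale step by step "
        "before answering: {query}"
    ),
    "Q2D/PRF": (
        "Context:\n{prf_doc_1}\n{prf_doc_2}\n{prf_doc_3}\n"
        "Write a passage that answers the following query: {query}"
    ),
}

def build_prompt(name: str, query: str, prf_docs=None) -> str:
    """Fill a template; PRF slots are left empty when no documents are given."""
    prf_docs = list(prf_docs or [])
    fields = {f"prf_doc_{i}": (prf_docs[i - 1] if i <= len(prf_docs) else "")
              for i in (1, 2, 3)}
    return PROMPTS[name].format(query=query, **fields)
```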
## 4. Experiments
To validate the effectiveness of the LLM-based query expansion we run experiments on two retrieval tasks: MS-MARCO (Kumar et al., 2018) passage retrieval and BEIR (Kumar et al., 2018). For the retrieval system we use BM25 (Kumar et al., 2018; Kumar et al., 2018) as implemented by Terrier (Tran et al., 2017)1. We use the default BM25 parameters (\(b=0.75\), \(k_{1}=1.2\), \(k_{3}=8.0\)) provided by Terrier.
Footnote 1: [http://terrier.org/](http://terrier.org/)
### Baselines
To analyze the LLM-based query expansion methods we compare against several classical PRF-based query expansion methods (Bordes and Kulesh, 2017):
* Bo1: Bose-Einstein 1 weighting
* Bo2: Bose-Einstein 2 weighting
* KL: Kullback-Leibler weighting
The implementations for these are provided by Terrier. In all cases we use the default Terrier settings for query expansion: 3 PRF docs and 10 expansion terms.
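For reference, an equivalent baseline pipeline can be assembled through PyTerrier's bindings to Terrier; the sketch below is an assumption about tooling (the experiments only require Terrier itself) and uses the same defaults of 3 feedback documents and 10 expansion terms.

```python
import pyterrier as pt

if not pt.started():
    pt.init()

# Hypothetical location of a pre-built Terrier index over the collection.
index = pt.IndexFactory.of("/path/to/msmarco_index")

bm25 = pt.BatchRetrieve(index, wmodel="BM25")
# Bo1 pseudo-relevance feedback; Bo2 and KL expansion are available analogously
# via pt.rewrite.Bo2QueryExpansion and pt.rewrite.KLQueryExpansion.
bo1 = pt.rewrite.Bo1QueryExpansion(index, fb_docs=3, fb_terms=10)

# Retrieve, expand the query with Bo1, then retrieve again with the expanded query.
bm25_bo1 = bm25 >> bo1 >> bm25
```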
Furthermore, we include the prompt from Query2Doc (Kumar et al., 2018) as a baseline. However, we do not compare against their exact setup since they use a significantly larger model than the models we study in this paper. The comparisons in this paper are focused on prompts and not on the exact numbers produced by different, potentially much larger, models. Furthermore, for models with a small receptive field (specifically the Flan-T5 models) we only use a 3-shot Q2D prompt instead of the standard 4-shot prompt to prevent the prompt from being truncated.
### Language Models
We compare the prompts on two types of models, Flan-T5 (Fan et al., 2017; Kulesh and Flan, 2018) and Flan-UL2 (Kulesh and Flan, 2018), at various model sizes:
* Flan-T5-Small (60M parameters)
* Flan-T5-Base (220M parameters)
* Flan-T5-Large (770M parameters)
* Flan-T5-XL (3B parameters)
* Flan-T5-XXL (11B parameters)
* Flan-UL2 (20B parameters)
We choose to use the Flan (Fan et al., 2017; Kulesh and Flan, 2018) versions of the T5 (Kulesh and Flan, 2018) and UL2 (Kulesh and Flan, 2018) models as they are fine-tuned to follow instructions which is critical when using prompt-based approaches. Furthermore, all of these models are available as open-source2.
Footnote 2: Models are available at [https://huggingface.co/docs/transformers/model_doc/flan-t5](https://huggingface.co/docs/transformers/model_doc/flan-t5) and [https://huggingface.co/google/flan-ul2](https://huggingface.co/google/flan-ul2)
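A minimal sketch of querying one of these checkpoints for an expansion with the Hugging Face `transformers` library is shown below; the checkpoint choice, prompt wording, and generation settings are illustrative assumptions rather than the exact configuration used in our experiments.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-xl"  # any of the Flan-T5 / Flan-UL2 checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

query = "who played prince humperdinck in the princess bride"
prompt = f"Answer the following query, giving the rationale step by step: {query}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
expansion = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Repeat the original query terms to upweigh them, as in Equation (1).
expanded_query = " ".join([query] * 5 + [expansion])
```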
### Metrics
Since we are interested in query expansion, which is largely focused on improving the recall of first-stage retrieval, we use Recall@1K as our core evaluation metric. We also report top-heavy ranking metrics using MRR@10 (Kumar et al., 2018) and NDCG@10 (Kumar et al., 2018) to better understand how the models change the top retrieved results. We report all our results with significance testing using a paired \(t\)-test and consider a result significant at \(p<0.01\).
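For completeness, the two rank-based metrics can be computed per query directly from a ranked list and the set of relevant documents; the short sketch below is a from-scratch illustration rather than the evaluation tooling actually used.

```python
def recall_at_k(ranked_doc_ids, relevant_ids, k=1000):
    """Fraction of relevant documents retrieved within the top-k results."""
    retrieved = set(ranked_doc_ids[:k])
    return len(retrieved & relevant_ids) / max(len(relevant_ids), 1)

def mrr_at_k(ranked_doc_ids, relevant_ids, k=10):
    """Reciprocal rank of the first relevant document within the top-k."""
    for rank, doc_id in enumerate(ranked_doc_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0
```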
## 5. Results
### Ms-Marco Passage Ranking
Table 1 presents the results on the MS-MARCO passage ranking task. The classical query expansion baselines (Bo1, Bo2 and KL), already provide a useful gain in terms of Recall@1K over the standard BM25 retrieval. In line with the results of (Krishnan et al., 2017), we observe that this increase in recall comes at the cost of top-heavy ranking metrics such as MRR@10 and NDCG@10.
Next, we see the results of LLM-based query expansion depend heavily on the type of prompts used. Similar to the findings of (Krishnan et al., 2017), the Query2Doc prompt (**Q2D**) can provide a substantial gain in terms of Recall@1K over the classical approaches. Interestingly, Query2Doc does not only improve recall, but also improves the top-heavy ranking metrics such as MRR@10 and NDCG@10, providing a good improvement across metrics. This contrasts with classical query expansion methods which typically sacrifice top-heavy ranking metrics in order to improve recall.
Finally, the best performance is obtained by **CoT** (and the corresponding PRF-enhanced prompt **CoT/PRF**). This particular prompt instructs the model to generate a verbose explanation by breaking its answer down into steps. We hypothesize that this verbosity may lead to many potential keywords that are useful for query expansion. We also find that adding PRF documents to the prompt helps significantly on top-heavy ranking metrics like MRR@10 and NDCG@10 across models and prompts. A possible explanation for this is that LLMs are effective at distilling the PRF documents, which may already contain relevant passages, by attending over the most promising keywords and using them in the output. We provide a more concrete example of the prompt output in Appendix B.
### Beir
The BEIR datasets comprise many different zero-shot information retrieval tasks from a variety of domains. We compare the performance of the different prompts on the BEIR datasets in Table 2. The first thing to observe here is that the classical PRF-based query expansion baselines still work very well, especially on domain-specific datasets such as trec-covid, scidocs and touche2020. These datasets are largely academic and scientific in nature, and the PRF documents may provide useful query terms in these cases. In contrast, the general-purpose LLMs may not have sufficient domain knowledge to be useful for these datasets. Second, we note that the question-answering style datasets (fiqa, hotpotqa, msmarco and nq) seem to benefit the most from an LLM approach to query expansion. It is likely that the language model is producing relevant answers to the query which helps retrieve the relevant passages more effectively. Across all datasets, the **Q2D/PRF** prompt produces the highest average Recall@1K, with the **CoT** prompt as a close second.
### The Impact of Model Size
To understand the practical capabilities and limitations of an LLM-based query expander, we compare different model sizes in Figure 2. We range the model size from 60M parameters (Flan-T5-small) up to 11B (Flan-T5-XXL) and also try a 20B parameter model (Flan-UL2) but note that the latter also has a different pre-training objective. In general we observe the expected trend that larger models tend to perform better. The **Q2D** approach requires at least an 11B parameter model to reach parity with the BM25+Bo1 baseline. In contrast, the **CoT** approach only needs a 3B parameter model to reach parity. Furthermore, adding PRF documents to the **CoT** prompt seems to help stabilize the performance for smaller model sizes but does inhibit its performance at larger capacities. A possible explanation for this behavior is that the PRF documents decrease the creativity of
\begin{table}
\begin{tabular}{l r r r} \hline \hline & Recall@1K & MRR@10 & NDCG@10 \\ \hline BM25 & 87.82 & 18.77 & 23.44 \\ BM25 + Bo1 & 88.68 & 17.75 & 22.48 \\ BM25 + Bo2 & 88.32 & 17.58 & 22.30 \\ BM25 + KL & 88.62 & 17.71 & 22.44 \\ \hline
**Flan-T5-XXL** (11B) & & & \\ Q2D & 88.76 & 19.07 & 23.76 \\ Q2D/ZS & 88.88 & 18.55 & 23.13 \\ Q2D/PRF & 89.31 & 22.13\({}^{\blacktriangle}\) & 26.43\({}^{\blacktriangle}\) \\ Q2E & 87.74 & 18.74 & 23.37 \\ Q2E/ZS & 87.93 & 18.79 & 23.45 \\ Q2E/PRF & 88.20 & 19.20 & 23.83 \\ CoT & 89.86 & 19.16 & 23.82 \\ CoT/PRF & 89.02 & 22.08\({}^{\blacktriangle}\) & 26.32\({}^{\blacktriangle}\) \\ \hline
**Flan-UL2** (20B) & & & \\ Q2D & 89.87 & 19.22 & 23.96 \\ Q2D/ZS & 86.60 & 15.56 & 19.54 \\ Q2D/PRF & 89.28 & 21.42\({}^{\blacktriangle}\) & 25.82\({}^{\blacktriangle}\) \\ Q2E & 88.04 & 18.84 & 23.52 \\ Q2E/ZS & 88.11 & 18.87 & 23.56 \\ Q2E/PRF & 88.43 & 19.24 & 23.90 \\ CoT & **90.61\({}^{\blacktriangle}\)** & 20.05\({}^{\blacktriangle}\) & 24.85\({}^{\blacktriangle}\) \\ CoT/PRF & 89.30 & **22.62\({}^{\blacktriangle}\)** & **26.89\({}^{\blacktriangle}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1. LLM-based query expansion on the MS-MARCO passage ranking dev set. \({}^{\blacktriangle}\) indicates a statistically significant (paired \(t\)-test, \(p<0.01\)) improvement relative to the Q2D Flan-UL2 method. The best result per metric is bolded.
Figure 2. Performance on MS-MARCO passage ranking dev set across different model sizes. The shaded areas indicate a 99% confidence interval.
the model, as it may focus too much on the provided documents. Although this helps prevent the model from making errors at smaller model sizes, it also inhibits the creative abilities that we wish to leverage at larger model sizes. The **CoT/PRF** prompt is able to outperform the other prompts at the 770M parameter model size, making it a good candidate for possible deployment in realistic search settings where serving a larger model may be impossible. Overall, it is clear that the significant gains require large models, which may limit the practical application of an LLM approach to query expansion. Distillation has been shown to be an effective way to transfer the ability of a large model to a smaller one. We leave the study of distillation of these models for query expansion as future work.
## 6. Limitations & Future Work
There are a number of limitations in our work: First, we only study sparse retrieval (BM25), which is where query expansion is important. Dense retrieval systems (e.g. dual encoders) are less prone to the vocabulary gap and, as a result, are less likely to benefit from query expansion. Wang et al. (Wang et al., 2019) have already studied this setting in more detail and we leave the analysis of our prompts for a dense retrieval setting as future work. Second, our work focuses on Flan (Fan et al., 2019) instruction-finetuned language models. We chose these models due to their ability to follow instructions and the fact that these models are open-source. Our work can naturally be extended to other language models (Wang et al., 2019; Wang et al., 2019; Wang et al., 2020) and we leave the study of such models as a topic for future research. Third, we study specific prompt templates (see Appendix A) and there may be other ways to formulate the different prompts. Finally, the computational cost of LLMs may make it prohibitive to deploy LLM-based query expansions in practice. It may be possible to distill the output of the large model into a smaller servable model. How to productionize LLM-based query expansions is left as an open problem.
## 7. Conclusion
In this paper we study LLM-based query expansions. In contrast to traditional PRF-based query expansion, LLMs are not restricted to the initial retrieved set of documents and may be able to generate expansion terms not covered by traditional methods. Our proposed method is simple: we prompt a large language model and provide it a query, then we use the model's output to expand the original query with new terms that help during document retrieval.
Our results show that Chain-of-Thought prompts are especially promising for query expansion, since they instruct the model to generate verbose explanations that can cover a wide variety of new keywords. Furthermore, our results indicate that including PRF documents in various prompts can improve top-heavy ranking metric performance during the retrieval stage _and_ is more robust when used with smaller model sizes, which can help practical deployment of LLM-based query expansion.
As demonstrated in this paper, IR tasks like query expansion can benefit from LLMs. As the capabilities of LLMs continue to improve, it is promising to see their capabilities translate to various IR tasks. Furthermore, as LLMs become more widely available, they will be easier to use and deploy as core parts of IR systems which is exciting for both practitioners and researchers of such systems.
\begin{table}
\begin{tabular}{l c|c c c|c c c|c c c|c c} \hline \hline & \multicolumn{4}{c|}{**Classical QE**} & \multicolumn{4}{c}{**LLM-based QE**} \\ Dataset & BM25 & Bo1 & Bo2 & KL & Q2D & Q2D/ZS & Q2D/PRF & Q2E & Q2E/ZS & Q2E/PRF & CoT & CoT/PRF \\ \hline arguana & 98.93 & **99.00** & **99.00** & **99.00** & 98.86 & 98.93 & 98.93 & 98.93 & 98.93 & 98.93 & 98.93 & 98.86 \\ climate-fever & 46.60 & 45.69 & 45.38 & 45.65 & 47.62 & 47.66 & **47.94** & 46.08 & 46.44 & 46.44 & 47.42 & 46.81 \\ cqadupstack & 65.55 & **66.82** & 65.67 & 66.70 & 65.51 & 64.19 & 65.01 & 65.69 & 65.71 & 65.90 & 66.39 & 66.12 \\ dpedia & 63.72 & 64.77 & 64.55 & 64.60 & **65.89** & 65.47 & 65.78 & 63.53 & 63.92 & 63.93 & 65.77 & 65.06 \\ fever & 75.73 & 76.28 & 75.83 & 76.32 & **79.06*** & 78.87*** & 77.29 & 75.78 & 75.79 & 76.27 & **78.21*** & 77.25 \\ fiqa & 77.42 & 79.18 & 79.06 & 78.84 & 78.34 & 78.26 & 78.69 & 77.33 & 77.31 & 77.68 & **80.08** & 79.03 \\ hotpotqa & 85.78 & 84.84 & 81.71 & 84.65 & 86.90*** & 85.71 & 87.58*** & 85.60 & 85.54 & 87.25*** & 87.54*** & **88.79*** \\ msmarco & 73.61 & 75.08 & 75.14 & 74.66 & 76.77 & 75.73 & 78.75 & 73.87 & 73.79 & 74.14 & **79.58** & 78.36 \\ nfcporus & 38.70 & 57.30 & 57.67 & 56.46 & 55.34 & **59.81** & 59.68 & 43.38 & 44.12 & 47.06 & 52.63 & 53.32 \\ nq & 78.96 & 81.09 & 80.64 & 80.82 & 85.18*** & 84.71*** & 83.53*** & 79.30 & 79.11 & 80.35 & **85.46*** & **83.11*** \\ quora & 99.26 & 99.20 & 99.12 & 99.20 & 99.00 & 98.84 & 98.92 & 99.92 & 99.26 & 99.19 & 99.21 \\ scidocs & 57.46 & 59.78 & **61.03** & 59.86 & 59.09 & 59.78 & 60.10 & 57.88 & 57.70 & 58.32 & 58.51 & 59.69 \\ scifact & 97.17 & **97.57** & **97.57** & **97.57** & **97.57** & **97.57** & **97.57** & 97.17 & 97.17 & 97.17 & **97.57** & 97.17 \\ touche2020 & 84.96 & 85.94 & **86.38** & 86.01 & 83.61 & 83.44 & 84.54 & 85.21 & 85.02 & 86.04 & 85.51 & 84.58 \\ tree-covid & 42.58 & 45.21 & **45.58** & 45.39 & 43.52 & 38.05 & 44.17 & 43.16 & 43.12 & 43.85 & 43.43 & 44.02 \\ \hline Average & 72.43 & 74.52 & 74.35 & 74.38 & 74.82 & 74.47 & **75.23** & 72.81 & 72.86 & 73.50 & 75.08 & 74.76 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Recall@1K of various prompts on BEIR using Flan-UL2. \({}^{\blacktriangle}\) indicates a statistically significant (paired \(t\)-test, \(p<0.01\)) improvement relative to the best classical QE method. The best result per dataset is highlighted in bold. |
2310.16956 | Datastore Design for Analysis of Police Broadcast Audio at Scale | With policing coming under greater scrutiny in recent years, researchers have
begun to more thoroughly study the effects of contact between police and
minority communities. Despite data archives of hundreds of thousands of
recorded Broadcast Police Communications (BPC) being openly available to the
public, a closer look at a large-scale analysis of the language of policing has
remained largely unexplored. While this research is critical in understanding a
"pre-reflective" notion of policing, the large quantity of data presents
numerous challenges in its organization and analysis.
In this paper, we describe preliminary work towards enabling Speech Emotion
Recognition (SER) in an analysis of the Chicago Police Department's (CPD) BPC
by demonstrating the pipelined creation of a datastore to enable a multimodal
analysis of composed raw audio files. | Ayah Ahmad, Christopher Graziul, Margaret Beale Spencer | 2023-10-25T19:52:19Z | http://arxiv.org/abs/2310.16956v1 | # Datastore Design for Analysis of Police Broadcast Audio at Scale
###### Abstract.
With policing coming under greater scrutiny in recent years, researchers have begun to more thoroughly study the effects of contact between police and minority communities. Despite data archives of hundreds of thousands of recorded Broadcast Police Communications (BPC) being openly available to the public, a closer look at a large-scale analysis of the language of policing has remained largely unexplored. While this research is critical in understanding a "pre-reflective" notion of policing, the large quantity of data presents numerous challenges in its organization and analysis.
In this paper, we describe preliminary work towards enabling Speech Emotion Recognition (SER) in an analysis of the Chicago Police Department's (CPD) BPC by demonstrating the pipelined creation of a datastore to enable a multimodal analysis of composed raw audio files.
temporal data, datastores, audio analysis, speech emotion recognition, feature extraction
### Scale
In expanding the audio into discrete samples, we ended up with approximately 40 million data points. From there, we extracted 26 temporal features, at approximately 183,000 samples per feature--based on the Geneva Minimalistic Acoustic Parameter Set (GeMAPS) (Becker et al., 2010)--using openSMILE (Becker et al., 2010), resulting in approximately 690,000 data points per file. Using Praat-Parselmouth (Becker et al., 2010; Becker et al., 2010) to extract intensity, harmonicity, and pitch for each file resulted in approximately 230,000 data points per feature, per file. Scaling upwards, to include all 160,000 audio files results in approximately 7.2 trillion data points for the raw audio, GeMAPS, and Parselmouth files.
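The sketch below illustrates how the frame-level features described above could be pulled from a single recording with the `opensmile` and `parselmouth` Python packages; the file path and feature-set choices are assumptions for exposition rather than the exact configuration of our pipeline.

```python
import opensmile
import parselmouth

audio_path = "example_broadcast.wav"  # hypothetical path to one BPC recording

# GeMAPS low-level descriptors (frame-level temporal features) via openSMILE.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.GeMAPSv01b,
    feature_level=opensmile.FeatureLevel.LowLevelDescriptors,
)
gemaps_frames = smile.process_file(audio_path)  # pandas DataFrame, one row per frame

# Pitch, intensity, and harmonicity tracks via Praat-Parselmouth.
snd = parselmouth.Sound(audio_path)
pitch = snd.to_pitch().selected_array["frequency"]
intensity = snd.to_intensity().values.flatten()
harmonicity = snd.to_harmonicity().values.flatten()
```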
### Temporality
When extracting different features from individual files, the default sample rate varies from one program to another. Thus, for each audio file, we have both raw audio data for each 22kHz sampling period and features extracted at differing periods of time.
### Silence
Since some files contain silent slices, clustering on data that contains silence could lead to an inherently binary model, explained by one dimension--silence or sound.
## 3. Database Design and Implementation
In searching for a database management system (DBMS) that was scalable, ACID-compliant, extensible, and able to support high levels of concurrency, we decided to use PostgreSQL.
Operating under a 1 TB constraint meant that we could not store our raw and extracted data directly in the database, because this composition of features exceeded 30 TB. This was in addition to the constraint set by the misalignment in temporality. We therefore designed a datastore such that raw and extracted features could simultaneously be accessed and preprocessed as inputs to a CRNN. Each file is stored in a specified location, and the file locations, rather than the files themselves, are stored in the database. Thus, for any feature stored as a column in the database, a script would extract the file locations and feed them into a clustering algorithm that would parse the given files and cluster the data accordingly. Similarly, for the SER model, a script would perform the same parsing of the files and use the parsed data as inputs to the model.
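A minimal sketch of this design is given below, assuming a hypothetical table layout, feature-file format, and clustering step; only file locations are stored in PostgreSQL, and a script resolves them to on-disk feature files before clustering.

```python
import pandas as pd
import psycopg2
from sklearn.cluster import KMeans

# Hypothetical schema: one row per audio file, columns hold *paths* to feature files.
# CREATE TABLE broadcast_features (
#     file_id      TEXT PRIMARY KEY,
#     raw_audio    TEXT,   -- path to raw audio samples
#     gemaps_path  TEXT,   -- path to extracted GeMAPS features
#     pitch_path   TEXT    -- path to extracted Parselmouth pitch track
# );

conn = psycopg2.connect("dbname=bpc user=analyst")  # assumed connection string
with conn.cursor() as cur:
    cur.execute("SELECT file_id, gemaps_path FROM broadcast_features")
    rows = cur.fetchall()

# Resolve the stored locations to on-disk feature files (assumed CSV) and stack them.
frames = [pd.read_csv(path) for _, path in rows]
features = pd.concat(frames, ignore_index=True)

# Feed the parsed feature matrix to a clustering step.
clusters = KMeans(n_clusters=8, n_init=10).fit_predict(features.values)
```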
## 4. Conclusion
In this project, we created a framework that enabled easy interoperability with statistical methods for an unbiased, large-scale analysis of police broadcast audio for SER, allowing us to perform large-scale pre-processing, clustering, and PCA on the dataset.
###### Acknowledgements.
Research reported in this publication was supported by the National Institute On Minority Health And Health Disparities of the National Institutes of Health under Award Number R01MD015064.
|
2307.15648 | Nonabelian partial difference sets constructed using abelian techniques | A $(v,k,\lambda, \mu)$-partial difference set (PDS) is a subset $D$ of a
group $G$ such that $|G| = v$, $|D| = k$, and every nonidentity element $x$ of
$G$ can be written in either $\lambda$ or $\mu$ different ways as a product
$gh^{-1}$, depending on whether or not $x$ is in $D$. Assuming the identity is
not in $D$ and $D$ is inverse-closed, the corresponding Cayley graph ${\rm
Cay}(G,D)$ will be strongly regular. Partial difference sets have been the
subject of significant study, especially in abelian groups, but relatively
little is known about PDSs in nonabelian groups. While many techniques useful
for abelian groups fail to translate to a nonabelian setting, the purpose of
this paper is to show that examples and constructions using abelian groups can
be modified to generate several examples in nonabelian groups. In particular,
in this paper we use such techniques to construct the first known examples of
PDSs in nonabelian groups of order $q^{2m}$, where $q$ is a power of an odd
prime $p$ and $m \ge 2$. The groups constructed can have exponent as small as
$p$ or as large as $p^r$ in a group of order $p^{2r}$. Furthermore, we
construct what we believe are the first known Paley-type PDSs in nonabelian
groups and what we believe are the first examples of Paley-Hadamard difference
sets in nonabelian groups, and, using analogues of product theorems for abelian
groups, we obtain several examples of each. We conclude the paper with several
possible future research directions. | James Davis, John Polhill, Ken Smith, Eric Swartz | 2023-07-28T16:12:40Z | http://arxiv.org/abs/2307.15648v1 | # Nonabelian partial difference sets constructed using abelian techniques
###### Abstract
A \((v,k,\lambda,\mu)\)-partial difference set (PDS) is a subset \(D\) of a group \(G\) such that \(|G|=v\), \(|D|=k\), and every nonidentity element \(x\) of \(G\) can be written in either \(\lambda\) or \(\mu\) different ways as a product \(gh^{-1}\), depending on whether or not \(x\) is in \(D\). Assuming the identity is not in \(D\) and \(D\) is inverse-closed, the corresponding Cayley graph \(\operatorname{Cay}(G,D)\) will be strongly regular. Partial difference sets have been the subject of significant study, especially in abelian groups, but relatively little is known about PDSs in nonabelian groups. While many techniques useful for abelian groups fail to translate to a nonabelian setting, the purpose of this paper is to show that examples and constructions using abelian groups can be modified to generate several examples in nonabelian groups. In particular, in this paper we use such techniques to construct the first known examples of PDSs in nonabelian groups of order \(q^{2m}\), where \(q\) is a power of an odd prime \(p\) and \(m\geqslant 2\). The groups constructed can have exponent as small as \(p\) or as large as \(p^{r}\) in a group of order \(p^{2r}\). Furthermore, we construct what we believe are the first known Paley-type PDSs in nonabelian groups and what we believe are the first examples of Paley-Hadamard difference sets in nonabelian groups, and, using analogues of product theorems for abelian groups, we obtain several examples of each. We conclude the paper with several possible future research directions.
## 1 Introduction
This work focuses on the algebraic structure known as a partial difference set (PDS). A \((v,k,\lambda,\mu)\)-PDS is a subset \(D\) of a group \(G\) such that \(|G|=v\); \(|D|=k\); every nonidentity element of \(D\) can be written as \(d_{1}d_{2}^{-1}\), where \(d_{1},d_{2}\in D\), in \(\lambda\) different ways; and every nonidentity element of \(G-D\) can be written as \(d_{1}d_{2}^{-1}\), where \(d_{1},d_{2}\in D\), in \(\mu\) different ways. These sets have received much attention due to their correspondences with strongly regular graphs, codes, bent functions, and association schemes.
Over the past few decades, numerous constructions of PDSs have been given in many _abelian_ groups (for example, [5, 9, 14, 17, 18, 21, 24]). The methods nearly always include
the use of characters, no doubt because they provide a relatively simple proof. Recent work has shed light on the fact that PDSs can be constructed in nonabelian groups as well, see for instance [2, 7, 8, 25, 30, 31]. We believe that nonabelian groups will provide many interesting examples of PDSs, even though relatively few examples are known in this setting (see [25, Sections 4-5] for a recent survey).
In a previous paper [25], the authors investigated PDSs in nonabelian groups for which there are no abelian PDSs with those parameters. In this situation, both the parameters and the PDS itself are called _genuinely nonabelian_. On the other end of the spectrum, there are examples of PDSs in nonabelian groups that are not genuinely nonabelian, such as in [8]; that is, given a \((v,k,\lambda,\mu)\)-PDS in an abelian group, there also exists a \((v,k,\lambda,\mu)\)-PDS in a nonabelian group. While one of the main themes of [25] is that many tools from the abelian setting simply do not apply to nonabelian groups, the purpose of this paper is to show that several constructions and existence results in abelian groups have direct analogues in nonabelian groups. In fact, in several instances, both the abelian group and the nonabelian group act on the same underlying combinatorial object (in this case, the same strongly regular graph).
The main results of this paper can be summarized as follows.
1. Let \(q\) be a power of an odd prime \(p\) and \(m\geqslant 2\). Then, there exist nonisomorphic, nonabelian groups of order \(q^{2m}\) and exponent \(p\) whose nonidentity elements can be partitioned into PDSs (Theorem 3.3, Remark 3.5, Theorem 3.7, Remark 3.8, Theorem 3.9).
2. Let \(q\) be a power of an odd prime \(p\). There exists a nonabelian group of order \(q^{4}\) and exponent \(p\) that can be partitioned into \(q+3\) PDSs, the union of any which is also a PDS (Theorem 3.12). In particular, this group contains a Paley-type PDS (Corollary 3.13).
3. Let \(t\geqslant 2\) and \(p\) be an odd prime. The group \[\widehat{G_{t}}:=\left\langle x,y:x^{p^{t}}=y^{p^{t}}=1,yxy^{-1}=x^{(p-1)p^{t- 1}+1}\right\rangle\cong\mathbb{Z}_{p^{t}}\rtimes_{(p-1)p^{t-1}+1}\mathbb{Z}_{p ^{t}}\] can be partitioned into \(2p\) PDSs in such a way that the union of any of them is also a PDS. In particular, there is a Paley-type PDS in \(\widehat{G_{t}}\) (Theorem 4.6).
4. If two groups \(G\) and \(G^{\prime}\) of order \(v\) possess Paley-type PDSs, then the group \(G\times G^{\prime}\) also contains a Paley-type PDS (Theorem 5.1). Combined with the results of (2) and (3), this provides infinitely many more examples of Paley-type PDSs in nonabelian groups.
5. If a group \(G\) of order \(v\) contains a Paley-type PDS and the group \(G^{\prime}\) of order \(v\pm 2\) contains a skew Hadamard difference set (DS), then the product group \(G\times G^{\prime}\) contains a Paley-Hadamard DS in the Stanton-Sprott (Twin prime power) family (Theorem 5.3). Combined with the results of (2) and (3), this provides more examples of Paley-Hadamard difference sets in nonabelian groups.
6. In many cases, the group ring calculations needed to prove a product theorem in the abelian case (that is, the existence of PDSs in abelian groups \(G\) and \(G^{\prime}\) imply the
existence of a PDS in \(G\times G^{\prime}\)) do not depend on whether or not the groups are abelian, meaning that they will automatically translate to the nonabelian setting (Lemma 6.1, Theorem 6.2).
We believe the PDSs constructed in this paper are the first infinite families of PDSs in nonabelian groups of order \(q^{d}\), where \(q\) is an odd prime power and \(d>3\). (Partial difference sets of have been constructed in nonabelian groups of order \(q^{3}\), where \(q\) is an odd prime power, in [30] and [25].) Moreover, in this paper we construct what we believe are the first known Paley-type PDSs in nonabelian groups and what we believe are the first examples of Paley-Hadamard DSs in nonabelian groups.
This paper is organized as follows. Section 2 contains preliminary material related to PDSs, association schemes, and quadratic forms. In Section 3, we will use geometric techniques to construct families of PDSs in certain nonabelian groups or order \(q^{2m}\) and exponent \(p\), where \(p\) is an odd prime and \(q\) is a power of \(p\). In Section 4, we will use group ring equations from the abelian world to obtain PDSs in groups of the form \(G=\mathbb{Z}_{p^{r}}\rtimes\mathbb{Z}_{p^{r}}\) that have a large center, \(Z(G)=pG\cong\mathbb{Z}_{p^{r-1}}\times\mathbb{Z}_{p^{r-1}}\). Section 5 highlights new constructions of Paley-type PDSs and how they can be used to construct new difference sets using the twin prime power construction. In Section 6, we show that group ring equations will be forced to hold for various product constructions previously shown to work for abelian groups by characters. In this case, since the input group ring relations for the product are identical for the abelian case as for the nonabelian case we can avoid messy group ring equations. The product will take as input the nonabelian PDSs from Sections 3 and 4 and will generate new PDSs in many nonabelian groups. We conclude in Section 7 with some remarks and a considerable list of open problems.
## 2 Preliminaries
### Partial difference sets
Let \(G\) be a finite group of order \(v\) with a subset \(D\) of order \(k\). Suppose further that the differences \({d_{1}}{d_{2}}^{-1}\) for \(d_{1},d_{2}\in D,d_{1}\neq d_{2}\), represent each nonidentity element of \(G\) precisely \(\lambda\) times. Then, \(D\) is a _\((v,k,\lambda)\)-difference set_ (DS) in \(G\).
Now suppose that \(G\) is a finite group of order \(v\) with a subset \(D\) of order \(k\). Suppose further that the differences \({d_{1}}{d_{2}}^{-1}\) for \(d_{1},d_{2}\in D,d_{1}\neq d_{2}\), represent each nonidentity element of \(D\) exactly \(\lambda\) times and each nonidentity element of \(G-D\) exactly \(\mu\) times. Then, \(D\) is called a _\((v,k,\lambda,\mu)\)-partial difference set_ (PDS) in \(G\). The survey article of Ma is an excellent resource for these sets [16]. Typically, a proper PDS \(D\) for which \(\lambda\neq\mu\) will have the two properties that the identity element from \(G\) is not in \(D\) and that \(x\in D\) implies \(x^{-1}\in D\), and such a PDS is called _regular_. A PDS having parameters \((n^{2},r(n-1),n+r^{2}-3r,r^{2}-r)\) for some natural number \(r\) is called a _Latin square type PDS_. Similarly, a PDS having parameters \((n^{2},r(n+1),-n+r^{2}+3r,r^{2}+r)\) is called a _negative Latin square type PDS_. Assuming the PDS is regular, the Cayley graph for a \((v,k,\lambda,\mu)\)-PDS will always be a _strongly regular graph_ with the same parameters; that is, the corresponding Cayley graph has \(v\) vertices, every vertex has \(k\) neighbors, adjacent vertices have \(\lambda\) common neighbors, and nonadjacent vertices have \(\mu\) common neighbors.
The earliest examples of PDSs date back to Paley [20], though his work long predates the systematic study of PDSs. Paley showed that the nonzero squares in \(\mathbb{F}_{q}\) will be a \((q,\frac{q-1}{2},\frac{q-5}{4},\frac{q-1}{4})\)-PDS in the additive group when \(q\) is a prime power and \(q\equiv 1\pmod{4}\). We will call these _Paley partial difference sets_, and more generally \((v,\frac{v-1}{2},\frac{v-5}{4},\frac{v-1}{4})\)-PDSs will be _Paley-type partial difference sets_. This family of PDSs has received much attention in abelian groups. Davis was the first to construct Paley-type PDSs in groups that are not elementary abelian [5], work that was subsequently generalized in [15] and [27]. Polhill found examples where \(v\) was not a prime power in [23].
Paley [20] showed in the case when \(q\) is a prime power and \(q\equiv 3\pmod{4}\) that the set of nonzero squares will instead be a \((q,\frac{q-1}{2},\frac{q-3}{4})\)-difference set, now called a _Paley-Hadamard difference set_. Stanton and Sprott found new examples of Paley-Hadamard difference sets [29] which are known as _Twin prime power difference sets_ in the additive group of \(\mathbb{F}_{q}\times\mathbb{F}_{q+2}\), when \(q\) and \(q+2\) are both prime powers.
Partial difference sets are often studied within the context of the particular group ring \(\mathbb{Z}[G]\), whether the group \(G\) is abelian or not. For a subset \(D\) of a group \(G\), we abuse notation slightly and write \(D:=\sum_{d\in D}d\) and \(D^{(-1)}:=\sum_{d\in D}d^{-1}\). The following equation will then hold for a regular \((v,k,\lambda,\mu)\)-partial difference set \(D\) in the group \(G\) with identity \(1_{G}\):
\[DD^{(-1)}=DD=\lambda D+\mu(G-D-1_{G})+k1_{G}=(\lambda-\mu)D+\mu G+(k-\mu)1_{G}.\]
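As a small sanity check of these definitions, the following brute-force sketch verifies that the nonzero squares modulo 13 form a \((13,6,2,3)\)-PDS in \(\mathbb{Z}_{13}\), i.e., the Paley construction above with \(q=13\).

```python
from itertools import product

q = 13
D = {pow(x, 2, q) for x in range(1, q)}             # nonzero squares mod 13
assert len(D) == (q - 1) // 2                       # k = 6

# Count, for every nonidentity g, the ways g = d1 - d2 with d1, d2 distinct in D.
counts = {g: 0 for g in range(1, q)}
for d1, d2 in product(D, repeat=2):
    if d1 != d2:
        counts[(d1 - d2) % q] += 1

assert {counts[g] for g in D} == {(q - 5) // 4}                           # lambda = 2
assert {counts[g] for g in range(1, q) if g not in D} == {(q - 1) // 4}   # mu = 3
```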
### Association schemes
When studying PDSs, and in particular those with (negative) Latin square type parameters, one often has a partition of the nonidentity elements into multiple PDSs. As such, they form a multi-class association scheme, and so it will be helpful to consider these mathematical structures.
Let \(\mathcal{X}\) be a finite set. An _association scheme_ with \(t\) classes on \(\mathcal{X}\) is a partition of \(\mathcal{X}\times\mathcal{X}\) into sets \(R_{0}\), \(R_{1},\ldots,R_{t}\) (relations, or associate classes) such that
1. \(R_{0}=\{(x,x):x\in\mathcal{X}\}\) (the diagonal relation);
2. for each \(\ell\), \(R_{\ell}^{t}=\{(y,x):(x,y)\in R_{\ell}\}=R_{\ell^{\prime}}\) for some \(\ell^{\prime}\);
3. for all \(i,j,k\) in \(\{0,1,2,\ldots,t\}\) there is an integer \(p_{ij}^{k}\) such that, for all \((x,y)\in R_{k}\), \[|\{z\in\mathcal{X}:(x,z)\in R_{i}\;\text{and}\;(z,y)\in R_{j}\}|=p_{ij}^{k}.\]
When \(p_{ij}^{k}=p_{ji}^{k}\) for all \(k,i,j\) then the association scheme is called _commutative_. If \(\ell=\ell^{\prime}\) for all \(\ell\), then the association scheme is said to be _symmetric_; otherwise, it is _nonsymmetric_.
Each of the relations \(R_{l}\) can be interpreted as a directed graph with vertex set \(\mathcal{X}\) and edge set \(R_{l}\), \(\Gamma_{l}=(\mathcal{X},R_{l})\) for all \(l\). An association scheme can be viewed as a decomposition of the complete directed graph with vertex set \(\mathcal{X}\) into directed graphs \(\Gamma_{l}\) with the property that for \(i,j,k\in\{1,2,\cdots d\}\) and for \(xy\in E(\Gamma_{k})\),
\[|\{z\in X:xz\in E(\Gamma_{i})\text{ and }zy\in E(\Gamma_{j})\}|=p_{ij}^{k},\]
where \(E(\Gamma_{i})\) is the edge set of the graph \(\Gamma_{i}\). The graphs \(\Gamma_{i}\) are called the _graphs_ of the association scheme. Likewise, a symmetric association scheme can be viewed as a decomposition of the complete graph on vertex set \(\mathcal{X}\) into undirected graphs. A strongly regular graph \(\Gamma\) corresponds to a symmetric association scheme with two classes, where \(R_{1}=\{(x,y):xy\in E(\Gamma)\}\) and \(R_{2}=\{(x,y):x\neq y\text{ and }(x,y)\notin R_{1}\}\).
For an association scheme, we can interpret the relations as adjacency matrices for the graphs, i.e., \(\{0,1\}\)-matrices indexed by the vertex set \(\mathcal{X}\) such that for the matrix \(A_{i}\) there is a \(1\) in position \((x,y)\) exactly when \((x,y)\in R_{i}\). Then, we have:
1. \(A_{0}=I\);
2. \(A_{0}+A_{1}+...+A_{d}=J,\) the matrix with all \(1\)'s;
3. for each \(i\) there is some \(i^{\prime}\) with \({A_{i}}^{T}=A_{i^{\prime}}\);
4. \(A_{i}A_{j}=\sum_{k}p_{ij}^{k}A_{k}\).
This collection forms what is known as the _Bose-Mesner algebra_, and what is key for this article is that for a commutative association scheme it will necessarily follow that the Bose-Mesner algebra is commutative, so that the graph adjacency matrices satisfy \(A_{i}A_{j}=A_{j}A_{i}\) for all \(i,j\).
Given an association scheme, we can take unions of classes to produce graphs with larger edge sets, with such unions termed _fusions_. Fusions are not always association schemes in general, but when a particular association scheme has the property that any of its fusions also forms an association scheme we call the scheme _amorphic_. For an excellent introduction to amorphic association schemes, see [32].
Partial difference sets give rise to strongly regular Cayley graphs. When we partition the nonidentity elements of a group into partial difference sets, we also have a partitioning of the complete graph with vertex set the group elements into strongly regular Cayley graphs.
Now we are ready to consider what will be an essential ingredient for many of the constructions in this article, a powerful result of van Dam:
**Theorem 2.1**.: _[_33_, Theorem 3]_ _Let \(\{\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d}\}\) be an edge-decomposition of the complete graph on a set \(X\), where each \(\Gamma_{i}\) is strongly regular. If the \(\Gamma_{i}\) are all of Latin square type or all of negative Latin square type, then the decomposition is a \(d\)-class amorphic association scheme on \(X\)._
We interpret the implications of this result into the context of PDSs to form the following, which we will use throughout the paper.
**Corollary 2.2**.: _Suppose the nonidentity elements of a group \(G\) can be partitioned into a collection of PDSs all of Latin square type or all of negative Latin square type, \(\{P_{1},P_{2},\ldots,P_{n}\}\). Then, a union of any number of these PDSs is also a PDS of that same type. Moreover, \(P_{i}P_{j}=P_{j}P_{i}\) in the group ring \(\mathbb{Z}[G]\)._
Proof.: Such a collection of PDSs corresponds to a strongly regular Cayley graph decomposition of the complete graph on \(|G|\) points, which will be amorphic by Theorem 2.1. As such,
any fusion of the graph is a strongly regular graph of the same type as the PDSs \(P_{i}\) and therefore any union of PDSs \(\bigcup_{i}P_{i}\) corresponds to another PDS of that type. Amorphic association schemes are commutative, and it follows that the graph adjacency matrices commute and therefore so do the group ring equations for the PDSs: i.e., \(P_{i}P_{j}=P_{j}P_{i}\).
We remark that such a partition of the nonidentity elements of a group \(G\) is called a _Cayley (association) scheme_. Cayley schemes are equivalent to _Schur rings_[13], and amorphic association schemes of (negative) Latin square type were previously used in [8] to construct examples of PDSs in nonabelian 2-groups.
### Quadratic forms
Quadratic forms have been used for constructing PDSs of both Latin square type and negative Latin square type (see [16]). Let \(q\) be a power of a prime. We denote the field with \(q\) elements by \(\mathbb{F}_{q}\). A _quadratic form_\(Q\) on a \(d\)-dimensional vector space \(\mathbb{F}_{q}^{d}\) over \(\mathbb{F}_{q}\) is a function \(Q:\mathbb{F}_{q}^{d}\to\mathbb{F}_{q}\) such that:
1. \(Q(\alpha x)=\alpha^{2}Q(x)\) for all \(\alpha\in\mathbb{F}_{q}\) and all \(x\in\mathbb{F}_{q}^{d}\), and
2. the function \(\beta:\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}\to\mathbb{F}_{q}\) given by \(\beta(x,y)=Q(x+y)-Q(x)-Q(y)\) is \(\mathbb{F}_{q}\)-bilinear.
A quadratic form \(Q\) is said to be _nondegenerate_ if \(\beta(x,y)=0\) for all \(y\in\mathbb{F}_{q}^{d}\) implies \(x=0\). We have the following well-known result.
**Theorem 2.3**.: _[_1_, Theorem 3.28]_ _Let \(Q\) be a nondegenerate quadratic form on \(V=\mathbb{F}_{q}^{2m}\). There exists a basis for \(V\) such that exactly one of the following holds for all \(x=(x_{1},\ldots,x_{2m})\in V\):_
1. \(Q(x)=x_{1}x_{2}+x_{3}x_{4}+\cdots+x_{2m-1}x_{2m}\)_, or_
2. \(Q(x)=x_{1}x_{2}+x_{3}x_{4}+\cdots+x_{2m-3}x_{2m-2}+x_{2m-1}^{2}+bx_{2m}^{2},\) _where_ \(-b\) _is a nonsquare in_ \(\mathbb{F}_{q}\)_._
If (i) of Theorem 2.3 holds, then we say \(Q\) is _hyperbolic_ and has type \(\varepsilon=+1\) (often denoted simply with "\(+\)" when used as a superscript), and, if (ii) of Theorem 2.3 holds, then we say \(Q\) is _elliptic_ and has type \(\varepsilon=-1\) (often denoted simply with "\(-\)").
## 3 Nonabelian PDS families related to affine polar graphs
Let \(q\) be an odd prime power and \(m\geqslant 2\). Let \(V=\mathbb{F}_{q}^{2m}\) be equipped with a nondegenerate quadratic form \(Q\) of type \(\varepsilon=\pm 1\). In particular, by Theorem 2.3, if \(x=(x_{1},\ldots,x_{2m})\in V\), we will assume
\[Q(x)=x_{1}x_{2}+x_{3}x_{4}+\cdots+x_{2m-1}x_{2m}\]
if \(\varepsilon=1\), and we will assume
\[Q(x)=x_{1}x_{2}+x_{3}x_{4}+\cdots+x_{2m-3}x_{2m-2}+x_{2m-1}^{2}+bx_{2m}^{2},\]
where \(-b\) is a nonsquare in \(\mathbb{F}_{q}\), if \(\varepsilon=-1\). Note that there is a nondegenerate symmetric bilinear form \(\beta\) associated with \(Q\).
The graphs \(\mathrm{VO}^{\varepsilon}(2m,q)\) are defined by taking the vectors in \(V\) to be vertices, with distinct vectors \(u,v\in V\) adjacent if \(Q(v-u)=0\). As noted in [4, Section 3.3.1], \(\mathrm{VO}^{\varepsilon}(2m,q)\) is a strongly regular graph with
\[v =q^{2m},\] \[k =(q^{m}-\varepsilon)(q^{m-1}+\varepsilon),\] \[\lambda =q(q^{m-1}-\varepsilon)(q^{m-2}+\varepsilon)+q-2,\] \[\mu =q^{m-1}(q^{m-1}+\varepsilon).\]
The graphs \(\mathrm{VNO}^{\varepsilon}(2m,q)\) are defined by taking the vectors in \(V\) to be vertices, with distinct vectors \(u,v\in V\) adjacent when \(Q(u-v)\) is a nonzero square in \(\mathbb{F}_{q}\). As noted in [4, Section 3.3.2], the graph \(\mathrm{VNO}^{\varepsilon}(2m,q)\) is a strongly regular graph with
\[v =q^{2m},\] \[k =\frac{1}{2}(q-1)(q^{m}-\varepsilon)q^{m-1},\] \[\lambda =\frac{1}{4}q^{m-1}(q-1)(q^{m}-q^{m-1}-2\varepsilon)+\varepsilon q ^{m-1},\] \[\mu =\frac{1}{4}q^{m-1}(q-1)(q^{m}-q^{m-1}-2\varepsilon).\]
Finally, we note that the complement to \(\mathrm{VO}^{\varepsilon}(2m,q)\cup\mathrm{VNO}^{\varepsilon}(2m,q)\) in the complete graph on \(V\) will itself be a strongly regular graph isomorphic to \(\mathrm{VNO}^{\varepsilon}(2m,q)\). To see this, note that this new graph has adjacency defined by \(u\sim v\) when \(Q(v-u)\) is a nonsquare in \(\mathbb{F}_{q}\). If \(a\) is a nonsquare in \(\mathbb{F}_{q}\), the map \(\phi:v\mapsto av\) interchanges nonsquares with nonzero squares, and \(Q(v^{\phi}-u^{\phi})=Q(av-au)=a^{2}Q(v-u)\) is still a nonzero square if and only if \(Q(v-u)\) is, meaning \(\phi\) is an isomorphism between \(\mathrm{VNO}^{\varepsilon}(2m,q)\) and this new complement graph, which we will denote by \(\mathrm{VNO}^{\varepsilon}_{2}(2m,q)\). Therefore, the complete graph on \(V\) can be partitioned into \(\mathrm{VO}^{\varepsilon}(2m,q)\), \(\mathrm{VNO}^{\varepsilon}(2m,q)\), and \(\mathrm{VNO}^{\varepsilon}_{2}(2m,q)\). We remark that when \(\varepsilon=+1\) the graphs are of Latin square type, and when \(\varepsilon=-1\) the graphs are of negative Latin square type.
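These parameter formulas can be confirmed exhaustively in the smallest nontrivial case. The sketch below is our own illustration (not part of the original argument): it builds \(\mathrm{VO}^{+}(4,3)\) and \(\mathrm{VNO}^{+}(4,3)\) on \(\mathbb{F}_{3}^{4}\) and recomputes \((v,k,\lambda,\mu)\) by counting common neighbours.

```python
from itertools import product

q = 3
V = list(product(range(q), repeat=4))                 # F_3^4
Q = lambda x: (x[0] * x[1] + x[2] * x[3]) % q         # hyperbolic form (epsilon = +1)
squares = {(a * a) % q for a in range(1, q)}          # nonzero squares of F_3, i.e. {1}
diff = lambda u, w: tuple((a - b) % q for a, b in zip(u, w))

def srg_parameters(adjacent):
    # (v, k, lambda, mu) of the graph on V with u ~ w iff adjacent(w - u)
    nbr = {u: {w for w in V if w != u and adjacent(diff(w, u))} for u in V}
    k = {len(nbr[u]) for u in V}
    lam = {len(nbr[u] & nbr[w]) for u in V for w in nbr[u]}
    mu = {len(nbr[u] & nbr[w]) for u in V for w in V if w != u and w not in nbr[u]}
    assert len(k) == len(lam) == len(mu) == 1, "not strongly regular"
    return len(V), k.pop(), lam.pop(), mu.pop()

print(srg_parameters(lambda x: Q(x) == 0))            # VO^+(4,3):  expect (81, 32, 13, 12)
print(srg_parameters(lambda x: Q(x) in squares))      # VNO^+(4,3): expect (81, 24, 9, 6)
```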
### Automorphisms of affine polar graphs
We represent the elements of the affine general linear group \(\mathrm{AGL}(2m,q)\) in the form \([M,u]\), where \(M\in\mathrm{GL}(2m,q)\) and \(u\in V\), where for all (row vectors) \(v\in V,\)
\[v^{[M,u]}:=vM+u,\]
and multiplication in \(\mathrm{AGL}(2m,q)\) is defined by
\[[M_{1},v_{1}][M_{2},v_{2}]=[M_{1}M_{2},v_{1}M_{2}+v_{2}].\]
The special orthogonal group \(\mathrm{SO}^{\varepsilon}(2m,q)\) is the set of all determinant \(1\) matrices in \(\mathrm{GL}(2m,q)\) preserving the bilinear form \(\beta\) (and quadratic form \(Q\)), and, given a subspace \(U\) of \(V\), denote the subgroup of translations of \(V\) by vectors in \(U\) by \(T_{U}\), i.e.,
\[T_{U}:=\{[I,u]:u\in U\},\]
where \(I\) is the \(2m\times 2m\) identity matrix. Hence,
\[\operatorname{ASO}^{\varepsilon}(2m,q):=\{[M,v]:M\in\operatorname{SO}^{ \varepsilon}(2m,q),v\in V\}\cong T_{V}:\operatorname{SO}^{\varepsilon}(2m,q),\]
where \(M\in\operatorname{SO}^{\varepsilon}(2m,q)\) is identified naturally with \([M,0]\in\operatorname{AGL}(2m,q)\) and the "\(:\)" denotes a semidirect product.
**Lemma 3.1**.: _The group \(\operatorname{ASO}^{\varepsilon}(2m,q)\) is a subgroup of automorphisms of each of \(\operatorname{VO}^{\varepsilon}(2m,q)\), \(\operatorname{VNO}^{\varepsilon}(2m,q)\), and \(\operatorname{VNO}^{\varepsilon}_{2}(2m,q)\)._
Proof.: Let \([M,w]\in\operatorname{ASO}^{\varepsilon}(2m,q)\) and \(u,v\in V\). Then,
\[Q(v^{[M,w]}-u^{[M,w]})=Q((vM+w)-(uM+w))=Q((v-u)M)=Q(v-u),\]
and so \([M,w]\) preserves adjacency in all three graphs.
We remark that \(T_{V}\subseteq\operatorname{ASO}^{\varepsilon}(2m,q)\) is an elementary abelian regular subgroup of automorphisms of each of \(\operatorname{VO}^{\varepsilon}(2m,q)\), \(\operatorname{VNO}^{\varepsilon}(2m,q)\), and \(\operatorname{VNO}^{\varepsilon}_{2}(2m,q)\), and so the corresponding decomposition of the nonidentity elements of \(T_{V}\) into PDSs is an amorphic Cayley scheme. In the following subsections, we will find "twists" of the group \(T_{V}\) in \(\operatorname{ASO}^{\varepsilon}(2m,q)\) - roughly speaking, replacing certain translations \([I,v]\) by elements of the form \([M,v]\), where \(M\neq I\) - to provide new examples of PDSs in nonabelian groups.
### A family of PDSs in nonabelian groups of order \(q^{2m}\)
Fix \(\varepsilon=\pm 1\). Let \(v\) be a nonsingular vector in \(V\), i.e., \(Q(v)\neq 0\), so \(\langle v\rangle\) is a nonsingular subspace. The stabilizer of \(\langle v\rangle\) in \(\operatorname{SO}^{\varepsilon}(2m,q)\) contains an elementary abelian group \(H\) of order \(q\); see, e.g., [3, Sections 2.2.1, 8.2], [12, Tables 3.5 E, F and Proposition 4.1.6], or [34, Section 3.7.4]. Note that \(vM=v\) for all \(M\in H\): since \(M\) stabilizes \(\langle v\rangle\) and preserves \(Q\), we have \(vM=\pm v\), and by the Orbit-Stabilizer Theorem the orbit of \(v\) under \(H\) has size dividing \(|H|=q\), which is odd, forcing \(vM=v\).
Since \(v\) is nonsingular, we have \(V=\langle v\rangle\oplus v^{\perp}\), where
\[v^{\perp}:=\{u\in V:\beta(u,v)=0\};\]
to see this, note that the map \(\beta(-,v):V\to\mathbb{F}_{q}\) is a linear transformation with kernel \(v^{\perp}\). Since \(vM=v\) for all \(M\in H\) and \(M\) preserves \(\beta\), we have \(v^{\perp}M=v^{\perp}\), i.e., \(v^{\perp}\) is an \(H\)-invariant subspace.
**Remark 3.2**.: Choosing a nonsingular vector \(v\) is not strictly necessary for this construction: as long as the elementary abelian group \(H\) stabilizes a decomposition \(V=\langle v\rangle\oplus U\) for some complementary subspace \(U\) to \(\langle v\rangle\), the construction will work.
Since \(H\) is an elementary abelian group of order \(q\), \(H\) is naturally isomorphic to \((\mathbb{F}_{q},+)\). For each \(\alpha\in\mathbb{F}_{q}\), we will denote by \(A_{\alpha}\) the corresponding element of \(H\) under this natural isomorphism. Define
\[\mathcal{A}:=\{[A_{\alpha},\alpha v]:\alpha\in\mathbb{F}_{q}\}.\]
Since \(v\) is fixed by right multiplication by elements of \(H\), \(\mathcal{A}\) is an elementary abelian group of order \(q\) that is itself naturally identified with \((\mathbb{F}_{q},+)\).
Recall that \(T_{v^{\perp}}\) is the set of elements of the form \([I,u]\) for \(u\in v^{\perp}.\) Then, for \(u\in v^{\perp}\), we have
\[[A_{\alpha},\alpha v]^{-1}[I,u][A_{\alpha},\alpha v]=[A_{\alpha}^{-1},-\alpha v][I,u][A_{\alpha},\alpha v]=[I,uA_{\alpha}]\in T_{v^{\perp}},\]
so \(\mathcal{A}\) normalizes \(T_{v^{\perp}}\).
Define
\[G_{1}^{\varepsilon}:=\langle T_{v^{\perp}},\mathcal{A}\rangle=T_{v^{\perp}}: \mathcal{A}.\]
**Theorem 3.3**.: _The group \(G_{1}^{\varepsilon}\) is a nonabelian group of order \(q^{2m}\) in which the nonidentity elements can be partitioned into \(D_{0}\cup D_{1}\cup D_{2}\), where each \(D_{i}\) is a PDS, \(\mathrm{Cay}(G_{1}^{\varepsilon},D_{0})\cong\mathrm{VO}^{\varepsilon}(2m,q)\), and \(\mathrm{Cay}(G_{1}^{\varepsilon},D_{1})\cong\mathrm{Cay}(G_{1}^{\varepsilon},D_{2})\cong\mathrm{VNO}^{\varepsilon}(2m,q)\)._
Proof.: First, we have \(|G_{1}^{\varepsilon}|=q^{2m}\) since \(G_{1}^{\varepsilon}=T_{v^{\perp}}:\mathcal{A}\), \(|T_{v^{\perp}}|=|v^{\perp}|=q^{2m-1}\), and \(|\mathcal{A}|=q\). Moreover, since \(H\) acts faithfully on \(V\) and fixes \(\langle v\rangle\) pointwise, there exist \(u\in v^{\perp}\) and \(A\in H\) such that \(uA\neq u\). Since \(A\in H\), there is a unique \(w\in\langle v\rangle\) such that \([A,w]\in\mathcal{A}\). Thus,
\[[I,u][A,w]=[A,uA+w]\neq[A,u+w]=[A,w][I,u],\]
and hence \(G_{1}^{\varepsilon}\) is nonabelian.
Let \(x\in V\). Then, we may write \(x=w+u\), where \(w\in\langle v\rangle\) and \(u\in v^{\perp}\). There is a unique \(A\in H\) such that \([A,w]\in\mathcal{A}\), and so \([A,x]=[A,w+u]=[A,w][I,u]\) is an element of \(G_{1}^{\varepsilon}\) such that \(0^{[A,x]}=x\), and hence \(G_{1}^{\varepsilon}\) is transitive on \(V\). Since \(|G_{1}^{\varepsilon}|=|V|\), in fact \(G_{1}^{\varepsilon}\) acts regularly on \(V\).
Finally, since \(A\in\mathrm{SO}^{\varepsilon}(2m,q)\) for all \([A,x]\in G_{1}^{\varepsilon}\), \(G_{1}^{\varepsilon}\leqslant\mathrm{ASO}^{\varepsilon}(2m,q)\), and, by Lemma 3.1, \(G_{1}^{\varepsilon}\) is a subgroup of automorphisms of \(\mathrm{VO}^{\varepsilon}(2m,q)\), \(\mathrm{VNO}^{\varepsilon}(2m,q)\), and \(\mathrm{VNO}_{2}^{\varepsilon}(2m,q)\). The result follows.
**Example 3.4**.: We can construct a concrete example for each \(\varepsilon\), \(q\), and \(m\). When \(\varepsilon=1\), for \(\alpha\in\mathbb{F}_{q}\) we define
\[C_{\alpha}:=\begin{pmatrix}1&0&0&\alpha\\ 0&1&0&-\alpha\\ \alpha&-\alpha&1&\alpha^{2}\\ 0&0&0&1\end{pmatrix},\]
and when \(\varepsilon=-1\), for \(\alpha\in\mathbb{F}_{q}\) we define
\[C_{\alpha}:=\begin{pmatrix}1&-\alpha^{2}&\alpha&0\\ 0&1&0&0\\ 0&-2\alpha&1&0\\ 0&0&0&1\end{pmatrix}.\]
Then, we may choose
\[A_{\alpha}:=\left(\begin{array}{c|c}I_{2m-4}&0\\ \hline 0&C_{\alpha}\end{array}\right).\]
Let \(\{e_{i}:1\leqslant i\leqslant 2m\}\) be the standard basis for \(V\). Then, we may choose \(v=e_{1}+e_{2}\) if \(\varepsilon=1\) and \(v=e_{2}\) if \(\varepsilon=-1\).
As another example, if \(m>2\), for \(\alpha\in\mathbb{F}_{q}\), if
\[B_{\alpha}:=\begin{pmatrix}1&0&0&0\\ 0&1&-\alpha&0\\ 0&0&1&0\\ \alpha&0&0&1\end{pmatrix},\]
then we may choose
\[A_{\alpha}:=\left(\begin{array}{c|c}B_{\alpha}&0\\ \hline 0&I_{2m-4}\end{array}\right)\]
with \(v=e_{5}+e_{6}\) (regardless of the value of \(\varepsilon\)). Thus, when \(m>2\), we may actually assume \(G_{1}^{+}=G_{1}^{-}\).
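As a machine sanity check (ours, not the authors'), one can verify that the matrices \(C_{\alpha}\) really do preserve the corresponding quadratic forms; the following Python sketch does this exhaustively for \(p=5\) and \(2m=4\), with the elliptic parameter \(b\) chosen so that \(-b\) is a nonsquare.

```python
from itertools import product

p = 5
Qhyp = lambda x: (x[0] * x[1] + x[2] * x[3]) % p
# choose b with -b a nonsquare in F_p (Euler's criterion)
b = next(b for b in range(1, p) if pow((-b) % p, (p - 1) // 2, p) == p - 1)
Qell = lambda x: (x[0] * x[1] + x[2] * x[2] + b * x[3] * x[3]) % p

def apply(x, M):  # row vector times matrix over F_p
    return tuple(sum(x[i] * M[i][j] for i in range(4)) % p for j in range(4))

def C_hyp(a):     # the hyperbolic-case matrix C_alpha from Example 3.4
    return [[1, 0, 0, a], [0, 1, 0, -a], [a, -a, 1, a * a], [0, 0, 0, 1]]

def C_ell(a):     # the elliptic-case matrix C_alpha from Example 3.4
    return [[1, -a * a, a, 0], [0, 1, 0, 0], [0, -2 * a, 1, 0], [0, 0, 0, 1]]

for a in range(p):
    for x in product(range(p), repeat=4):
        assert Qhyp(apply(x, C_hyp(a))) == Qhyp(x)
        assert Qell(apply(x, C_ell(a))) == Qell(x)
print("C_alpha preserves the hyperbolic and elliptic forms for p =", p)
```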
**Remark 3.5**.: Every element of \(G_{1}^{\varepsilon}\) can be expressed uniquely as \([A_{\alpha},\alpha v+u]\), where \(\alpha\in\mathbb{F}_{q}\) and \(u\in v^{\perp}\). Since
\[[A_{\alpha},\alpha v+u]^{p}=\left[A_{\alpha}^{p},p\alpha v+u\sum_{i=0}^{p-1}A_ {\alpha}^{i}\right]=\left[I,u(A_{\alpha}-I)^{p-1}\right],\]
the choices of \(A_{\alpha}\) from Example 3.4 show that, when \(p>3\) or \(m>2\), we can choose \(G_{1}^{\varepsilon}\) to have exponent \(p\). That such groups can be chosen to have exponent \(3\) when \(p=3\) and \(m=2\) follows from direct inspection with GAP [10].
### A second family of PDSs in nonabelian groups of order \(q^{2m}\)
The second family of PDSs requires a bit more care. We will assume for this construction that either \(m>2\) or, if \(m=2\), \(\varepsilon=1\). As in the last example in Example 3.4, for \(\alpha\in\mathbb{F}_{q}\), if
\[B_{\alpha}:=\begin{pmatrix}1&0&0&0\\ 0&1&-\alpha&0\\ 0&0&1&0\\ \alpha&0&0&1\end{pmatrix},\]
we then define
\[A_{\alpha}:=\left(\begin{array}{c|c}B_{\alpha}&0\\ \hline 0&I_{2m-4}\end{array}\right).\]
(Again, we allow \(m=2\) as long as the quadratic form \(Q\) is hyperbolic.) In this case, \(H:=\{A_{\alpha}:\alpha\in\mathbb{F}_{q}\}\) is an elementary abelian group of order \(q\) preserving the form \(Q\). Define
\[U:=\left\langle e_{1},e_{4},\ldots,e_{2m}\right\rangle.\]
Direct calculation shows that \(U\) is an \(H\)-invariant subspace of \(V\).
Define
\[\mathcal{B}:=\left\langle[A_{\alpha},\alpha e_{2}+\beta e_{3}]:\alpha,\beta \in\mathbb{F}_{q}\right\rangle.\]
**Lemma 3.6**.: _The group \(\mathcal{B}\) is an elementary abelian group of order \(q^{2}\). In particular, for each \(w\in\left\langle e_{2},e_{3}\right\rangle\), there is a unique element \([A,x]\in\mathcal{B}\) with \(x=w\)._
Proof.: Noting that \(H\) fixes \(e_{3}\), a direct calculation shows that, for all \(\alpha,\beta,\gamma,\delta\in\mathbb{F}_{q}\), we have
\[[A_{\alpha},\alpha e_{2}+\beta e_{3}][A_{\gamma},\gamma e_{2}+\delta e_{3}]=[A_{ \gamma},\gamma e_{2}+\delta e_{3}][A_{\alpha},\alpha e_{2}+\beta e_{3}]=[A_{ \alpha+\gamma},(\alpha+\gamma)e_{2}+(\beta+\delta-\alpha\gamma)e_{3}].\]
The result follows.
Recalling that \(U\) is \(H\)-invariant, for any \(u\in U\), we have
\[[A_{\alpha},\alpha e_{2}+\beta e_{3}]^{-1}[I,u][A_{\alpha},\alpha e _{2}+\beta e_{3}] =[A_{\alpha}^{-1},-\alpha e_{2}-(\alpha^{2}+\beta)e_{3}][I,u][A_{ \alpha},\alpha e_{2}+\beta e_{3}]\] \[=[I,uA_{\alpha}]\in T_{U},\]
so \(\mathcal{B}\) normalizes \(T_{U}\).
Define
\[G_{2}:=\langle T_{U},\mathcal{B}\rangle=T_{U}:\mathcal{B}.\]
**Theorem 3.7**.: _Let \(m>2\) or, if \(m=2\), then \(\varepsilon=1\). The group \(G_{2}\) is a nonabelian group of order \(q^{2m}\) in which the nonidentity elements can be partitioned into \(D_{0}\cup D_{1}\cup D_{2}\), where each \(D_{i}\) is a PDS, \(\operatorname{Cay}(G_{2},D_{0})\cong\operatorname{VO}^{\varepsilon}(2m,q)\), and \(\operatorname{Cay}(G_{2},D_{1})\cong\operatorname{Cay}(G_{2},D_{2})\cong \operatorname{VNO}^{\varepsilon}(2m,q)\)._
Proof.: The proof is largely the same as that of Theorem 3.3. First, we have \(|G_{2}|=q^{2m}\) since \(G_{2}=T_{U}:\mathcal{B}\), \(|T_{U}|=|U|=q^{2m-2}\), and \(|\mathcal{B}|=q^{2}\). Moreover, not all vectors in \(U\) are fixed by \(H\); for example, \(e_{4}A_{1}=e_{1}+e_{4}\), and so
\[[I,e_{4}][A_{1},e_{2}]=[A_{1},e_{1}+e_{2}+e_{4}]\neq[A_{1},e_{2}+e_{4}]=[A_{1},e_{2}][I,e_{4}],\]
and hence \(G_{2}\) is nonabelian.
Let \(x\in V\). Then, we may write \(x=w+u\), where \(w\in\langle e_{2},e_{3}\rangle\) and \(u\in U\). There is a unique \(A\in H\) such that \([A,w]\in\mathcal{B}\), and so \([A,x]=[A,w+u]=[A,w][I,u]\) is an element of \(G_{2}\) such that \(0^{[A,x]}=x\), and hence \(G_{2}\) is transitive on \(V\). Since \(|G_{2}|=|V|\), in fact \(G_{2}\) acts regularly on \(V\).
Finally, since \(A\in\operatorname{SO}^{\varepsilon}(2m,q)\) for all \([A,x]\in G_{2}\), \(G_{2}\leqslant\operatorname{ASO}^{\varepsilon}(2m,q)\), and, by Lemma 3.1, \(G_{2}\) is a subgroup of automorphisms of \(\operatorname{VO}^{\varepsilon}(2m,q)\), \(\operatorname{VNO}^{\varepsilon}(2m,q)\), and \(\operatorname{VNO}^{\varepsilon}_{2}(2m,q)\). The result follows.
**Remark 3.8**.: A similar calculation to that done in Remark 3.5 shows that \(G_{2}\) has exponent \(p\).
**Theorem 3.9**.: _Let \(m>2\) or, if \(m=2\), then \(\varepsilon=+1\). The groups \(G_{1}^{\varepsilon}\) and \(G_{2}\) are not isomorphic._
Proof.: If \(m>2\), we can choose \(H\) to be the same group in each case. If \(W\) is the subspace of points fixed by \(H\), then
\[W=\langle e_{1},e_{3},e_{5},\ldots,e_{2m}\rangle\,,\]
which has dimension \(2m-2\). In each case, the central elements are of the form \([I,w]\), where \(w\in W\). Since \(v=e_{5}+e_{6}\in W\), \(|Z(G_{1}^{\varepsilon})|=q^{2m-3}\). On the other hand, \(|Z(G_{2})|=|W|=q^{2m-2}\), which proves the claim for \(m>2\)
The proof is similar when \(m=2\) and \(\varepsilon=1\): when \(A_{\alpha}=C_{\alpha}\) (as in Example 3.4), we see that the subspace of points fixed by \(H\) is \(W=\langle e_{1}+e_{2},e_{4}\rangle\), and, since \(v=e_{1}+e_{2}\),
\[Z(G_{1}^{+})=\{[I,\beta e_{4}]:\beta\in\mathbb{F}_{q}\}.\]
Thus, \(|Z(G_{1}^{+})|=q\). On the other hand,
\[Z(G_{2})=\{[I,w]:w\in W\},\]
and so \(|Z(G_{2})|=q^{2}\), which proves the claim when \(m=2\) and \(\varepsilon=1\).
### A \((q+3)\)-class amorphic association scheme in a group of order \(q^{4}\)
Let \(V=\mathbb{F}_{q}^{4}\), where \(q\) is an odd prime. Let \(Q(x)=x_{1}x_{2}+x_{3}x_{4}\), a hyperbolic form on \(V\), and consider the group
\[G:=G_{2}=T_{U}:\mathcal{B}\]
defined in Subsection 3.3, and define
\[H:=\{B_{\alpha}:\alpha\in\mathbb{F}_{q}\}.\]
As in Theorem 3.7, we take \(D_{0}\) to be the nonidentity elements \([C,x]\) in \(G\) where \(Q(x)=0\); \(D_{1}\) to be the elements \([C,x]\) in \(G\) where \(Q(x)\) is a nonzero square; and \(D_{2}\) to be the elements \([C,x]\) in \(G\) where \(Q(x)\) is a nonzero nonsquare. Since each vector \(x\) in \(V\) occurs exactly once as the second component of an element \([C,x]\in G\), we may identify the elements of \(G\) with the corresponding vector in the second component. In other words, we identify \(D_{0}\) with the set \(V_{0}\) of nonzero vectors \(x\) in \(V\) such that \(Q(x)=0\), \(D_{1}\) with the set \(V_{1}\) of vectors \(x\) in \(V\) such that \(Q(x)\) is a nonzero square, and \(D_{2}\) with the set \(V_{2}\) of vectors \(x\) in \(V\) such that \(Q(x)\) is a nonsquare.
**Lemma 3.10**.: _The set \(V_{0}\) can be partitioned into disjoint subsets of size \(q^{2}-1\), where each subset is the set of nonzero vectors in a \(2\)-dimensional subspace of \(V\). Moreover, we can take each subset of the partition of \(V_{0}\) to be \(H\)-invariant._
Proof.: To see that we have an \(H\)-invariant partition of \(V_{0}\), we define \(v_{\infty}:=e_{4}=(0,0,0,1)\) and, for each \(\alpha\in\mathbb{F}_{q}\), we define \(v_{\alpha}:=e_{2}+\alpha e_{4}=(0,1,0,\alpha)\). Since \(Q(v_{\alpha})=0\), both \(v_{\alpha},v_{\alpha}B\in V_{0}\) for all \(B\in H\). Moreover, if we define \(u_{\infty}:=e_{1}=(1,0,0,0)\), \(u_{\alpha}:=\alpha e_{1}-e_{3}=(\alpha,0,-1,0)\) for \(\alpha\in\mathbb{F}_{q}\), and \(U_{\alpha}:=\langle v_{\alpha},u_{\alpha}\rangle\), for each \(\alpha\in\mathbb{F}_{q}\cup\{\infty\}\) and \(\beta\in\mathbb{F}_{q}\), we have
\[v_{\alpha}B_{\beta}=v_{\alpha}+\beta u_{\alpha}\in U_{\alpha}.\]
Since \(u_{\alpha}\in\langle e_{1},e_{3}\rangle\), \(u_{\alpha}B_{\beta}=u_{\alpha}\) for each \(\alpha,\beta\), and thus each \(U_{\alpha}\) is an \(H\)-invariant subspace. It is routine to check that the \(q+1\) subspaces \(U_{\alpha}\), \(\alpha\in\mathbb{F}_{q}\cup\{\infty\}\), are pairwise disjoint (meeting only in \(0\)), and since \(|V_{0}|=(q+1)(q^{2}-1)\), the nonzero vectors of these subspaces form an \(H\)-invariant partition of \(V_{0}\).
**Proposition 3.11**.: _Let \(D\subset V\) be an \(H\)-invariant subset of \(V\), and view \((V,+)\) as the elementary abelian group of order \(q^{4}\). Then, \(G\) is isomorphic to a regular subgroup of \(\operatorname{Aut}(\operatorname{Cay}(V,D))\); that is, if_
\[D^{\prime}:=\{[C,x]\in G:x\in D\},\]
_then \(\operatorname{Cay}(G,D^{\prime})\cong\operatorname{Cay}(V,D)\)._
Proof.: Viewing \(V\) additively, we then can define the graph \(\operatorname{Cay}(V,D)\), where vectors \(v\) and \(w\) are adjacent iff \(v-w\in D\). For any \([C,x]\in G\), we have
\[v^{[C,x]}-w^{[C,x]}=(vC+x)-(wC+x)=(v-w)C,\]
and so \(v-w\in D\) iff \(v^{[C,x]}-w^{[C,x]}\in D\). Since \(G\) is transitive on \(V\) and preserves adjacency in \(\operatorname{Cay}(V,D)\), the result follows.
Recall the definitions of \(D_{1}\) and \(D_{2}\) in \(G\) from above. Let \(\mathbb{F}_{q}\cup\{\infty\}=\{\alpha_{3},\ldots,\alpha_{q+3}\}\), define
\[U_{i}:=U_{\alpha_{i}}\]
as in the proof of Lemma 3.10, and define
\[D_{i}:=\left\{[C,x]\in G:x\in U_{i}-\{0\}\right\}.\]
**Theorem 3.12**.: _Each \(D_{i}\), \(1\leqslant i\leqslant q+3\), is a PDS of Latin square type in \(G\). Consequently, \(\{D_{i}:1\leqslant i\leqslant q+3\}\) corresponds to a \((q+3)\)-class amorphic association scheme, and a union of any number of these PDSs is also a PDS of Latin square type in \(G\)._
Proof.: First, \(D_{1}\) and \(D_{2}\) are PDSs of Latin square type in \(G\) with \(r=q(q-1)/2\) by Theorem 3.7.
Since each \(U_{i}\) is a subspace of size \(q^{2}\), each graph \(\operatorname{Cay}(V,U_{i}-\{0\})\) is a union of disjoint complete subgraphs, i.e., each graph \(\operatorname{Cay}(V,U_{i}-\{0\})\) is a \((q^{4},q^{2}-1,q^{2}-2,0)\)-strongly regular graph of Latin square type. By Proposition 3.11, this means each \(D_{i}\), \(3\leqslant i\leqslant q+3\) is also a PDS of Latin square type. Finally, since \(\{D_{i}:3\leqslant i\leqslant q+3\}\) is a partition of \(D_{0}\) and \(\{D_{0},D_{1},D_{2}\}\) is a partition of \(G\), \(\{D_{i}:1\leqslant i\leqslant q+3\}\) is a \((q+3)\)-class amorphic association scheme. The result follows from Corollary 2.2.
**Corollary 3.13**.: _The group \(G\) contains a Paley-type PDS._
Proof.: Define
\[D:=D_{1}\cup\bigcup_{i=3}^{(q+5)/2}D_{i},\]
i.e., \(D\) is the union of \(D_{1}\) and half of the \(D_{i}\)'s, where \(i\geqslant 3\). By Theorem 3.12, \(D\) is a PDS of Latin square type, and, since \(D\) contains the elements of \(D_{1}\) and exactly half of the elements of \(D_{0}\),
\[|D|=\frac{q(q-1)(q^{2}-1)}{2}+\frac{(q+1)(q^{2}-1)}{2}=\frac{q^{4}-1}{2}.\]
The result follows.
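The smallest instance of this construction, \(q=3\) and \(m=2\) (so \(|G|=81\)), is small enough to verify exhaustively. The Python sketch below is our own illustration: it codes the elements of \(G_{2}\) by their translation parts (by Lemma 3.6 the matrix part is determined by the \(e_{2}\)-coordinate), takes \(D\) to be \(D_{1}\) together with \((q+1)/2=2\) of the subspaces \(U_{\alpha}\) of Lemma 3.10 (any two give a set of the right size by Theorem 3.12), and confirms the Paley parameters \((81,40,19,20)\).

```python
from itertools import product

p = 3
V = list(product(range(p), repeat=4))
Q = lambda x: (x[0] * x[1] + x[2] * x[3]) % p              # hyperbolic form, epsilon = +1

def B(a):  # the matrix B_alpha of Subsection 3.3 (rows are the images of e_1,...,e_4)
    return [[1, 0, 0, 0], [0, 1, -a, 0], [0, 0, 1, 0], [a, 0, 0, 1]]

def apply(x, M):
    return tuple(sum(x[i] * M[i][j] for i in range(4)) % p for j in range(4))

# An element of G_2 is [A_alpha, x] with alpha equal to the e_2-coordinate of x,
# so the whole group can be coded on V itself:
mul = lambda x, y: tuple((a + b) % p for a, b in zip(apply(x, B(y[1])), y))
inv = lambda x: apply(tuple((-a) % p for a in x), B((-x[1]) % p))
e = (0, 0, 0, 0)
assert mul((0, 0, 0, 1), (0, 1, 0, 0)) != mul((0, 1, 0, 0), (0, 0, 0, 1))  # nonabelian

span = lambda u, w: {tuple((s * a + t * b) % p for a, b in zip(u, w))
                     for s in range(p) for t in range(p)}
D1 = {x for x in V if Q(x) == 1}                            # Q(x) a nonzero square
U_inf = span((0, 0, 0, 1), (1, 0, 0, 0))                    # U_infty = <e4, e1>
U_0 = span((0, 1, 0, 0), (0, 0, 1, 0))                      # U_0 = <e2, -e3> = <e2, e3>
D = D1 | (U_inf - {e}) | (U_0 - {e})                        # D_1 together with half of D_0

counts = {g: 0 for g in V if g != e}
for d1, d2 in product(D, repeat=2):
    g = mul(d1, inv(d2))
    if g != e:
        counts[g] += 1
lam = {counts[g] for g in D}
mu = {counts[g] for g in counts if g not in D}
print(len(V), len(D), lam, mu)                               # expect: 81 40 {19} {20}
```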
## 4 Partial difference sets in semidirect products with a large center
Let \(p\) be a prime, and define \(G:=\left\langle x,y:x^{p^{2}}=y^{p^{2}}=1,xy=yx\right\rangle\cong\mathbb{Z}_{ p^{2}}^{2}\). The following sets were shown in [5] to be \((p^{4},p(p^{2}-1),2p^{2}-3p,p^{2}-p)\)-PDSs in \(G\) for \(1\leqslant i\leqslant p-1\):
\[P_{i}=\left(\bigcup_{j=0}^{p-1}\left(\left\langle xy^{j+pi}\right\rangle-\left\langle x ^{p}y^{pj}\right\rangle\right)\right)\cup\left(\left\langle x^{pi}y\right\rangle -\left\langle y^{p}\right\rangle\right).\]
The following subgroups with the identity removed are trivial \((p^{2},p^{2}-1,p^{2}-2,0)\)-PDSs:
\[S_{j}=\left\langle xy^{j}\right\rangle-\{1\}\text{ for }0\leqslant j\leqslant p -1,S_{\infty}=\left\langle y\right\rangle-\{1\}.\]
The \(P_{i}\) and \(S_{j}\) partition the nonidentity elements of \(G\) into Latin-square type PDSs. The next theorem applies Theorem 2.1 to this collection.
**Theorem 4.1**.: _The collection \(\{P_{1},P_{2},...,P_{p-1},S_{0},S_{1},\ldots,S_{p-1},S_{\infty}\}\) is a \(2p\)-class amorphic association scheme on \(G\)._
Combining Corollary 2.2 with Theorem 4.1 implies that
\[D:=\left(\bigcup_{i=1}^{\frac{p-1}{2}}P_{i}\right)\cup\left(\bigcup_{j=0}^{\frac{p-1}{2}}S_{j}\right)\]
is a Paley-type \(\left(p^{4},\frac{p^{4}-1}{2},\frac{p^{4}-5}{4},\frac{p^{4}-1}{4}\right)\)-PDS.
This construction was the first known PDS with Paley-type parameters in a group that was not elementary abelian; other abelian PDSs with these parameters have since appeared (see, for instance, [27]).
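For \(p=3\) this abelian construction can be checked by brute force; the sketch below (ours, purely illustrative) assembles \(P_{1}\), \(S_{0}\), \(S_{1}\) inside \(\mathbb{Z}_{9}\times\mathbb{Z}_{9}\) and recomputes the parameters.

```python
from itertools import product

n = 9                                      # Z_9 x Z_9, written additively
els = [(a, b) for a in range(n) for b in range(n)]
cyc = lambda a, b: {((t * a) % n, (t * b) % n) for t in range(n)}

# P_1 and the S_j exactly as defined above (p = 3, i = 1)
P1 = set()
for j in range(3):
    P1 |= cyc(1, j + 3) - cyc(3, 3 * j)    # <xy^{j+p}> minus <x^p y^{pj}>
P1 |= cyc(3, 1) - cyc(0, 3)                # <x^p y>    minus <y^p>
D = P1 | (cyc(1, 0) - {(0, 0)}) | (cyc(1, 1) - {(0, 0)})   # add S_0 and S_1

counts = {g: 0 for g in els if g != (0, 0)}
for d1, d2 in product(D, repeat=2):
    g = ((d1[0] - d2[0]) % n, (d1[1] - d2[1]) % n)
    if g != (0, 0):
        counts[g] += 1
print(len(D), {counts[g] for g in D}, {counts[g] for g in counts if g not in D})
# expected output: 40 {19} {20}
```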
We now show that a similar construction will produce Paley-type PDSs in certain nonabelian groups, which along with the construction in the previous section (Corollary 3.13) are the first such constructions of Paley-type PDSs in nonabelian groups known to these authors. Consider the group
\[\widehat{G_{2}}:=\left\langle x,y:x^{p^{2}}=y^{p^{2}}=1,yxy^{-1}=x^{p^{2}-p+1} \right\rangle\cong\mathbb{Z}_{p^{2}}\rtimes_{p^{2}-p+1}\mathbb{Z}_{p^{2}}.\]
Define
\[\widehat{P_{i}}:=\left(\bigcup_{j=0}^{p-1}\left(\left\langle xy^{j+pi}\right \rangle-\left\langle x^{p}y^{pj}\right\rangle\right)\right)\cup\left(\left\langle x ^{pi}y\right\rangle-\left\langle y^{p}\right\rangle\right)\]
and \(\widehat{S_{j}}:=\left\langle xy^{j}\right\rangle-\{1\}\) for \(0\leqslant j\leqslant p-1\), \(\widehat{S_{\infty}}:=\left\langle y\right\rangle-\{1\}\), and finally
\[\widehat{D_{2}}:=\left(\bigcup_{i=1}^{\frac{p-1}{2}}\widehat{P_{i}}\right)\cup\left(\bigcup_{k=0}^{\frac{p-1}{2}}\widehat{S_{k}}\right).\]
We note that the formal sets \(P_{i}\) and \(\widehat{P_{i}}\) appear the same, but the element \((xy^{p+1})^{2}=x^{2}y^{2p+2}\in P_{1}\) whereas \((xy^{p+1})^{2}=x^{p^{2}-p+2}y^{2p+2}\in\widehat{P_{1}}\).
In order to demonstrate that \(\widehat{P_{i}}\) is a PDS, we first prove two lemmas that we will use in the proof of the result.
**Lemma 4.2**.: _Let \(1\leqslant i\leqslant(p-1)\). For the group \(\widehat{G_{2}}\) and the subset \(\widehat{P_{i}}\) defined above, we have_
\[\sum_{j=0}^{p-1}\left((p^{2}-2p)\left\langle xy^{ip+j}\right\rangle+p\left\langle x ^{p}y^{pj}\right\rangle\right)+(p^{2}-2p)\left\langle x^{ip}y\right\rangle+p \left\langle y^{p}\right\rangle\]
\[=(p^{3}-p^{2})1_{\widehat{G_{2}}}+(p^{2}-2p)\widehat{P_{i}}+(p^{2}-p)\left\langle x^{p},y^{p}\right\rangle.\]
Proof.: All of the elements of \(\widehat{P_{i}}\) of order \(p^{2}\) will appear \((p^{2}-2p)\) times, each nonidentity element of \(\left\langle x^{p},y^{p}\right\rangle\) will appear \((p^{2}-2p)+p=p^{2}-p\) times, and the identity will appear \((p+1)(p^{2}-p)=(p^{3}-p^{2})+(p^{2}-p)\) times, which matches the right-hand side.
**Lemma 4.3**.: _Write \(K_{j}:=\left\langle xy^{ip+j}\right\rangle-\left\langle x^{p}y^{pj}\right\rangle\) for \(0\leqslant j\leqslant p-1\) and \(K_{\infty}:=\left\langle x^{ip}y\right\rangle-\left\langle y^{p}\right\rangle\), so that \(\widehat{P_{i}}=\bigcup_{j\in\{0,1,\ldots,p-1,\infty\}}K_{j}\). Then, summing over ordered pairs of distinct indices \(j,j^{\prime}\in\{0,1,\ldots,p-1,\infty\}\), we have_

\[\sum_{j\neq j^{\prime}}K_{j}K_{j^{\prime}}=(p^{2}-p)\widehat{G_{2}}-(p^{2}-p)\left\langle x^{p},y^{p}\right\rangle.\]
Proof.: A symmetry argument implies that all of the elements of order \(p^{2}\) appear the same number of times in this sum, and the fact that these are different values of \(j,j^{\prime}\) implies that we will not get any elements of order \(p\). The result follows from a counting argument.
**Theorem 4.4**.: _Let \(p\) be an odd prime. The collection \(\{\widehat{P_{1}},\widehat{P_{2}},\ldots,\widehat{P_{p-1}},\widehat{S_{0}},\widehat{S_{1}},\ldots,\widehat{S_{p-1}},\widehat{S_{\infty}}\}\) is a \(2p\)-class amorphic association scheme on \(\widehat{G_{2}}\) and the set \(\widehat{D_{2}}\) is a Paley-type \(\left(p^{4},\frac{p^{4}-1}{2},\frac{p^{4}-5}{4},\frac{p^{4}-1}{4}\right)\)-PDS in \(\widehat{G_{2}}\)._
Proof.: Since the \(\widehat{S_{i}}\) are all subgroups, they are all (trivial) Latin-square type PDSs, and Lemmas 4.2 and 4.3 imply the following.
\[\widehat{P_{i}}^{2}= \sum_{j=0}^{p-1}\left(\left\langle xy^{ip+j}\right\rangle-\left\langle x^{p}y^{pj}\right\rangle\right)^{2}+\left(\left\langle x^{ip}y\right\rangle-\left\langle y^{p}\right\rangle\right)^{2}+\sum_{j\neq j^{\prime}}K_{j}K_{j^{\prime}}\] \[= (p^{3}-p^{2})1_{\widehat{G_{2}}}+(p^{2}-2p)\widehat{P_{i}}+(p^{2}-p)\left\langle x^{p},y^{p}\right\rangle+(p^{2}-p)\widehat{G_{2}}-(p^{2}-p)\left\langle x^{p},y^{p}\right\rangle\] \[= (p^{3}-p^{2})1_{\widehat{G_{2}}}+(p^{2}-2p)\widehat{P_{i}}+(p^{2}-p)\widehat{G_{2}}\] \[= (p^{3}-p)1_{\widehat{G_{2}}}+(2p^{2}-3p)\widehat{P_{i}}+(p^{2}-p)(\widehat{G_{2}}-\widehat{P_{i}}-1_{\widehat{G_{2}}})\]
Thus, the \(\widehat{P_{i}}\) are all \((p^{4},p^{3}-p,2p^{2}-3p,p^{2}-p)\)-PDSs in \(\widehat{G_{2}}\) as claimed. Since these are all Latin-square type PDSs, Corollary 2.2 implies that any union of these PDSs will be a PDS.
In particular,
\[\widehat{D_{2}}=\left(\bigcup_{i=1}^{\frac{p-1}{2}}\widehat{P_{i}}\right)\cup\left(\bigcup_{k=0}^{\frac{p-1}{2}}\widehat{S_{k}}\right)\]
is a \(\left(p^{4},\frac{p^{4}-1}{2},\frac{p^{4}-5}{4},\frac{p^{4}-1}{4}\right)\)-PDS as required.
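The case \(p=3\), i.e. \(\widehat{G_{2}}=\mathbb{Z}_{9}\rtimes_{7}\mathbb{Z}_{9}\), can again be checked exhaustively. The Python sketch below is our own illustration: it encodes group elements as pairs \((a,b)\leftrightarrow x^{a}y^{b}\) and verifies that \(\widehat{D_{2}}\) is an \((81,40,19,20)\)-PDS.

```python
from itertools import product

n = 9
pow7 = [pow(7, b, n) for b in range(n)]                  # y^b x y^{-b} = x^{7^b}
mul = lambda g, h: ((g[0] + h[0] * pow7[g[1]]) % n, (g[1] + h[1]) % n)
e = (0, 0)

def inv(g):
    a, b = g
    return ((-a * pow(pow7[b], -1, n)) % n, (-b) % n)

assert mul((1, 0), (0, 1)) != mul((0, 1), (1, 0))        # the group is nonabelian

def gen(g):                                              # cyclic subgroup generated by g
    out, h = {e}, g
    while h != e:
        out.add(h)
        h = mul(h, g)
    return out

P1 = set()
for j in range(3):
    P1 |= gen((1, j + 3)) - gen((3, 3 * j))              # <xy^{j+p}> minus <x^p y^{pj}>
P1 |= gen((3, 1)) - gen((0, 3))                          # <x^p y> minus <y^p>
D = P1 | (gen((1, 0)) - {e}) | (gen((1, 1)) - {e})       # add S_0-hat and S_1-hat

counts = {}
for d1, d2 in product(D, repeat=2):
    g = mul(d1, inv(d2))
    if g != e:
        counts[g] = counts.get(g, 0) + 1
others = {g for g in product(range(n), repeat=2) if g != e} - D
print(len(D), {counts.get(g, 0) for g in D}, {counts.get(g, 0) for g in others})
# expected output: 40 {19} {20}
```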
We now turn to a generalization of this construction. Let
\[G_{t}=\left\langle x,y:x^{p^{t}}=y^{p^{t}}=1,xy=yx\right\rangle\cong\mathbb{Z}_ {p^{t}}\times\mathbb{Z}_{p^{t}}.\]
Polhill [27] showed that the sets \(P_{t,i}\) are Latin-square type PDSs with parameters
\[\left(p^{2t},\frac{p^{t}-p}{p-1}(p^{t}-1),p^{t}+\left(\frac{p^{t}-p}{p-1}\right)^{2}-3\frac{p^{t}-p}{p-1},\left(\frac{p^{t}-p}{p-1}\right)^{2}-\frac{p^{t}-p}{p-1}\right)\]
in \(G_{t}\) for \(1\leqslant i\leqslant p-1\), where \(P_{t,i}\) is defined to be
\[\bigcup_{r=1}^{t-1}\left(\bigcup_{j=0}^{p^{r-1}-1}\left(\bigcup_{k=0}^{p-1} \left(\left\langle xy^{ip^{r}+pj+k}\right\rangle-\left\langle x^{p^{t-r}}y^{jp ^{t+1-r}+kp^{t-r}}\right\rangle\right)\right)\cup\left(\left\langle x^{ip^{r}+ jp}y\right\rangle-\left\langle x^{jp^{t-r+1}}y^{p^{t-r}}\right\rangle \right)\right).\]
If we define \(S_{t,j}:=\left\langle xy^{j}\right\rangle-\left\{1_{G_{t}}\right\},0\leqslant j \leqslant p-1\), and \(S_{t,\infty}:=\left\langle y\right\rangle-\left\{1_{G_{t}}\right\}\), then we get the following theorem, which is analogous to Theorem 4.1.
**Theorem 4.5**.: _For \(t\geqslant 2\), the collection \(\left\{P_{t,1},P_{t,2},\ldots,P_{t,p-1},S_{t,0},S_{t,1},\ldots,S_{t,p-1},S_{t,\infty}\right\}\) is a \(2p\)-class amorphic association scheme on \(G_{t}\)._
The combination of Corollary 2.2 and Theorem 4.5 imply that
\[D_{t}:=\left(\bigcup_{i=1}^{\frac{p-1}{2}}P_{t,i}\right)\cup\left(\bigcup_{j =0}^{\frac{p-1}{2}}S_{t,j}\right)\]
is a Paley-type \(\left(p^{2t},\frac{p^{2t}-1}{2},\frac{p^{2t}-5}{4},\frac{p^{2t}-1}{4}\right)\)-PDS in \(G_{t}\).
We now construct Paley-type PDSs in the nonabelian group
\[\widehat{G_{t}}:=\left\langle x,y:x^{p^{t}}=y^{p^{t}}=1,yxy^{-1}=x^{(p-1)p^{t -1}+1}\right\rangle\cong\mathbb{Z}_{p^{t}}\rtimes_{(p-1)p^{t-1}+1}\mathbb{Z}_ {p^{t}}.\]
To do this, we define a collection of disjoint PDSs that partition the nonidentity elements of \(\widehat{G_{t}}\) in an analogous fashion as those defined in \(G_{t}\): first, we define \(\widehat{P_{t,i}}\) to be
\[\bigcup_{r=1}^{t-1}\left(\bigcup_{j=0}^{p^{r-1}-1}\left(\bigcup_{k=0}^{p-1} \left(\left\langle xy^{ip^{r}+pj+k}\right\rangle-\left\langle x^{p^{t-r}}y^{jp ^{t+1-r}+kp^{t-r}}\right\rangle\right)\right)\cup\left(\left\langle x^{ip^{r}+ jp}y\right\rangle-\left\langle x^{jp^{t-r+1}}y^{p^{t-r}}\right\rangle \right)\right);\]
then we define
\[\widehat{S_{t,j}} := \left\langle xy^{j}\right\rangle-\{1_{\widehat{G_{t}}}\},\] \[\widehat{S_{t,\infty}} := \left\langle y\right\rangle-\{1_{\widehat{G_{t}}}\}.\]
The main construction in this section is the following; together with the examples in the previous section, these are the first examples of Paley-type PDSs in nonabelian groups known to the authors.
**Theorem 4.6**.: _For \(t\geqslant 2\), the collection \(\{\widehat{P_{t,1}},\widehat{P_{t,2}},\ldots,\widehat{P_{t,p-1}},\widehat{S_ {t,0}},\widehat{S_{t,1}},\ldots,\widehat{S_{t,p-1}},\widehat{S_{t,\infty}}\}\) is a \(2p\)-class amorphic association scheme on \(\widehat{G_{t}}\). Therefore,_
\[\widehat{D_{t}}:=\left(\bigcup_{i=1}^{\frac{p-1}{2}}\widehat{P_{t,i}}\right) \cup\left(\bigcup_{j=0}^{\frac{p-1}{2}}\widehat{S_{t,j}}\right)\]
_is a Paley-type \(\left(p^{2t},\frac{p^{2t}-1}{2},\frac{p^{2t}-5}{4},\frac{p^{2t}-1}{4}\right)\)-PDS in \(\widehat{G_{t}}\)._
The proof uses the same reasoning as the proof of Theorem 4.4 and is left for the reader.
## 5 Paley-type PDSs and Paley-Hadamard DSs in nonabelian groups
In this section, we will use results from the previous sections to construct additional examples of nonabelian Paley-type PDSs as well as nonabelian Stanton-Sprott (Twin prime power) Paley-Hadamard DSs. Davis [5] used character theory to prove a product construction for abelian groups; we will show that the theorem remains true for nonabelian groups. The theorem will enable us to recursively build nonabelian PDSs with Paley-type parameters.
**Theorem 5.1**.: _Suppose that the groups \(G\) and \(G^{\prime}\) of order \(v\) both possess PDSs of the Paley-type having parameters \(\left(v,\frac{v-1}{2},\frac{v-5}{4},\frac{v-1}{4}\right)\), \(D\) and \(D^{\prime}\) respectively. Then, the group \(\mathcal{G}:=G\times G^{\prime}\) also contains a Paley-type PDS with parameters \(\left(v^{2},\frac{v^{2}-1}{2},\frac{v^{2}-5}{4},\frac{v^{2}-1}{4}\right)\)._
Proof.: If \(D\) and \(D^{\prime}\) are Paley-type PDSs in \(G\) and \(G^{\prime}\), respectively, then \(D^{c}=G-1_{G}-D\) and \(D^{\prime c}=G^{\prime}-1_{G^{\prime}}-D^{\prime}\) are also Paley-type PDSs in \(G\) and \(G^{\prime}\), respectively. The following group ring equations then hold in \(G\) and \(G^{\prime}\) as a consequence of the sets \(D,D^{c},D^{\prime},D^{\prime c}\) being PDSs:
\[DD^{c}=D^{c}D=\frac{v-1}{4}G^{*},\]
\[D^{\prime}D^{\prime c}=D^{\prime c}D^{\prime}=\frac{v-1}{4}G^{\prime*},\]
where \(G^{*}\) and \(G^{\prime*}\) denote \(G-1_{G}\) and \(G^{\prime}-1_{G^{\prime}}\), respectively.
Our Paley-type PDS in \(\mathcal{G}=G\times G^{\prime}\) is given by \(\mathcal{D}=D(1+D^{\prime})+D^{c}(1+D^{\prime c})\), as verified in the following group ring computation:
\[\mathcal{D}^{2} = (D(1+D^{\prime})+D^{c}(1+D^{\prime c}))^{2}\] \[= D^{2}(1+D^{\prime})^{2}+2DD^{c}(1+D^{\prime})(1+D^{\prime c})+(D^{c})^{2}(1+D^{\prime c})^{2}\] \[= \left(\frac{v-5}{4}D+\frac{v-1}{4}D^{c}+\frac{v-1}{2}1_{G}\right)\left(\frac{v+3}{4}D^{\prime}+\frac{v-1}{4}D^{\prime c}+\frac{v+1}{2}1_{G^{\prime}}\right)\] \[+\frac{v-1}{2}G^{*}\left(1+\frac{v+3}{4}{G^{\prime}}^{*}\right)\] \[+\left(\frac{v-1}{4}D+\frac{v-5}{4}D^{c}+\frac{v-1}{2}1_{G}\right)\left(\frac{v-1}{4}D^{\prime}+\frac{v+3}{4}D^{\prime c}+\frac{v+1}{2}1_{G^{\prime}}\right)\] \[= \frac{v^{2}-1}{4}1_{\mathcal{G}}+\frac{v^{2}-5}{4}\mathcal{D}+\frac{v^{2}-1}{4}(\mathcal{G}-\mathcal{D}).\]
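The product construction is easily machine-checked in the smallest case \(v=9\), taking for both factors the \((9,4,1,2)\) Paley-type PDS in \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\); the sketch below is our own illustration, forming \(\mathcal{D}=D(1+D^{\prime})+D^{c}(1+D^{\prime c})\) as a subset of the product group and recomputing its parameters.

```python
from itertools import product

q = 3
G = list(product(range(q), repeat=2))                    # Z_3 x Z_3, written additively
e = (0, 0)
D = {(1, 0), (2, 0), (1, 1), (2, 2)}                     # a (9, 4, 1, 2) Paley-type PDS
Dc = set(G) - D - {e}                                    # complementary PDS D^c = G - 1 - D

# The set D(1 + D') + D^c(1 + D'^c) inside G x G', here with G' = G and D' = D:
calD = ({(d, e) for d in D | Dc}                         # the terms D*1 and D^c*1
        | {(d, dp) for d in D for dp in D}               # the term  D*D'
        | {(d, dp) for d in Dc for dp in Dc})            # the term  D^c*D'^c

def add(u, v):
    return tuple(tuple((a + b) % q for a, b in zip(x, y)) for x, y in zip(u, v))

def neg(u):
    return tuple(tuple((-a) % q for a in x) for x in u)

E = (e, e)
counts = {}
for d1, d2 in product(calD, repeat=2):
    g = add(d1, neg(d2))
    if g != E:
        counts[g] = counts.get(g, 0) + 1
others = {(g, h) for g in G for h in G if (g, h) != E} - calD
print(len(calD), {counts.get(g, 0) for g in calD}, {counts.get(g, 0) for g in others})
# expected output: 40 {19} {20}
```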
To illustrate the scope of Theorem 5.1, consider the groups \(G=\mathbb{Z}_{25}\rtimes_{21}\mathbb{Z}_{25}\) and \(G^{\prime}=G_{2}=\langle T_{U},\mathcal{B}\rangle\) (with \(q=5\) and \(m=2\)) from the discussion before Theorem 3.7. Both of these groups have \((625,312,155,156)\)-PDSs and hence \(\mathcal{G}=G\times G^{\prime}\) will have a \(\left(5^{8},\frac{5^{8}-1}{2},\frac{5^{8}-5}{4},\frac{5^{8}-1}{4}\right)\)-PDS. We can continue to apply the theorem by first constructing a \(\left(5^{8},\frac{5^{8}-1}{2},\frac{5^{8}-5}{4},\frac{5^{8}-1}{4}\right)\)-PDS in \(\mathcal{G}^{\prime}=\mathbb{Z}_{25}^{2}\times\mathbb{Z}_{5}^{4}\) (both \(\mathbb{Z}_{25}^{2}\) and \(\mathbb{Z}_{5}^{4}\) have \((625,312,155,156)\)-PDSs, and Theorem 5.1 implies that their product will have a PDS), and we can then apply Theorem 5.1 to get a \(\left(5^{16},\frac{5^{16}-1}{2},\frac{5^{16}-5}{4},\frac{5^{16}-1}{4}\right)\)-PDS in \(\mathcal{G}\times\mathcal{G}^{\prime}\). Repeated uses of the Theorem give constructions of Paley-type PDSs in groups of the form \(G^{2^{t}},{G^{\prime}}^{2^{t}},(\mathcal{G}\times\mathcal{G}^{\prime})^{2^{t}}\), and \(\mathcal{G}^{2^{t}}\). As long as the sizes of the groups are the same, we can repeatedly apply Theorem 5.1 to get Paley-type PDSs in larger groups. One general example of a family with a variety of exponents for the constituent groups is the following.
**Corollary 5.2**.: _The group \(\mathbb{Z}_{p}^{4}\times(\mathbb{Z}_{p^{2}}\rtimes_{p^{2}-p+1}\mathbb{Z}_{p^{ 2}})\times(\mathbb{Z}_{p^{4}}\rtimes_{p^{4}-p^{3}+1}\mathbb{Z}_{p^{4}})\times \cdots\times(\mathbb{Z}_{p^{2^{t}}}\rtimes_{p^{2^{t}}-p^{2^{t}-1}+1}\)\(\mathbb{Z}_{p^{2^{t}}})\) has a Paley-type PDS for all \(t\geqslant 2\)._
Paley-type PDSs can in turn be used to generate Paley-Hadamard DSs using the Stanton-Sprott construction [29]. As with the recursive construction for Paley-type PDSs, we show that the input groups need not be abelian. Since we now have constructions of nonabelian Paley-type PDSs, we will be able to construct new Paley-Hadamard DSs that are nonabelian. To our knowledge, these are the first nonabelian DSs with these parameters.
**Theorem 5.3**.: _Suppose that the group \(G\) contains a Paley-type \(\left(v,\frac{v-1}{2},\frac{v-5}{4},\frac{v-1}{4}\right)\)-PDS and the group \(G^{\prime}\) contains a skew Hadamard \(\left(v\pm 2,\frac{(v\pm 2)-1}{2},\frac{(v\pm 2)-3}{4}\right)\)-DS. Then, the product group \(G\times G^{\prime}\) contains a Paley-Hadamard DS in the Stanton-Sprott (Twin prime power) family._
Proof.: We will prove the case where \(|G^{\prime}|=v+2\), with the \(v-2\) case being extremely similar. If \(D\) is a Paley-type \(\left(v,\frac{v-1}{2},\frac{v-5}{4},\frac{v-1}{4}\right)\)-PDS in \(G\), then \(D^{c}=G-1_{G}-D\) is also a Paley-type PDS. We use the facts from the proof of Theorem 5.1 together with the similar equations for the skew-Hadamard DS \(D^{\prime}\) in \(G^{\prime}\) to get the following.
The set \(\mathcal{D}:=G+DD^{\prime}+D^{c}D^{\prime(-1)}\subset G\times G^{\prime}\) is a DS as verified below:
\[\mathcal{DD}^{(-1)} = \left(G+DD^{\prime}+D^{c}D^{\prime(-1)}\right)\left(G+DD^{\prime}+D^{c}D^{\prime(-1)}\right)^{(-1)}\] \[= \left(G+DD^{\prime}+D^{c}D^{\prime(-1)}\right)\left(G+DD^{\prime(-1)}+D^{c}D^{\prime}\right)\] \[= G^{2}+(GD)D^{\prime(-1)}+(GD^{c})D^{\prime}+(GD)D^{\prime}+D^{2}(D^{\prime}D^{\prime(-1)})+(DD^{c})D^{\prime 2}\] \[\quad+(GD^{c})D^{\prime(-1)}+(DD^{c})(D^{\prime(-1)})^{2}+D^{c2}(D^{\prime(-1)}D^{\prime})\] \[= vG\,1_{G^{\prime}}+(v-1)G(D^{\prime}+D^{\prime(-1)})+\left(\frac{v-1}{4}G^{*}\right)\left(\frac{v+1}{2}G^{\prime*}\right)\] \[\quad+\left(\left(\frac{v-5}{4}+\frac{v-1}{4}\right)G^{*}+(v-1)1_{G}\right)\left(\frac{v+1}{2}1_{G^{\prime}}+\frac{v-1}{4}G^{\prime*}\right).\]
Combining terms leads to the equation
\[\mathcal{DD}^{(-1)}=\frac{v^{2}+2v-3}{4}\mathcal{G}^{*}+\frac{v^{2}+2v-1}{2}1_ {\mathcal{G}},\]
thus proving the result.
Recall the group \(G_{2}\) from Section 3, letting \(q=3\) and \(m=2\). As examples of Theorem 5.3, the nonabelian groups \(G_{2}\times\mathbb{Z}_{83}\) and \((\mathbb{Z}_{9}\rtimes_{7}\mathbb{Z}_{9})\times\mathbb{Z}_{83}\) each have a \((6723,3361,1680)\)-difference set; the nonabelian groups \((G_{2})^{2}\times\mathbb{Z}_{6563}\), \((\mathbb{Z}_{9}\rtimes_{7}\mathbb{Z}_{9})\times G_{2}\times\mathbb{Z}_{6563}\), \((\mathbb{Z}_{9}\rtimes_{7}\mathbb{Z}_{9})^{2}\times\mathbb{Z}_{6563}\), and \((\mathbb{Z}_{81}\rtimes_{55}\mathbb{Z}_{81})\times\mathbb{Z}_{6563}\) have \((43059843,21529921,10764960)\)-difference sets; and the nonabelian group \((\mathbb{Z}_{27}\rtimes_{19}\mathbb{Z}_{27})\times\mathbb{Z}_{727}\) has a \((529983,264991,132495)\)-difference set. A more general corollary is the following (although there will be many nonabelian groups containing a Paley-Hadamard difference set that are not contained in this result).
**Corollary 5.4**.: _Let \(r\geqslant 2\). If \(q=p^{2^{r}}\pm 2\) is prime, then the nonabelian group_
\[(\mathbb{Z}_{p^{2^{r-1}}}\rtimes_{(p-1)p^{2^{r-1}-1}+1}\mathbb{Z}_{p^{2^{r-1}}})\times\mathbb{Z}_{q}\]
_has a \(\left(qp^{2^{r}},(qp^{2^{r}}-1)/2,(qp^{2^{r}}-3)/4\right)\)-difference set._
## 6 Many product theorems allow nonabelian groups
In the previous section, we used group rings to prove that two results previously known in abelian groups also hold in nonabelian groups. In this section, we will show that in some cases we can avoid the quadratic group ring calculations entirely because the relations needed to simplify the calculations do not depend on whether the group is abelian or not.
**Lemma 6.1**.: _Suppose the group \(G\) has a partition of the nonidentity elements into PDSs \(P_{1},P_{2},...,P_{n}\) all of the Latin square type or all of the negative Latin square type. Then, the quadratic group ring relations relating the \(P_{i}\) and \(P_{j}\) are strictly determined by the parameters and do not depend on whether the group is abelian or not._
Proof.: Let \(P_{i}\) be a \((v,k_{i},\lambda_{i},\mu_{i})\)-PDS. Then:
\[{P_{i}}^{2}=(k_{i}-\mu_{i})1_{G}+\lambda_{i}P_{i}+\mu_{i}(G-P_{i}).\]
Now suppose that \(P_{i}\) and \(P_{j}\) are part of the partition of \(G\) into Latin or negative Latin square type PDSs. Let \(P_{i}\) be a \((v,k_{i},\lambda_{i},\mu_{i})\)-PDS and \(P_{j}\) be a \((v,k_{j},\lambda_{j},\mu_{j})\)-PDS. The key to getting a relation for \(P_{i}P_{j}\) is the fact that the union of disjoint PDSs of Latin (alternatively negative Latin) square type will also be a PDS of the same type by Corollary 2.2. The same corollary ensures \(P_{i}P_{j}=P_{j}P_{i}\).
Therefore, we can write two equations for \((P_{i}+P_{j})^{2}\), the first of which is expanding and using the individual PDS parameters:
\[(P_{i}+P_{j})^{2} =P_{i}P_{j}+P_{j}P_{i}+{P_{i}}^{2}+{P_{j}}^{2}\] \[=2P_{i}P_{j}+(k_{i}-\mu_{i})1_{G}+\lambda_{i}P_{i}+\mu_{i}(G-P_{i})+(k_{j}-\mu_{j})1_{G}+\lambda_{j}P_{j}+\mu_{j}(G-P_{j}).\]
Now we will use the fact that \(P_{i}\cup P_{j}\) is a \((v,k_{i}+k_{j},\lambda,\mu)\)-PDS.
\[(P_{i}+P_{j})^{2}=(k_{i}+k_{j}-\mu)1_{G}+\lambda(P_{i}+P_{j})+\mu(G-P_{i}-P_{j }).\]
Setting the equations equal and solving yields:
\[2P_{i}P_{j}=(\lambda-\lambda_{i}-\mu_{j})P_{i}+(\lambda-\mu_{i}-\lambda_{j})P_{j}+(\mu-\mu_{i}-\mu_{j})(G-1-P_{i}-P_{j}).\]
Hence, the relations for both \({P_{i}}^{2}\) and \(P_{i}P_{j}\) are determined by the parameters. Suppose that \(G\) has a partition of the nonidentity elements into PDSs \(P_{1},P_{2},...,P_{n}\) and \(G^{\prime}\) has a partition of the nonidentity elements into PDSs \(P_{1}^{\prime},P_{2}^{\prime},...,P_{n}^{\prime}\) where \(P_{i}\) and \(P_{i}^{\prime}\) have the same parameters. Then, the relations for \({P_{i}}^{2}\) and \({P_{i}^{\prime}}^{2}\) are the same for \(P_{i}\) relative to \(G\) as for \(P_{i}^{\prime}\) relative to \(G^{\prime}\) and, furthermore, the relations for \(P_{i}P_{j}\) and \(P_{i}^{\prime}P_{j}^{\prime}\) are the same for \(P_{i}\) and \(P_{j}\) relative to \(G\) as for \(P_{i}^{\prime}\) and \(P_{j}^{\prime}\) relative to \(G^{\prime}\). The result follows.
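The relation just derived can be confirmed on a small example; the following sketch (ours, purely illustrative) computes the group ring product \(P_{i}P_{j}\) for two of the order-\(3\) subgroup PDSs in \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\) and compares it with the parameter formula.

```python
from collections import Counter
from itertools import product

q = 3
G = list(product(range(q), repeat=2))
e = (0, 0)
add = lambda g, h: ((g[0] + h[0]) % q, (g[1] + h[1]) % q)

Pi = {(1, 0), (2, 0)}                      # <x>  minus identity: a (9, 2, 1, 0)-PDS
Pj = {(1, 1), (2, 2)}                      # <xy> minus identity: a (9, 2, 1, 0)-PDS
li, mi = 1, 0                              # lambda_i, mu_i
lj, mj = 1, 0                              # lambda_j, mu_j
lam, mu = 1, 2                             # parameters of the union P_i u P_j: (9, 4, 1, 2)

# group ring product P_i P_j, recorded as a coefficient dictionary
lhs = Counter(add(a, b) for a in Pi for b in Pj)
lhs = {g: 2 * c for g, c in lhs.items()}   # left-hand side 2 P_i P_j

rest = set(G) - {e} - Pi - Pj
rhs = {}
for S, c in [(Pi, lam - li - mj), (Pj, lam - mi - lj), (rest, mu - mi - mj)]:
    for g in S:
        if c:
            rhs[g] = rhs.get(g, 0) + c
print(lhs == rhs)                          # expected: True
```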
**Theorem 6.2**.: _Suppose that there is a group \(G\) having a partition \(\mathcal{P}=\{1_{G},P_{1},P_{2},...,P_{n}\}\) where all the \(P_{i}\) are Latin square type PDSs or negative Latin square type PDSs and \(1_{G}\) is the identity in \(G\). Let \(G^{\prime}\) be any other group, and suppose \(D\) is a PDS in \(G\times G^{\prime}\) which is constructed as a union of sets of the form \(A_{i}x_{i}^{\prime}\), \(A_{i}\in\mathcal{P},x_{i}^{\prime}\in G^{\prime}\). Suppose there is another group \(\widehat{G}\) that has a partition of its nonidentity elements into PDSs \(\{\widehat{P_{1}},\widehat{P_{2}},...,\widehat{P_{n}}\}\) where \(P_{i}\) and \(\widehat{P_{i}}\) have the same parameters for all \(i\). If we define the set \(\widehat{D}\) by replacing each \(P_{i}\) with \(\widehat{P_{i}}\) and \(1_{G}\) with \(1_{\widehat{G}}\), then \(\widehat{D}\) will be a PDS in \(\widehat{G}\times G^{\prime}\) with the same parameters as \(D\)._
Proof.: Let \(D\) be a \((v,k,\lambda,\mu)\)-PDS in \(G\times G^{\prime}\) so that:
\[DD^{(-1)}=(k-\mu)1_{G\times G^{\prime}}+\lambda D+\mu(G\times G^{\prime}-D).\]
Consider \(\widehat{D}\widehat{D}^{(-1)}\). Each term in the expansion will have one of the following forms:
1. \((1_{\widehat{G}}x^{\prime})(1_{\widehat{G}}y^{\prime})=1_{\widehat{G}}x^{\prime}y^ {\prime}\) where \(x^{\prime},y^{\prime}\in G^{\prime}\). In our calculation of \(DD^{(-1)}\) we have the corresponding term \((1_{G}x^{\prime})(1_{G}y^{\prime})=1_{G}x^{\prime}y^{\prime}\).
2. \((1_{\widehat{G}}x^{\prime})(\widehat{P}_{i}y^{\prime})=\widehat{P}_{i}x^{ \prime}y^{\prime}\) where \(x^{\prime},y^{\prime}\in G^{\prime}\). In our calculation of \(DD^{(-1)}\) we have the corresponding term \((1_{G}x^{\prime})(P_{i}y^{\prime})=P_{i}x^{\prime}y^{\prime}\).
3. \((\widehat{P}_{i}x^{\prime})(\widehat{P}_{i}y^{\prime})=\widehat{P}_{i}^{\,2}x^{\prime}y^{\prime}=((k_{i}-\mu_{i})1_{\widehat{G}}+\lambda_{i}\widehat{P}_{i}+\mu_{i}(\widehat{G}-\widehat{P}_{i}))x^{\prime}y^{\prime}\) where \(x^{\prime},y^{\prime}\in G^{\prime}\). In our calculation of \(DD^{(-1)}\) we have the corresponding term \((P_{i}x^{\prime})(P_{i}y^{\prime})=P_{i}^{\,2}x^{\prime}y^{\prime}=((k_{i}-\mu_{i})1_{G}+\lambda_{i}P_{i}+\mu_{i}(G-P_{i}))x^{\prime}y^{\prime}\), where both \(P_{i}\) and \(\widehat{P}_{i}\) are \((v,k_{i},\lambda_{i},\mu_{i})\)-PDSs.
4. \((\widehat{P}_{i}x^{\prime})(\widehat{P}_{j}y^{\prime})=\widehat{P}_{i}\widehat{P}_{j}(x^{\prime}y^{\prime})\) where \(x^{\prime},y^{\prime}\in G^{\prime}\). In our calculation of \(DD^{(-1)}\) we have the corresponding term \((P_{i}x^{\prime})(P_{j}y^{\prime})=(P_{i}P_{j})(x^{\prime}y^{\prime})\). By the preceding lemma, we know that \(P_{i}P_{j}=aP_{i}+bP_{j}+c(G-1_{G}-P_{i}-P_{j})\) means that \(\widehat{P}_{i}\widehat{P}_{j}=a\widehat{P}_{i}+b\widehat{P}_{j}+c(\widehat{G}-1_{\widehat{G}}-\widehat{P}_{i}-\widehat{P}_{j})\) and vice versa, since the quadratic group ring equations relating the \(P_{i}\) are determined by the parameters.
Therefore, when we expand \(\widehat{D}\widehat{D}^{(-1)}\) we will get the exact same count of terms for \(\widehat{P}_{i}x^{\prime}\) and \(1_{\widehat{G}}x^{\prime}\) for any \(x^{\prime}\in G^{\prime}\) as we would respectively for \(P_{i}x^{\prime}\) and \(1_{G}x^{\prime}\) when calculating \(DD^{(-1)}\). It follows that:
\[\widehat{D}\widehat{D}^{(-1)}=(k-\mu)1_{\widehat{G}\times G^{\prime}}+\lambda \widehat{D}+\mu(\widehat{G}\times G^{\prime}-\widehat{D}).\]
As it so happens, there are rather many product constructions that fit the hypotheses for the theorem including all the product theorems in [6, 21, 22, 24, 26]. We will use a particular construction in 3-groups that incorporates the examples from Sections 3 and 4 to give the reader an idea of what can be done with these generalizations.
### 3-groups
In the group \(G=\mathbb{Z}_{3}\times\mathbb{Z}_{3}=\langle x,y\rangle\), we can partition the nonidentity elements into four trivially intersecting subgroups of order \(\sqrt{|G|}=3\) with the identity removed: \(H_{1}=\{x,x^{2}\},H_{2}=\{xy,x^{2}y^{2}\},H_{3}=\{xy^{2},x^{2}y\}\), and \(H_{4}=\{y,y^{2}\}\). We can then produce a partition of Latin square type PDSs as
\[L_{0}:=H_{3}\cup H_{4},\;L_{1}:=H_{1},\;L_{2}:=H_{2}\]
or a negative Latin square type partition as
\[C_{0}:=\varnothing,\;C_{1}:=H_{1}\cup H_{2},\;C_{2}:=H_{3}\cup H_{4}.\]
Now we consider groups of order \(81\). From Section 3 we can obtain a partition of the nonidentity elements in the groups \({G_{1}}^{+}\) and \(G_{2}\) into three PDSs \(L_{0},L_{1}\), and \(L_{2}\) of cardinalities \(32,24\), and \(24\), respectively, and similarly the group \({G_{1}}^{-}\) into three PDSs \(C_{0},C_{1}\), and \(C_{2}\) of cardinalities \(20,30\), and \(30\). From Section 4 we can obtain a partition of the nonidentity elements of either \(\mathbb{Z}_{9}\times\mathbb{Z}_{9}\) or \(\mathbb{Z}_{9}\rtimes_{7}\mathbb{Z}_{9}\) into three PDSs \(L_{0},L_{1}\), and \(L_{2}\) of cardinalities \(32,24\), and \(24\).
Using these partitions and applying Theorem 6.2 to [21, Theorems 2.1-2.3] in the case of \(p=3\) updates [21, Corollary 5.1] and gives us infinite families of Latin and negative Latin square type PDSs that now include nonabelian groups.
**Corollary 6.3**.: _Suppose \(G\) with \(|G|=3^{2m}\) has a partition of its nonidentity elements into three Latin square type PDSs \(L_{0},L_{1},\) and \(L_{2}\) of cardinality \((3^{m-1}+1)(3^{m}-1),(3^{m-1})(3^{m}-1),\) and \((3^{m-1})(3^{m}-1)\) respectively. Suppose also that \(G^{\prime}\) with \(|G^{\prime}|=3^{2n}\) has a partition of its nonidentity elements into three negative Latin square type PDSs \(C_{0},C_{1},\) and \(C_{2}\) of cardinality \((3^{n-1}-1)(3^{n}+1),(3^{n-1})(3^{n}+1),\) and \((3^{n-1})(3^{n}+1)\) respectively. Then, the following are negative Latin square type PDSs in \(G\times G^{\prime}\) of cardinalities \((3^{m+n-1}-1)(3^{m+n}+1),(3^{m+n-1})(3^{m+n}+1),\) and \((3^{m+n-1})(3^{m+n}+1)\) respectively:_
\[\widehat{C_{0}}=([(L_{0}\cup\{1_{G}\})\times(C_{0}\cup\{1_{G^{\prime}}\})] \cup(L_{1}\times C_{1})\cup(L_{2}\times C_{2}))-\{1_{G}\times 1_{G^{ \prime}}\},\]
\[\widehat{C_{1}}=[(L_{0}\cup\{1_{G}\})\times C_{1}]\cup[L_{1}\times C_{2}]\cup [L_{2}\times(C_{0}\cup\{1_{G^{\prime}}\})],\]
\[\widehat{C_{2}}=[(L_{0}\cup\{1_{G}\})\times C_{2}]\cup[L_{1}\times(C_{0}\cup\{ 1_{G^{\prime}}\})]\cup[L_{2}\times C_{1}].\]
_Similarly, we have Latin square type PDSs in \(G\times G\) of cardinalities \((3^{2m-1}+1)(3^{2m}-1),(3^{2m-1})(3^{2m}-1),\) and \((3^{2m-1})(3^{2m}-1)\) respectively:_
\[\widehat{L_{0}}=([(L_{0}\cup\{1_{G}\})\times(L_{0}\cup\{1_{G}\})]\cup[L_{1} \times L_{1}]\cup[L_{2}\times L_{2}])-\{1_{G}\times 1_{G}\},\]
\[\widehat{L_{1}}=[(L_{0}\cup\{1_{G}\})\times L_{1}]\cup[L_{1}\times L_{2}]\cup [L_{2}\times(L_{0}\cup\{1_{G}\})],\]
\[\widehat{L_{2}}=[(L_{0}\cup\{1_{G}\})\times L_{2}]\cup[L_{1}\times(L_{0}\cup\{ 1_{G}\})]\cup[L_{2}\times L_{1}].\]
_Finally, we could instead construct Latin square type PDSs in \(G^{\prime}\times G^{\prime}\) of cardinalities \((3^{2n-1}+1)(3^{2n}-1),(3^{2n-1})(3^{2n}-1),\) and \((3^{2n-1})(3^{2n}-1)\) respectively:_
\[\widehat{L_{0}}=([(C_{0}\cup\{1_{G^{\prime}}\})\times(C_{0}\cup\{1_{G^{\prime }}\})]\cup[C_{1}\times C_{1}]\cup[C_{2}\times C_{2}])-\{1_{G^{\prime}}\times 1_{G^{ \prime}}\},\]
\[\widehat{L_{1}}=[(C_{0}\cup\{1_{G^{\prime}}\})\times C_{1}]\cup[C_{1}\times C_ {2}]\cup[C_{2}\times(C_{0}\cup\{1_{G^{\prime}}\})],\]
\[\widehat{L_{2}}=[(C_{0}\cup\{1_{G^{\prime}}\})\times C_{2}]\cup[C_{1}\times( C_{0}\cup\{1_{G^{\prime}}\})]\cup[C_{2}\times C_{1}].\]
For example, consider the case of \(v=729\). In this case, we can have a partition of the nonidentity elements of the groups \({\mathbb{Z}_{3}}^{2}\times{\mathbb{Z}_{3}}^{4},\) \({\mathbb{Z}_{3}}^{2}\times G_{1}^{+},\) \({\mathbb{Z}_{3}}^{2}\times G_{1}^{-},\) \({\mathbb{Z}_{3}}^{2}\times G_{2},\) \({\mathbb{Z}_{3}}^{2}\times{\mathbb{Z}_{9}}\times{\mathbb{Z}_{9}}\) or \({\mathbb{Z}_{3}}^{2}\times({\mathbb{Z}_{9}}\rtimes_{7}{\mathbb{Z}_{9}})\) into three PDSs of cardinalities \(260,234,\) and \(234\) respectively or \(224,252,\) and \(252\) respectively. Moreover, by [24, p. 1645], we also have a partition of \({\mathbb{Z}_{27}}\times{\mathbb{Z}_{27}}\) into three PDSs of cardinalities \(260,234,\) and \(234\), respectively. In fact, direct calculation in GAP [10] shows that \({\mathbb{Z}_{27}}\rtimes_{19}{\mathbb{Z}_{27}}\) also has such a partition into three PDSs of cardinalities \(260,\) \(234,\) and \(234,\) respectively. (The PDSs in \({\mathbb{Z}_{27}}\rtimes_{19}{\mathbb{Z}_{27}}\) are obtained from those in \({\mathbb{Z}_{27}}\times{\mathbb{Z}_{27}}\) analogously as when moving from \(G_{t}\) to \(\widehat{G_{t}}\) in Section 4.)
## 7 Possible next steps
While we have constructed PDSs in several nonabelian groups, we believe there is much more to be uncovered. We list some open questions.
1. All the constructions in this paper are in \(p\)-groups. There have been some constructions of Latin and negative Latin square type PDSs in nonabelian non-\(p\)-groups, and in particular for \(|G|=100\) (see [11] and [28]), but aside from these and a few other small examples little is known. It seems likely that there will be some nonabelian groups with PDSs having the same parameters as those that exist in certain abelian groups, and perhaps (such as with \(|G|=100\)) there might be some genuinely nonabelian parameters.
2. We saw four distinct techniques in this paper that used abelian PDSs to obtain nonabelian PDSs: using quadratic forms and analyzing affine polar graphs, exploiting groups with a large center, calculating group ring equations in place of characters, and identifying that certain product constructions depend only on parameters. Abelian PDSs have been extensively studied, and there are many other techniques to explore from this previous work. One could consider additional ways to carry over the well-developed techniques from abelian groups to the less familiar nonabelian setting. Especially in light of [19], it seems likely that at least some techniques from character theory would fit this description.
3. Theorem 6.2 from Section 6 could be applied to other results from abelian groups. In particular, there are likely to be many nonabelian PDSs in 2-groups. (For example, consider the results of [8] combined with product theorems such as Theorem 6.2.)
4. In Section 6, the objective was to see that the technique of certain product constructions can carry over to nonabelian input groups with the appropriate partition. One starts to see that many nonabelian groups will have PDSs with the same parameters as in the abelian case. One could begin to catalog all the groups that support \((v,k,\lambda,\mu)\)-PDSs for relatively small \(v\).
5. Since PDSs produce strongly regular Cayley graphs, one could also begin to catalog which groups have PDSs that correspond to the various nonisomorphic \((v,k,\lambda,\mu)\)-strongly regular graphs for small \(v\).
|
2303.09796 | Nonlinearity parameter imaging in the frequency domain | Nonlinearity parameter tomography leads to the problem of identifying a
coefficient in a nonlinear wave equation (such as the Westervelt equation)
modeling ultrasound propagation. In this paper we transfer this into frequency
domain, where the Westervelt equation gets replaced by a coupled system of
Helmholtz equations with quadratic nonlinearities. For the case of the
to-be-determined nonlinearity coefficient being a characteristic function of an
unknown, not necessarily connected domain $D$, we devise and test a
reconstruction algorithm based on weighted point source approximations combined
with Newton's method. In a more abstract setting, convergence of a regularised
Newton type method for this inverse problem is proven by verifying a range
invariance condition of the forward operator and establishing injectivity of
its linearisation. | Barbara Kaltenbacher, William Rundell | 2023-03-17T06:34:09Z | http://arxiv.org/abs/2303.09796v2 | # Nonlinearity parameter imaging in the frequency domain
###### Abstract
Nonlinearity parameter tomography leads to the problem of identifying a coefficient in a nonlinear wave equation (such as the Westervelt equation) modeling ultrasound propagation. In this paper we transfer this into frequency domain, where the Westervelt equation gets replaced by a coupled system of Helmholtz equations with quadratic nonlinearities. For the case of the to-be-determined nonlinearity coefficient being a characteristic function of an unknown, not necessarily connected domain \(D\), we devise and test a reconstruction algorithm based on weighted point source approximations combined with Newton's method. In a more abstract setting, convergence of a regularised Newton type method for this inverse problem is proven by verifying a range invariance condition of the forward operator and establishing injectivity of its linearisation.
**key words:** nonlinearity parameter tomography, multiharmonic expansion, Westervelt equation, Helmholtz equation, extended sources, point sources, Newton's method, range invariance condition
## 1 Introduction
Nonlinearity parameter tomography [7, 9, 10, 20, 35, 40, 41, 42], is a technique for enhancing ultrasound imaging and amounts to identifying the spatially varying coefficient \(\eta=\eta(x)\) in the Westervelt equation
\[p_{tt}-c^{2}\triangle p-b\triangle p_{t}=\eta(p^{2})_{tt}+h\text{ in }(0,T) \times\Omega\,, \tag{1}\]
where \(p\) is the acoustic pressure, \(c\) the speed of sound, \(b\) the diffusivity of sound, and \(h\) the excitation, from observations of the pressure
\[y(x,t)=p(x,t),\quad(x,t)\in\Sigma\times(0,T). \tag{2}\]
on some manifold \(\Sigma\) immersed in the acoustic domain \(\Omega\) or attached to its boundary \(\Sigma\subseteq\overline{\Omega}\); see [2, 25, 26, 27] and the references therein.
While uniqueness from the Dirichlet-to-Neumann operator has been established in [2], our aim here is to reconstruct \(\eta\) from the single boundary measurement (2) like in [25, 26, 27].
Here we will consider this problem in the frequency domain, inspired by the concept of harmonic imaging [4, 39, 40]. Due to the quadratic nonlinearity appearing in the PDE, this is not directly possible by the usual approach of taking the Fourier transform in time. Rather, the idea is to use a multiharmonic ansatz [23] as follows.
Assuming periodic excitations of the specific form \(h(x,t)=\Re(\hat{h}(x)e^{\imath\omega t})\) for some fixed frequency \(\omega\) and \(\hat{h}\in L^{2}(\Omega;\mathbb{C})\) and inserting a multiharmonic expansion for a time periodic solution of (1) (that due to periodicity of \(h\) can be proven to exist and be unique) \(p(x,t)=\Re\left(\sum_{k=1}^{\infty}\hat{p}_{k}(x)e^{\imath k\omega t}\right)\) into (1), yields the infinite system of coupled linear Helmholtz type PDEs
\[\begin{split} m&=1:\qquad\qquad\qquad-\omega^{2}\hat{p}_{1}-(c^{2}+\imath\omega b)\triangle\hat{p}_{1}=\hat{h}\ \underbrace{-\frac{\eta}{2}\omega^{2}\sum_{k=3:2}^{\infty}\overline{\hat{p}_{\frac{k-1}{2}}}\hat{p}_{\frac{k+1}{2}}}\\ m&\in\{2,\ldots,M\}:\quad-\omega^{2}m^{2}\hat{p}_{m}-(c^{2}+\imath\omega mb)\triangle\hat{p}_{m}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad=-\frac{\eta}{4}\omega^{2}m^{2}\Bigl{(}\sum_{\ell=1}^{m-1}\hat{p}_{\ell}\hat{p}_{m-\ell}\ \underbrace{+2\sum_{k=m+2:2}^{\infty}\overline{\hat{p}_{\frac{k-m}{2}}}\hat{p}_{\frac{k+m}{2}}}\Bigr{)}.\end{split} \tag{3}\]
The equivalence of (3) to (1) holds with \(M=\infty\), as shown in [23]. The fact that in place of a single Helmholtz equation we have a system (in theory even an infinite one) reveals that nonlinearity actually helps the identifiability. This can be explained by the additional information available due to the appearance of several higher harmonics (similarly to several components arising in the asymptotic expansion in [29]). In practice, the underbraced terms are often skipped and the expansion is only considered up to \(M=2\) or \(M=3\). This is due to the fact that the strength of the signal in these higher harmonics decreases extremely quickly. In fact in our reconstructions, only two of them will be of effective use as the third harmonic only provides marginal improvement over the second one.
In our reconstructions in Section 2, we will focus on the case of a piecewise constant coefficient \(\eta=\eta_{0}\chi_{D}\) with a known constant \(\eta_{0}\) and an unknown domain \(D\), so that (3) (upon skipping the underbraced terms) becomes
\[\begin{split} m&=1:\qquad\qquad\qquad\triangle\hat{p}_{1}+\kappa^{2}\hat{p}_{1}=\hat{h}\\ m&\in\{2,\ldots,M\}:\ \ \triangle\hat{p}_{m}+m^{2}\kappa^{2}\hat{p}_{m}=\frac{\eta_{0}}{4}\,\chi_{D}\,m^{2}\kappa^{2}\Bigl{(}\sum_{\ell=1}^{m-1}\hat{p}_{\ell}\hat{p}_{m-\ell}\Bigr{)}\end{split} \tag{4}\]
where \(\kappa=\frac{\omega}{\sqrt{c^{2}+\imath\omega b}}\) is the wave number. We do so for practical relevance (e.g., location of contrast agents such as microbubbles on a homogeneous background) and for expected
better identifiability as compared to a general function \(\eta\) (although counterexamples to uniqueness still exist cf., e.g., [3, 28], for the Helmholtz equation as opposed to the Laplace equation). Typically, \(D\) will not necessarily be connected but consist of a union of connected components \(D=\bigcup_{\ell=1}^{m}D_{\ell}\) that we will call inclusions or objects for obvious reasons.
Moreover, throughout this paper we assume the sound speed \(c\) to be known and constant. For results (in the time domain formulation (1)) on simultaneous identification of space dependent functions \(c\) and \(\eta\), we refer to [27].
We will consider (3) on a smooth bounded domain \(\Omega\subseteq\mathbb{R}^{d}\), \(d\in\{2,3\}\) with observations on a subset of \(\partial\Omega\) and equip it with a boundary damping condition
\[\partial_{\nu}\hat{p}_{m}+(m\omega\beta+\gamma)\hat{p}_{m}=0\quad\mbox{ on }\partial\Omega \tag{5}\]
with \(\beta,\,\gamma\geq 0\). These are direct translations to frequency domain of zero and first order absorbing boundary conditions in time domain, see, e.g., the review articles [15, 17] and the references therein. Indeed, these boundary attenuation conditions even allow us to skip the interior damping and assume \(\kappa\) to be real valued, as has been shown in [22] in the time domain setting of (1). We will do so by working with a real valued wave number \(\tilde{\kappa}\) in the numerical tests of Section 2.
In the case where the observation manifold is contained in the boundary of the domain \(\Omega\), we can choose between writing the data (2) as Dirichlet trace or, via the impedance condition (5), with \(g_{m}=-(m\omega\beta+\gamma)y_{m}\), as Neumann trace
\[y_{m}=\hat{p}_{m}\quad\mbox{ or }\quad g_{m}=\partial_{\nu}\hat{p}_{m}\quad\mbox{ on }\Sigma,\quad m\in\{2,\ldots,M\}. \tag{6}\]
In our numerical reconstructions we will also consider the practically relevant case of only partial data being available, with \(\Sigma\subseteq\partial\Omega\) being a strict subset. Note that the first line in (4) does not contain the unknown \(D\); observations of the fundamental harmonic \(y_{1}\) or \(g_{1}\) are therefore not expected to carry essential information on \(D\) and are neglected.
## 2 A reconstruction method for piecewise constant \(\eta\) and numerical results
We first of all consider (4) for \(M=2\) and devise a reconstruction method, based on the approach in [28]. While the algorithms described below work in both 2-d and 3-d, we confine the exposition and our numerical experiments to two space dimensions. In our numerical tests we will also study the question of whether taking into account another harmonic \(M=3\) improves the results.
Having computed \(\hat{p}_{1}\) from the first equation in (3) with given excitation \(\hat{h}\), the problem of determining \(\eta\) from the second equation in (3) reduces to an inverse source problem for the Helmholtz equation
\[\triangle u+\tilde{\kappa}^{2}u=\tilde{\kappa}^{2}\eta\,\tilde{f}\quad\mbox{ in }\Omega \tag{7}\]
where \(u=\hat{p}_{2}\), \(\tilde{\kappa}=\frac{2\omega}{c}\), \(\tilde{f}=\frac{1}{4c^{2}}\hat{p}_{1}^{2}\).
In the case of a piecewise constant coefficient as considered here, (7) becomes
\[\triangle u+\tilde{\kappa}^{2}u=\tilde{\kappa}^{2}\,\chi_{D}\,f\quad\text{ in }\Omega. \tag{8}\]
with \(f=\eta_{0}\tilde{f}\). There exists a large body of work on inverse source problems for the Helmholtz equation. Two particular examples for the case of extended sources as related to our setting are [21, 28]. We also point to, e.g., [1, 3, 6, 11, 13] for inverse source problems with multi-frequency data; however, these do not cover the important special case of restricting observations to higher harmonics of a single fundamental frequency.
We here intend to follow the approach from [28]. Like there, as an auxiliary problem, we will consider the Helmholtz equation with point sources
\[\triangle u+\tilde{\kappa}^{2}u=\sum_{k=1}^{n}\lambda_{k}\delta_{S_{k}}\quad \text{ in }\Omega. \tag{9}\]
with \(\delta\) distributions located at points \(S_{k}\), or more generally with a measure \(\mu\in\mathcal{M}(\Omega)=C_{b}(\overline{\Omega})^{*}\) as right hand side
\[\triangle u+\tilde{\kappa}^{2}u=\mu\quad\text{ in }\Omega. \tag{10}\]
The PDEs (8), (9), (10) are equipped with impedance boundary conditions
\[\partial_{\nu}u+\imath\tilde{\kappa}u=0\quad\text{ on }\partial\Omega. \tag{11}\]
Results on well-posedness of the forward problems (7), (11) and (9), (11) can be found, e.g., in [34, Section VIII] and [37, Section 2].
An essential fact connecting (8) and (9) is that for any solution \(w\) of the homogeneous Helmholtz equation \(\triangle w+\tilde{\kappa}^{2}w=0\) on \(\Omega\), from Green's second identity, written in the form
\[\int_{\Omega}\Bigl{(}u\,(\triangle w+\tilde{\kappa}^{2}w)-w\,(\triangle u+ \tilde{\kappa}^{2}u)\Bigr{)}\,dx=\int_{\partial\Omega}\Bigl{(}u\,(\partial_{ \nu}w+\imath\tilde{\kappa}w)-w\,(\partial_{\nu}u+\imath\tilde{\kappa}u) \Bigr{)}\,ds\]
the following relations hold
\[\begin{split}&\int_{\partial\Omega}\partial_{\nu}u\,(\partial_{\nu}w+\imath\tilde{\kappa}w)\,ds\\ &=-\imath\tilde{\kappa}\int_{\partial\Omega}u\,(\partial_{\nu}w+\imath\tilde{\kappa}w)\,ds=\begin{cases}\imath\tilde{\kappa}\int_{D}\tilde{\kappa}^{2}\,f\,w\,dx&\text{ for }(8),\ (11)\\ \imath\tilde{\kappa}\sum_{k=1}^{n}\lambda_{k}w(S_{k})&\text{ for }(9),\ (11).\end{cases}\end{split} \tag{12}\]
Combining this with a mean value identity for the Helmholtz equation
\[\frac{1}{|B_{r}(x_{0})|}\int_{B_{r}(x_{0})}w\,dx=\Gamma(\tfrac{d}{2}+1)\frac{J _{d/2}(\tilde{\kappa}\,r)}{(\tilde{\kappa}\,r/2)^{d/2}}\,w(x_{0}) \tag{13}\]
for any \(r>0\), and \(x_{0}\in\Omega\) such that \(B_{r}(x_{0})\subseteq\Omega\), and \(w\) solving \(\triangle w+\tilde{\kappa}^{2}w=0\) (see, e.g., [30] and the references therein), equivalence of (8), (9) in the case of constant background \(f\) is obtained.
**Lemma 2.1**.: _Assume that \(D\) can be represented as the union of finitely many disjoint discs or balls. Then the flux moments \(\int_{\partial\Omega}\partial_{\nu}u\left(\partial_{\nu}w+i\tilde{\kappa}w\right)ds\) (for \(w\) in the kernel of \(\triangle w+\tilde{\kappa}^{2}\mathrm{id}\)) of \(u\) solving the Helmholtz equation (8), (11) with \(f\equiv\mathrm{const.}\) coincide with the flux moments resulting from finitely many weighted point sources (9), (11)._
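As a quick numerical sanity check of the mean value identity (13) underlying this lemma, one can test it in \(d=2\) against a plane wave, which solves the homogeneous Helmholtz equation exactly. The following sketch (all parameter values are arbitrary illustrative choices, not taken from the experiments below) compares a midpoint-rule disc average with the right hand side of (13):

```python
import numpy as np
from scipy.special import j1

# Plane wave w(x) = exp(i*kappa*d.x) solves the homogeneous Helmholtz equation;
# check the mean value identity (13) for d = 2 on a disc B_r(x0).
kappa, r = 10.0, 0.3
x0 = np.array([0.2, -0.1])
d = np.array([np.cos(0.7), np.sin(0.7)])          # unit propagation direction

# disc average of w over B_r(x0) by a polar midpoint rule
n_rad, n_ang = 400, 400
rho = (np.arange(n_rad) + 0.5) * r / n_rad
phi = (np.arange(n_ang) + 0.5) * 2 * np.pi / n_ang
R, PHI = np.meshgrid(rho, phi, indexing="ij")
pts = x0 + np.stack([R * np.cos(PHI), R * np.sin(PHI)], axis=-1)
avg = np.sum(np.exp(1j * kappa * pts @ d) * R) \
      * (r / n_rad) * (2 * np.pi / n_ang) / (np.pi * r ** 2)

# right-hand side of (13): Gamma(2) * J_1(kappa*r)/(kappa*r/2) * w(x0)
rhs = 2 * j1(kappa * r) / (kappa * r) * np.exp(1j * kappa * x0 @ d)
print(abs(avg - rhs))   # small; only the quadrature error remains
```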
The method from [28] uses a Pade approximation scheme (see [18], which was inspired by [5]) for recovering point sources in the Laplace equation and a fixed point scheme to extend this for finding point sources in the Helmholtz equation (9). This is proven to converge in [28, Theorem 1] for sufficiently small wave numbers \(\tilde{\kappa}\), and the numerical experiments there show that it works exceedingly well for \(\tilde{\kappa}\leq 1\). However, in ultrasonics, \(\tilde{\kappa}\) is large. Transition from the Laplace point source problem to the Helmholtz point source problem therefore does not seem to be feasible in that situation. However, transition from the Helmholtz point source problem (9) to the Helmholtz inclusion problem (8) is still justified by Lemma 2.1, in the case of circular or spherical inclusions and a constant background \(f\).
In place of the Pade approximation algorithm in [28], we employ the primal-dual active point PDAP algorithm from [8, 37], which we provide here, for the convenience of the reader. It uses the forward operator \(F:\mathcal{M}(\Omega)\to L^{2}(\Sigma)\), \(\mu\mapsto\partial_{\nu}u|_{\Sigma}\), 1 where \(u\) solves (10), (11) and its Banach space adjoint \(F^{*}\).
Footnote 1: \(L^{2}(\Sigma)\) regularity of the flux (in spite of the low \(W^{1,q}(\Omega)\), \(q<\frac{d}{d-1}\) regularity of \(u\)) is obtained by bootstrapping from the homogeneous impedance conditions in case of \(\Sigma\subseteq\partial\Omega\); otherwise, the source domain needs to be assumed to have positive distance from \(\Sigma\) in order to be able to invoke interior elliptic regularity.
**Algorithm PDAP:**
For \(i=1,2,3,\ldots\)
1. Compute \(\xi^{i}:=F^{*}(F\mu^{i}-g)\); determine \(\hat{S}^{i}\in\mathrm{argmax}_{x\in\Omega}|\xi^{i}(x)|\)
2. Set \((S^{i}_{1},\ldots,S^{i}_{n}):=\mathrm{supp}(\mu^{i})\cup\{\hat{S}^{i}\}\);
3. Compute a minimizer \(\vec{\lambda}^{i}\in\mathbb{R}^{n}\) of \(j(\vec{\lambda}):=\|F\sum_{k=1}^{n}\lambda_{k}\delta_{S^{i}_{k}}-g\|^{2}\)
4. Set \(\mu^{i+1}=\sum_{k=1}^{n}\lambda^{i}_{k}\delta_{S^{i}_{k}}\)
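For concreteness, a minimal Python sketch of this iteration is given below, starting from the empty measure. It simplifies the algorithm in two respects: the maximisation over \(\Omega\) is replaced by a search over a fixed candidate grid, and the weights are refitted by an ordinary least squares solve; all function and variable names are ours, not part of [8, 37].

```python
import numpy as np

def pdap(F_cols, g, n_iter=8):
    """
    Sketch of PDAP on a finite candidate grid.
    F_cols : (n_obs, n_grid) complex array; column x holds the (discretised)
             boundary flux generated by a unit point source at grid point x.
    g      : measured flux data, shape (n_obs,).
    Returns the active grid indices and the fitted real weights lambda_k.
    """
    support, weights = [], np.zeros(0)
    for _ in range(n_iter):
        residual = F_cols[:, support] @ weights - g         # F mu^i - g
        xi = F_cols.conj().T @ residual                      # xi^i = F^*(F mu^i - g)
        support = sorted(set(support) | {int(np.argmax(np.abs(xi)))})
        # step 3: least-squares fit of real weights on the enlarged support
        A = np.vstack([F_cols[:, support].real, F_cols[:, support].imag])
        b = np.concatenate([g.real, g.imag])
        weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return support, weights
```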
Combining this with the other elements from the method in [28], we arrive at the following scheme in case of constant background.
**Algorithm 0:**
Given boundary flux \(g=g_{D}=\sum_{\ell=1}^{m}g_{D_{\ell}}\) arising from the \(m\) unknown objects \(D_{\ell}\) (each of which is the union of \(n_{\ell}\) discs) with constant background \(f\).
1. Identify \(n=\sum_{\ell=1}^{m}n_{\ell}\geq m\) equivalent point sources \(S_{k}\) and weights \(\lambda_{k}\) according to Lemma 2.1 using Algorithm PDAP. This also yields a decomposition \(g=g_{D}=\sum_{k=1}^{n}g_{pts_{k}}\) of the given data;
2. Determine the radii of equivalent discs from the weights \(\lambda_{k}\) via the mean value property (13). Merge these discs into \(m\) objects: two discs belong to the same object if their intersection is nonempty. Assigning discs, and therewith equivalent point sources, to objects, \(g_{pts_{k}}\to g_{pts_{\ell,j}}\) for \(k\in\{1,\ldots,n\}\), \(\ell\in\{1,\ldots,m\}\), \(j\in\{1,\ldots,n_{\ell}\}\), also yields a decomposition of the given data: \(g=g_{D}=\sum_{\ell=1}^{m}g_{\ell}\), where \(g_{\ell}=\sum_{j=1}^{n_{\ell}}g_{pts_{\ell,j}}\).
3. For each object \(D_{\ell}\), \(\ell\in\{1,\ldots,m\}\), separately, determine the object boundary parametrised by a curve \(q_{\ell}\) from moment matching (12) of data \(g_{\ell}\), using a Newton iteration;
As a starting value for each curve \(q_{\ell}\) in (3.) we use the disc with the centroid of the union of discs belonging to the \(\ell\)-th object as a center and the radius corresponding to the sum of weights within the \(\ell\)-th object via (13). Alternatively to (3.), one could use algorithms from computational geometry for determining the boundary of a union of discs, see, e.g., [14, 16] and the citing literature.
In case of variable background \(f\) as relevant here, cf. (4), and/or a set \(D\) that is not a finite union of discs, the representation by equivalent discs is not exact and therefore the decomposition of the data according to objects is not valid any more. We therefore replace (3.) by a simultaneous Newton based matching of the flux data \(g\) (not of its moments) to the flux data computed from forward simulations according to the collection of parametrised object boundaries. We can still regard the discs obtained by (2.) as good starting guesses for Newton's method and thus proceed as follows.
**Algorithm 1:**
given boundary flux \(g=g_{D}=\sum_{\ell=1}^{m}g_{D_{\ell}}\) arising from the \(m\) unknown objects \(D_{\ell}\)
1. Identify \(n=\sum_{\ell=1}^{m}n_{\ell}\geq m\) approximately equivalent point sources \(S_{k}\) and weights \(\lambda_{k}\) by Algorithm PDAP;
2. Determine disc radii from weights \(\lambda_{k}\) via the mean value property (13). Merge discs to \(m\) objects: two discs belong to the same object if their intersection is nonempty;
3. For all objects \(D_{\ell}\), \(\ell\in\{1,\ldots,m\}\), simultaneously, determine the object boundaries parametrised by curves \(q_{\ell}\) by matching the combined observational data (6), using a Newton iteration.
The choice of a starting value for \(q_{\ell}\) in (3.) is the same as in Algorithm 0, namely a disc with center determined as centroid of all discs pertaining to the \(\ell\)-th object and radius determined by using the sum of weights in (13).
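To illustrate the post-processing in steps (1.)-(2.), the following sketch converts point source weights into disc radii and merges overlapping discs into objects. It is written for \(d=2\) and a real constant background \(f\), and assumes \(\tilde{\kappa}r\) stays below the first zero of \(J_{0}\), so that the weight-radius relation obtained from (12)-(13), \(\lambda=2\pi f\tilde{\kappa}rJ_{1}(\tilde{\kappa}r)\), is monotone in \(r\); all names are illustrative.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

def radius_from_weight(lam, f, kappa):
    """Invert lam = 2*pi*f*kappa*r*J_1(kappa*r) (d = 2, constant background f)
    for r on the first monotone branch, kappa*r < first zero of J_0 (~2.405)."""
    func = lambda r: 2 * np.pi * f * kappa * r * j1(kappa * r) - lam
    return brentq(func, 1e-9, 2.40 / kappa)      # assumes lam is attainable on this interval

def merge_discs(centers, radii):
    """Group discs into objects: same object iff they (transitively) overlap."""
    n = len(radii)
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(np.asarray(centers[i]) - np.asarray(centers[j])) < radii[i] + radii[j]:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    groups = {}
    for k, lab in enumerate(labels):
        groups.setdefault(lab, []).append(k)
    return list(groups.values())                 # lists of disc indices per object
```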
### Reconstructions
Our forward solvers for (7), (11) (in the special cases (8), (9) of (7)) rely on the fact that with the fundamental solution to the Helmholtz equation \(\mathcal{G}(x)=\frac{\imath}{4}H^{(1)}_{0}(\tilde{\kappa}|x|)\) in two space dimensions, the solution to
\[\triangle u^{\mathbb{R}^{2}}+\tilde{\kappa}^{2}u^{\mathbb{R}^{2}}=f\quad\text { in }\mathbb{R}^{2}\]
can be determined by convolution \(u^{\mathbb{R}^{2}}=\mathcal{G}*f\). It thus remains to solve the homogeneous boundary value problem
\[\triangle u^{\mathrm{d}}+\tilde{\kappa}^{2}u^{\mathrm{d}}=0\quad\text{ in }\Omega\,,\quad\partial_{\nu}u^{\mathrm{d}}+\imath\tilde{\kappa}u^{\mathrm{d}}=g\quad\text{ on }\partial\Omega\]
with \(g=-\partial_{\nu}u^{\mathbb{R}^{2}}-\imath\tilde{\kappa}u^{\mathbb{R}^{2}}\), which we do by the integral equation approach described in [12, Sections 3.1, 3.4], that easily extends to the case of impedance boundary conditions. The solution to (7), (11) is then obtained as \(u=u^{\mathbb{R}^{2}}+u^{\mathrm{d}}\). We point to the fact that solving the Helmholtz equation with large wave numbers is a challenging task and a highly active field of research, see, e.g., [31, 33, 36] and the references therein. Since our emphasis lies on a proof of concept for parameter identification, we did not implement any of these high frequency solvers here.
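A crude quadrature version of the convolution step (sufficient for a proof of concept, not a high-frequency solver) might look as follows; note that whether \(u^{\mathbb{R}^{2}}\) or \(-u^{\mathbb{R}^{2}}\) solves the equation depends on the sign convention adopted for the fundamental solution.

```python
import numpy as np
from scipy.special import hankel1

def free_space_field(kappa, src_pts, src_vals, cell_area, eval_pts):
    """
    Evaluate u(x) = (G * f)(x) by a midpoint rule, with
    G(x) = (i/4) * H_0^(1)(kappa * |x|)  as in the text.
    src_pts  : (n_src, 2) quadrature points covering supp(f)
    src_vals : values of f at src_pts
    eval_pts : (n_eval, 2) points where u is wanted
    """
    u = np.empty(len(eval_pts), dtype=complex)
    for k, x in enumerate(eval_pts):
        dist = np.linalg.norm(src_pts - x, axis=1)
        dist = np.maximum(dist, 1e-12)        # crude guard against the log singularity
        u[k] = np.sum(0.25j * hankel1(0, kappa * dist) * src_vals) * cell_area
    return u
```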
In all our reconstructions it is apparent that the point source reconstruction algorithm from [8, 37] combined with the equivalent discs approximation - that is, steps (1.) and (2.) in Algorithm 1 - provides an extremely good initial guess of the curves to be recovered. This is essential for the convergence of Newton's method in view of the high nonlinearity of the shape identification problem.
Using the third harmonic \(M=3\): The reconstructions in Figure 1 are obtained by following the steps of Algorithm 1 at wave number \(\tilde{\kappa}=10\) and then carrying out another Newton step with data from the third harmonic at \(\tilde{\kappa}=15\), either: (d) sequentially, using the result from \(\tilde{\kappa}=10\) as a starting value, or (e) applying Newton's method simultaneously to \(\tilde{\kappa}=10\) and \(\tilde{\kappa}=15\).
The numerical results indicate that the additional information obtained from the next (\(m=3\)) harmonic does not yield much improvement. This is due to the lower - by two to three orders of magnitude - intensity of the signal at that higher frequency and seems to confirm the experimental evidence and common practice of skipping higher than second harmonics.
Reconstructions from partial data: In Figures 2, 3 we show reconstructions from partial data. The quality appears to decrease only slightly with decreasing amount of data, until at a certain point (between 30 and 40 per cent of the full angle) the algorithm partially breaks down and fails to find one of the objects completely. The ability of an inclusion to remain reconstructible from a small amount of data is related to its associated weight \(\lambda_{k}\) according to (13) (using the object's average radius). In Figures 2
Figure 1: Reconstruction of three (top row) or two (bottom row) inclusions from full data: (a) point sources step (1.) of Algorithm 1; (b) equivalent disks step (2.) of Algorithm 1; (c) Newton with second harmonic; (d) Newton with third harmonic; (e) Newton with second and third harmonic
Figure 2: Reconstruction of three inclusions from partial data; top row: equivalent point sources and disks; bottom row: boundary curves from Newtonβs method
and 3 these weights are: 0.0725 for the circle, 0.0692 for the cardioid and 0.0515 for the ellipse. Also, the position relative to the measurement boundary clearly plays a role.
It may seem that simple completion of data from the measurement subarc to the entire boundary should give similar results, for example by using a Fourier series expansion. However, this analytic continuation step comes at a price. If we have \(N\) Fourier modes over an arc of length \(\alpha\), then this analytic continuation results from solving a system with a matrix \(P(N,\alpha)\), the conditioning of which can be computed analytically. Of course the condition number will increase with both \(N\) and decreasing values of \(\alpha\), \(0<\alpha<2\pi\). In fact this is a well-understood problem, see [38], where it has been shown that the condition number of \(P(N,\alpha)\) is asymptotic (for large \(N\)) to
\[c_{N}\sim e^{\gamma(\alpha)N}\text{ where }\gamma(\alpha)=\log\Bigl{(}\frac{ \sqrt{2}+\sqrt{1+\cos\alpha}}{\sqrt{2}-\sqrt{1+\cos\alpha}}\Bigr{)}. \tag{14}\]
This has been used in several inverse problems, see, e.g., [19, 32].
However, in our situation the reconstructions are performing much better than the above pessimistic estimate would suggest. This is due to the fact that our reconstruction does not rely on extending the boundary data but rather on directly applying our method to the restricted flux \(g=\partial_{\nu}\hat{p}|_{\Sigma}\). The additional information that the PDE model provides clearly contributes to this improvement, which is also reflected in the condition number of the Jacobian in Newton's method versus the theoretical prediction for data completion from [38]. This can be seen in Table 1.
Figure 3: Reconstruction of two inclusions from partial data; top row: equivalent point sources and disks; bottom row: boundary curves from Newtonβs method
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\frac{\alpha}{2\pi}\) & cond(J) & \(c_{N}\)[38] \\ \hline
0.75 & 29.6 & 2.8e+2 \\
0.5 & 64.9 & 2.3e+5 \\
0.4 & 73.7 & 1.8e+07 \\
0.3 & 1733.8 & 2.6e+08 \\ \hline \end{tabular}
\end{table}
Table 1: Condition numbers of Jacobian in Newtonβs method for a single inclusion using 9 basis functions versus condition number formula (14) for data completion with \(N=9\)
Figure 4: Reconstruction of two inclusions at different distances; top row: equivalent point sources and disks; bottom row: boundary curves from Newtonβs method
Varying distance between objects: Figure 4 shows reconstructions of two inclusions at several distances, given by the difference \(\theta\) of the centroid phases (in polar coordinates). The given data appears to allow distinction of objects very well, as long as they do not overlap. However, decreasing distance between them compromises the quality of reconstructions.
Varying distance to boundary: Figure 5 shows reconstructions of one inclusion at several distances from the boundary. The relative error after application of Newton's method at \(\tilde{\kappa}=10\) was (a) 0.2963 (b) 0.1931 (c) 0.1434. Also visually, it is obvious that closeness to the observation surface significantly improves the reconstruction quality.
Reconstruction from noisy data:Finally we study the impact of noise in the measurements on the reconstruction quality, see Figure 6 for the case of three objects. Regularisation is mainly achieved by the sparsity prior incorporated via the PDAP point source identification and this actually makes the process very stable with respect to perturbations in the measurements up to noise levels of about three per cent. Using partial data clearly impacts this robustness and thus only works with noise levels of two per cent or less.
Figure 5: Reconstruction of one inclusion at different distances from the boundary; top row: equivalent point sources and disks; bottom row: boundary curves from Newtonβs method
## 3 Convergence of Newton's method
Similarly to the time domain setting [27], one can prove that the all-at-once formulation of this inverse problem (even with arbitrary \(M\in\mathbb{N}\cup\{\infty\}\)) satisfies a range invariance condition, which, together with a linearised uniqueness result, allows us to prove convergence of a regularised frozen Newton method.
We write the inverse problem of reconstructing \(\eta\) in (3) as a nonlinear operator equation
\[\begin{split}& G_{m}(\eta,\hat{p})=h_{m}\quad m\in\{1,\ldots,M\} \text{ with }\hat{p}=(\hat{p}_{1},\ldots,\hat{p}_{M})\\ & C_{m}\hat{p}_{m}=y_{m}\quad m\in\{1,\ldots,M\}\end{split} \tag{15}\]
for the model operators \(G_{m}:Q\times V^{M}\to W\) (including the case \(M=\infty\) with \(\ell^{2}(\mathbb{N};V)\) in place of \(V^{M}\)), \(h_{1}=\hat{h}\), \(h_{m}=0\) for \(m\geq 2\) and the observation operators \(C_{m}\in L(V,Y)\). Here \(Q\), \(V\), and \(Y\) denote the parameter, state, and data spaces, and \(W\) the image space of the model operators.
The components \(G_{m}\) of the model part of the forward operator have the particular structure
\[G_{m}(\eta,\hat{p})=D_{m}\hat{p}_{m}+B_{m}(\hat{p})\eta \tag{16}\]
with \(D_{m}\in L(V,W)\) and \(B_{m}(\hat{p})\in L(Q,W)\) linear for each \(\hat{p}\in V^{M}\) but depending nonlinearly on \(\hat{p}\). (This is different from [24], where we considered a sum of linear operators \(B_{m}(\hat{p})\) in a single model equation rather than a system of model equations.) More concretely, in
Figure 6: Reconstruction of three inclusions from noisy data; top row: equivalent point sources and disks; bottom row: boundary curves from Newtonβs method
our setting with the operators defined by
\[\begin{split}&\mathcal{A}u=\Big{(}v\mapsto\int_{\Omega}\nabla u\cdot \nabla v\,dx+\gamma\int_{\partial\Omega}u\,v\,ds\Big{)}\\ &\mathcal{D}u=\Big{(}v\mapsto b\int_{\Omega}\nabla u\cdot\nabla v \,dx+(c^{2}\beta+b\gamma)\int_{\partial\Omega}u\,v\,ds\Big{)}\,,\\ &\mathcal{M}u=\Big{(}v\mapsto\int_{\Omega}u\,v\,dx+\beta b\int_{ \partial\Omega}u\,v\,ds\Big{)}\end{split} \tag{17}\]
we take
\[\begin{split}& D_{m}=-m^{2}\omega^{2}\mathcal{M}+c^{2}\mathcal{A}+ \imath\,m\omega\,\mathcal{D},\qquad C_{m}=\mathrm{tr}_{\Sigma},\\ & B_{m}(\hat{p})(x)=m^{2}\omega^{2}\tilde{B}_{m}(\hat{p}(x))\\ &\tilde{B}_{m}(\vec{c})=\begin{cases}\frac{1}{4}\sum_{\ell=1}^{m- 1}c_{\ell}c_{m-\ell}+\frac{1}{2}\sum_{k=m+2:2}^{\infty}\overline{c_{\frac{k-m} {2}}}c_{\frac{k+m}{2}}&M=\infty\text{ (a)}\\ \frac{1}{4}\sum_{\ell=1}^{m-1}c_{\ell}c_{m-\ell}&M\in\mathbb{N}\cup\{\infty\} \text{ (b)}\end{cases}&\vec{c}\in\mathbb{C}^{M}\\ &Q=L^{2}(\Omega),\quad V=H^{2}(\Omega),\quad W=L^{2}(\Omega),\quad Y=L^{2}( \Omega),\end{split} \tag{18}\]
where the first sum over \(\ell\) is empty in the case \(m=1\). Here \(B_{m}(\hat{p}):L^{2}(\Omega)\to L^{2}(\Omega)\) is to be understood as a multiplication operator and boundedness \(B_{m}(\hat{p})\in L(L^{2}(\Omega),L^{2}(\Omega))\) follows from the fact that \(H^{2}(\Omega)\) is continuously embedded in \(L^{\infty}(\Omega)\) and therefore the functions \(p_{m}\) as well as their products are in \(L^{\infty}(\Omega)\). Differentiability of the \(B_{m}\) mappings follows from their polynomial (in fact, quadratic) structure in our particular setting.
We consider both the case (a) that gives full equivalence to the Westervelt equation (1) and the simplifications (b) used in our numerical tests.
The abstract structure (15), (16) together with an extension of the dependency of \(\eta\) to \(\vec{\eta}=(\eta_{m})_{m\in\{1,\dots,M\}}\subseteq Q^{M}\) allows one to more generally establish the differential range invariance relation
\[\text{for all }(\vec{\eta},\hat{p})\in U\,\exists r(\vec{\eta},\hat{p})\in Q^{M }\times V^{M}\,:\ F(\vec{\eta},\hat{p})-F(\vec{\eta}_{0},\hat{p}_{0})=F^{ \prime}(\vec{\eta}_{0},\hat{p}_{0})r(\vec{\eta},\hat{p}), \tag{19}\]
for
\[\begin{split}& F=(G_{m},C_{m})_{m\in\{1,\dots,M\}},\quad\hat{p}=( \hat{p}_{m})_{m\in\{1,\dots,M\}},\\ & r(\vec{\eta},\hat{p})=(r_{m}^{\vec{\eta}}(\vec{\eta},\hat{p}), r_{m}^{\hat{p}}(\vec{\eta},\hat{p}))_{m\in\{1,\dots,M\}}.\end{split} \tag{20}\]
Indeed, with
\[G_{m}^{\prime}(\vec{\eta}_{0},\hat{p}_{0})(\underline{d\eta},\underline{d\hat {p}})=D_{m}\underline{d\hat{p}}_{m}+\sum_{n=1}^{M}\frac{\partial B_{m}}{ \partial\hat{p}_{n}}(\hat{p}_{0})\underline{d\hat{p}}_{n}\,\vec{\eta}_{0,m}+B _{m}(\hat{p}_{0})\underline{d\eta}_{m}\]
and
\[\begin{split}& r_{m}^{\hat{p}}(\vec{\eta},\hat{p})=\hat{p}_{m}-\hat{p}_{0,m}\\ & r_{m}^{\vec{\eta}}(\vec{\eta},\hat{p})=\eta_{m}-\eta_{0,m}+B_{m}(\hat{p}_{0})^{-1}\Big{(}\big{(}B_{m}(\hat{p})-B_{m}(\hat{p}_{0})\big{)}\eta_{m}-\sum_{n=1}^{M}\frac{\partial B_{m}}{\partial\hat{p}_{n}}(\hat{p}_{0})(\hat{p}_{n}-\hat{p}_{0,n})\,\eta_{0,m}\Big{)}\end{split}\]
we obtain (19). To this end, we assume that \(\hat{p}_{0}\) is chosen such that for each \(m\in\{1,\ldots,M\}\), the operator \(B_{m}(\hat{p}_{0}):Q\to W\) is an isomorphism. Moreover, \(r\) is close to the identity in the sense that
\[\|r(\vec{\eta},\hat{p})-((\vec{\eta},\hat{p})-(\vec{\eta}_{0},\hat{p}_{0}))\|_{Q^{M}\times V^{M}}\] \[=\Big{\|}B_{m}(\hat{p}_{0})^{-1}\Big{(}\big{(}B_{m}(\hat{p})-B_{m}(\hat{p}_{0})\big{)}(\eta_{m}-\eta_{0,m})\] \[\qquad\qquad\qquad\qquad+\big{(}B_{m}(\hat{p})-B_{m}(\hat{p}_{0})-\sum_{n=1}^{M}\frac{\partial B_{m}}{\partial\hat{p}_{n}}(\hat{p}_{0})(\hat{p}_{n}-\hat{p}_{0,n})\big{)}\,\eta_{0,m}\Big{)}\Big{\|}_{Q^{M}\times V^{M}}\] \[\leq C\|\hat{p}-\hat{p}_{0}\|_{V^{M}}\big{(}\|\eta-\eta_{0}\|_{Q^{M}}+\|\hat{p}-\hat{p}_{0}\|_{V^{M}}\big{)},\]
which implies
\[\|r(\vec{\eta},\hat{p})-((\vec{\eta},\hat{p})-(\vec{\eta}_{0},\hat{p}_{0}))\|_ {Q^{M}\times V^{M}}\leq c\|(\vec{\eta},\hat{p})-(\vec{\eta}_{0},\hat{p}_{0})\| _{Q^{M}\times V^{M}} \tag{21}\]
for \(c\in(0,1)\) in a sufficiently small neighborhood \(U\) of \((\vec{\eta}_{0},\hat{p}_{0})\).
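For completeness, (19) can be verified by direct substitution; the observation equations are linear in \(\hat{p}\) and satisfy it trivially, while for the model part

\[\begin{split} G_{m}^{\prime}(\vec{\eta}_{0},\hat{p}_{0})\bigl(r^{\vec{\eta}},r^{\hat{p}}\bigr)&=D_{m}(\hat{p}_{m}-\hat{p}_{0,m})+\sum_{n=1}^{M}\frac{\partial B_{m}}{\partial\hat{p}_{n}}(\hat{p}_{0})(\hat{p}_{n}-\hat{p}_{0,n})\,\eta_{0,m}+B_{m}(\hat{p}_{0})\,r_{m}^{\vec{\eta}}(\vec{\eta},\hat{p})\\ &=D_{m}(\hat{p}_{m}-\hat{p}_{0,m})+B_{m}(\hat{p})\eta_{m}-B_{m}(\hat{p}_{0})\eta_{0,m}=G_{m}(\vec{\eta},\hat{p})-G_{m}(\vec{\eta}_{0},\hat{p}_{0}),\end{split}\]

where the second equality follows by inserting the definition of \(r_{m}^{\vec{\eta}}\), upon which the two sums over \(n\) cancel.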
Since the artificial dependence of \(\vec{\eta}\) on \(m\) counteracts uniqueness, we penalise it by a term \(P\vec{\eta}\in Q^{M}\)
\[(P\vec{\eta})_{m}=\eta_{m}-\frac{\sum_{n=1}^{M}n^{-2}\,\eta_{n}}{\sum_{n=1}^{ M}n^{-2}},\]
where the weights \(n^{-2}\) in the \(\ell^{2}\) projection are introduced in order to enforce convergence in case \(M=\infty\). Note that the \(n\) independent target \((\eta,\eta,\ldots)\) is clearly not contained in \(\ell^{2}(\mathbb{N};Q)\) but in the weighted space \(\ell^{2}_{w}(\mathbb{N};Q)\) with weights \(w_{n}=n^{-2}\). We here first of all aim at finding a general \(\eta\in Q=L^{2}(\Omega)\). In case we want to reconstruct a piecewise constant coefficient \(\eta\), we can achieve this by, e.g., adding a total variation term to \(P\).
This penalisation together with condition (19) allows us to rewrite the inverse problem (15) as a combination of an ill-posed linear and a well-posed nonlinear problem
\[\begin{split}& F^{\prime}(\vec{\eta}_{0},\hat{p}_{0})\hat{r}=h-F( \vec{\eta}_{0},\hat{p}_{0})\\ & r(\vec{\eta},\hat{p})=\hat{r}\\ & P\vec{\eta}=0\end{split} \tag{22}\]
for the unknowns \((\hat{r},\vec{\eta},\hat{p})\in Q^{M}\times Q^{M}\times V^{M}\) (or in \(\ell^{2}_{w}(\mathbb{N};Q)\times\ell^{2}_{w}(\mathbb{N};Q)\times\ell^{2}( \mathbb{N};V)\) in case \(M=\infty\)). Here \((\vec{\eta}_{0},\hat{p}_{0})\in Q^{M}\times V^{M}\) is fixed and in (19) \(U\subseteq Q^{M}\times V^{M}\) is a neighborhood of \((\vec{\eta}_{0},\hat{p}_{0})\).
The following regularised frozen Newton method can then be shown to converge.
\[x_{n+1}^{\delta}\in\text{argmin}_{x\in U}\|F^{\prime}(x_{0})(x-x_{n}^{\delta} )+F(x_{n}^{\delta})-h^{\delta}\|_{Y}^{2}+\alpha_{n}\|\vec{\eta}-\vec{\eta}_{0 }\|_{Q^{M}}^{2}+\|P\vec{\eta}\|_{Q^{M}}^{2}. \tag{23}\]
where \(h^{\delta}\approx h\) is the noisy data, \(\alpha_{n}\to 0\) as \(n\to\infty\), (e.g. \(\alpha_{n}=\alpha_{0}q^{n}\) for some \(q\in(0,1)\)), and we abbreviate \(x=(\vec{\eta},\hat{p})\).
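After discretisation, one step of (23) amounts to a linear (Tikhonov-type) solve with the frozen Jacobian. A minimal finite-dimensional sketch is given below, purely for illustration: the discretisation, the names, and the simple geometric decay of \(\alpha_{n}\) are our choices; in practice the iteration would be stopped at \(n_{*}(\delta)\) as in Theorem 3.2 below.

```python
import numpy as np

def frozen_newton(F, J0, h, x0, n_eta, P, alpha0=1.0, q_factor=0.7, n_steps=20):
    """
    Sketch of the regularised frozen Newton iteration (23), discretised.
    F     : callable, F(x) -> stacked residual of model and observation equations
    J0    : Jacobian of F at x0, kept "frozen" throughout; shape (n_res, n_x)
    h     : data vector (h^delta)
    x0    : starting point; its first n_eta entries are the eta-unknowns
    P     : discretised penalty operator acting on the eta-block
    """
    E = np.zeros((n_eta, x0.size)); E[:, :n_eta] = np.eye(n_eta)   # selects the eta-block
    PE = P @ E
    x, alpha = x0.copy(), alpha0
    for _ in range(n_steps):
        res = F(x) - h
        # minimise ||J0 dx + res||^2 + alpha ||E(x + dx - x0)||^2 + ||PE (x + dx)||^2 over dx
        A = J0.conj().T @ J0 + alpha * (E.T @ E) + PE.conj().T @ PE
        b = -J0.conj().T @ res - alpha * (E.T @ (E @ (x - x0))) - PE.conj().T @ (PE @ x)
        dx = np.linalg.solve(A + 1e-12 * np.eye(x0.size), b)       # tiny ridge for safety
        x = x + dx
        alpha *= q_factor                                          # alpha_n = alpha_0 * q^n
    return x
```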
An essential ingredient of the convergence proof is verification of the fact that the intersection of the nullspaces of \(F^{\prime}(x_{0})\) and of \(P\) is trivial [24, Theorem 2]. For this purpose, we require the following geometric condition on the observation manifold \(\Sigma\)
\[\text{for all }j\in\mathbb{N}\;:\quad\left(\sum_{k\in K^{j}}b_{k}\varphi_{j}^{k}(x )=0\;\text{ for all }x\in\Sigma\right)\implies\left(b_{k}=0\text{ for all }k\in K^{j}\right) \tag{24}\]
in terms of the eigensystem \((\varphi_{j}^{k},\lambda_{j})_{j\in\mathbb{N},k\in K^{j}}\) of the selfadjoint positive operator \(\mathcal{A}\) defined by (17). This means that the eigenfunctions should remain linearly independent when restricted to the observation manifold; the condition trivially holds in 1-d, where \(\#K^{j}=1\) for all \(j\in\mathbb{N}\).
We will assume that the operators \(\mathcal{A}\), \(\mathcal{D}\), \(\mathcal{M}\) have the same \(H\)-orthonormal eigenfunctions \(\varphi_{j}^{k}\) with the eigenvalues \(\mu_{j}\) of \(\mathcal{M}\) and \(\rho_{j}\) of \(\mathcal{D}\) satisfying
\[\left(\frac{\rho_{j}}{\lambda_{j}}=\frac{\rho_{\ell}}{\lambda_{\ell}}\text{ and }\frac{\mu_{j}}{\lambda_{j}^{2}}=\frac{\mu_{\ell}}{\lambda_{\ell}^{2}}\right)\; \Rightarrow\;j=\ell. \tag{25}\]
This is the case, e.g., if \(\beta=0\), where \(\mathcal{M}\) is the identity and \(\mathcal{D}=b\mathcal{A}\), \(H=L^{2}(\Omega)\). Condition (25) is needed to prove the following linear independence result that will play a role in the linearized uniqueness result Theorem 3.1. Its proof can be found in the appendix.
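To spell this example out: for \(\beta=0\) we have \(\mu_{j}=1\) and \(\rho_{j}=b\lambda_{j}\), so the premise of (25) reduces to

\[\Bigl(\frac{\rho_{j}}{\lambda_{j}}=\frac{\rho_{\ell}}{\lambda_{\ell}}\ \text{ and }\ \frac{\mu_{j}}{\lambda_{j}^{2}}=\frac{\mu_{\ell}}{\lambda_{\ell}^{2}}\Bigr)\ \Longleftrightarrow\ \Bigl(b=b\ \text{ and }\ \lambda_{j}^{2}=\lambda_{\ell}^{2}\Bigr),\]

which can only hold for \(j=\ell\), since the \(\lambda_{j}\geq 0\) enumerate the distinct eigenvalues of the positive operator \(\mathcal{A}\); hence (25) is indeed satisfied in this case.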
**Lemma 3.1**.: _Let \((\mu_{j})_{j\in\mathbb{N}}\), \((\lambda_{j})_{j\in\mathbb{N}}\), \((\rho_{j})_{j\in\mathbb{N}}\subseteq\mathbb{C}\) be sequences of distinct numbers such that (25) holds. Then_
\[\left(\text{ for all }m\in\mathbb{N}\,:\;0=\sum_{j=1}^{\infty}\frac{m^{2}}{-m^{2}\omega^{2}\mu_{j}+c^{2}\lambda_{j}+\imath m\omega\rho_{j}}c_{j}\right)\implies\left(c_{j}=0\text{ for all }j\in\mathbb{N}\right)\]
We are now in the position to prove uniqueness for the linearized problem, which, besides being of interest on its own, is also an essential ingredient to the convergence proof of Newton's method.
**Theorem 3.1**.: _For (20), (16), (18), with \(M=\infty\) and \(\eta\) independent of \(m\) (that is, \(P\vec{\eta}=0\)), \(\hat{p}_{0}\) chosen such that \(\hat{p}_{0,m}(x)=\phi(x)\,\psi_{m}\) for some \(\phi\in H^{2}(\Omega)\), \(\phi\neq 0\) almost everywhere in \(\Omega\), \(\psi_{m}\in\mathbb{C}\), \(f_{m}:=\tilde{B}_{m}(\vec{\psi})\in\mathbb{C}\setminus\{0\}\) for all \(m\in\mathbb{N}\). Then under the linear independence condition (24), with \(\mathcal{A}\), \(\mathcal{D}\), \(\mathcal{M}\) simultaneously diagonalisable with (25), the linearisation \(F^{\prime}(\eta_{0},\hat{p}_{0})\) at \(\eta_{0}=0\) is injective._
Proof.: Using the operators \(\mathcal{A}\), \(\mathcal{D}\), \(\mathcal{M}\) as in (17) we can write the condition \(F^{\prime}(\eta_{0},\hat{p}_{0})(\underline{d\eta},\underline{d\underline{p}})\) for \(\eta_{0}=0\), \(\hat{p}_{0,m}(x)=\phi(x)\,\psi_{m}\), \(f_{m}=\tilde{B}_{m}(\vec{\psi})\) as
\[[-m^{2}\omega^{2}\mathcal{M}+c^{2}\mathcal{A}+\imath\,m\omega\,\mathcal{D}] \underline{d\underline{p}}_{m}+m^{2}\omega^{2}f_{m}\phi\,\underline{d\eta}=0,\text{ and }\text{tr}_{\Sigma}\underline{d\underline{p}}_{m}=0\text{ for all }m\in\mathbb{N}. \tag{26}\]
Using the diagonalisation by means of the eigenfunctions \((\varphi_{j}^{k})_{j\in\mathbb{N},k\in K^{j}}\), by taking the \(H\) inner product of (26) with \(\varphi_{j}^{k}\), relying on \(\underline{dp}_{m}=\sum_{j=1}^{\infty}\sum_{k\in K^{j}}\langle\underline{dp}_{m },\varphi_{j}^{k}\rangle_{H}\varphi_{j}^{k}\) and setting \(a_{j}^{k}=\langle\underline{d\eta}\,\phi,\varphi_{j}^{k}\rangle_{H}\) we can rewrite this as
\[m^{2}\omega^{2}f_{m}\sum_{j=1}^{\infty}\frac{1}{-m^{2}\omega^{2}\mu_{j}+c^{2} \lambda_{j}+\imath\,m\omega\,\rho_{j}}\sum_{k\in K^{j}}a_{j}^{k}\varphi_{j}^{k }(x_{0})=0\text{ for all }x_{0}\in\Sigma,\ m\in\mathbb{N}.\]
Since the entries \(\frac{1}{-m^{2}\omega^{2}\mu_{j}+c^{2}\lambda_{j}+\imath m\omega\rho_{j}}\) define an infinite generalised Hankel-type matrix, which is nonsingular by Lemma 3.1, this implies
\[0=\sum_{k\in K^{j}}a_{j}^{k}\,\varphi_{j}^{k}(x_{0})\quad\text{ for all }j\in\mathbb{N},\ x_{0}\in\Sigma.\]
Using (24), we conclude \(a_{j}^{k}=0\) for all \(j\in\mathbb{N}\), \(k\in K^{j}\) and thus \(\underline{d\eta}=0\). Returning to the first equation in (26) with \(\underline{d\eta}=0\), due to uniqueness of the solution to this linear homogeneous PDE with homogeneous boundary conditions, we also have \(\underline{dp}=0\).
According to [24, Theorem 2], we obtain the following
**Theorem 3.2**.: _Let \(x^{\dagger}=(\vec{\eta}^{\dagger},\hat{p}^{\dagger})\) be a solution to (22) and let for the noise level \(\delta\geq\|y^{\delta}-y\|_{Y}\) the stopping index \(n_{*}=n_{*}(\delta)\) be chosen such that_
\[n_{*}(\delta)\to\infty,\quad\delta\sum_{j=0}^{n_{*}(\delta)-1}c^{j}\alpha_{n_{*}(\delta)-j-1}^{-1/2}\to 0\qquad\text{ as }\delta\to 0 \tag{27}\]
_with \(c\) as in (21). Moreover, let the assumptions of Theorem 3.1 be satisfied with \(B_{m}(\hat{p}_{0})\) as in (18) being an isomorphism from \(L^{2}(\Omega)\) into itself for all \(m\in\mathbb{N}\)._
_Then there exists \(\rho>0\) sufficiently small such that for \(x_{0}\in\mathcal{B}_{\rho}(x^{\dagger})\subseteq U\) the iterates \((x_{n}^{\delta})_{n\in\{1,\dots,n_{*}(\delta)\}}\) are well-defined by (23), remain in \(\mathcal{B}_{\rho}(x^{\dagger})\) and converge in \(Q^{M}\times V^{M}\), \(\|x_{n_{*}(\delta)}^{\delta}-x^{\dagger}\|_{Q^{M}\times V^{M}}\to 0\) as \(\delta\to 0\). In the noise free case \(\delta=0\), \(n_{*}(\delta)=\infty\) we have \(\|x_{n}-x^{\dagger}\|_{Q^{M}\times V^{M}}\to 0\) as \(n\to\infty\)._
## Appendix
Proof of Lemma 3.1:
With \(w_{j}(t):=-\mu_{j}\omega^{2}+c^{2}\lambda_{j}\,t^{2}+\imath\omega\rho_{j}\,t\), the premise of the lemma reads as
\[\text{ for all }t\in\{\frac{1}{m}\,:\,m\in\mathbb{N}\}\,:\quad 0=\sum_{j=1}^{ \infty}\tfrac{1}{w_{j}(t)}\,c_{j}.\]
Thus, after multiplication with \(\prod_{\ell\in\mathbb{N}}w_{\ell}(t)\) and with \(W^{\vec{c}}(t):=\sum_{j=1}^{\infty}\prod_{\ell\neq j}w_{\ell}(t)\,c_{j}\) we get
\[\text{ for all }t\in\{\frac{1}{m}\,:\,m\in\mathbb{N}\}\,:\quad 0=W^{\vec{c}}(t).\]
Since \(W^{\vec{c}}\) is analytic, this implies that \(W^{\vec{c}}\equiv 0\) on all of \(\mathbb{C}\). Choosing \(t_{k\pm}=-\frac{\imath\omega}{2c^{2}}\frac{\rho_{k}\mp\sqrt{\rho_{k}^{2}-\mu_{k}}}{\lambda_{k}}\) as the roots of \(w_{k}\), we obtain
\[\text{for all }k\in\mathbb{N}\,:\quad\prod_{\ell\neq k}w_{\ell}(t_{k\pm})\,c_{k}=0 \tag{28}\]
A small side calculation yields that under condition (25), the roots of the functions \(w_{j}\) are distinct for different \(j\):
\[\Big{(}t_{j+}=t_{\ell+}\text{ and }t_{j-}=t_{\ell-}\Big{)}\ \Rightarrow\ \Big{(}t_{j+}+t_{j-}=t_{\ell+}+t_{\ell-}\text{ and }t_{j+}\,t_{j-}=t_{\ell+}\,t_{\ell-}\Big{)}\] \[\Rightarrow\ \Big{(}\frac{\rho_{j}}{\lambda_{j}}=\frac{\rho_{\ell}}{\lambda_{\ell}}\text{ and }\frac{\mu_{j}}{\lambda_{j}^{2}}=\frac{\mu_{\ell}}{\lambda_{\ell}^{2}}\Big{)},\]
which by (25) implies \(j=\ell\).
Hence, \(\prod_{\ell\neq k}w_{\ell}(t_{k\pm})\neq 0\) and from (28) we conclude that \(c_{k}=0\) for all \(k\in\mathbb{N}\).
## Acknowledgment
The work of the first author was supported by the Austrian Science Fund through grant P36318; the second author was supported in part by the National Science Foundation through award DMS -2111020.
|
2304.02664 | Quantum Coding Transitions in the Presence of Boundary Dissipation | We investigate phase transitions in the encoding of quantum information in a
quantum many-body system due to the competing effects of unitary scrambling and
boundary dissipation. Specifically, we study the fate of quantum information in
a one-dimensional qudit chain, subject to local unitary quantum circuit
evolution in the presence of depolarizating noise at the boundary. If the qudit
chain initially contains a finite amount of locally-accessible quantum
information, unitary evolution in the presence of boundary dissipation allows
this information to remain partially protected when the dissipation is
sufficiently weak, and up to time-scales growing linearly in system size $L$.
In contrast, for strong enough dissipation, this information is completely lost
to the dissipative environment. We analytically investigate this ``quantum
coding transition" by considering dynamics involving Haar-random, local unitary
gates, and confirm our predictions in numerical simulations of Clifford quantum
circuits. We demonstrate that scrambling the quantum information in the qudit
chain with a unitary circuit of depth $ \mathcal{O}(\log L)$ before the onset
of dissipation can perfectly protect the information until late times. The
nature of the coding transition changes when the dynamics extend for times much
longer than $L$. We further show that at weak dissipation, it is possible to
code at a finite rate, i.e. a fraction of the many-body Hilbert space of the
qudit chain can be used to encode quantum information. | Izabella Lovas, Utkarsh Agrawal, Sagar Vijay | 2023-04-05T18:00:08Z | http://arxiv.org/abs/2304.02664v1 | # Quantum Coding Transitions in the Presence of Boundary Dissipation
###### Abstract
We investigate phase transitions in the encoding of quantum information in a quantum many-body system due to the competing effects of unitary scrambling and boundary dissipation. Specifically, we study the fate of quantum information in a one-dimensional qudit chain, subject to local unitary quantum circuit evolution in the presence of depolarizing noise at the boundary. If the qudit chain initially contains a finite amount of locally-accessible quantum information, unitary evolution in the presence of boundary dissipation allows this information to remain partially protected when the dissipation is sufficiently weak, and up to time-scales growing linearly in system size \(L\). In contrast, for strong enough dissipation, this information is completely lost to the dissipative environment. We analytically investigate this "quantum coding transition" by considering dynamics involving Haar-random, local unitary gates, and confirm our predictions in numerical simulations of Clifford quantum circuits. We demonstrate that scrambling the quantum information in the qudit chain with a unitary circuit of depth \(\mathcal{O}(\log L)\) before the onset of dissipation can perfectly protect the information until late times. The nature of the coding transition changes when the dynamics extend for times much longer than \(L\). We further show that at weak dissipation, it is possible to code at a finite rate, i.e. a fraction of the many-body Hilbert space of the qudit chain can be used to encode quantum information.
## I Introduction
The chaotic unitary evolution of an isolated quantum system will spread initially localized quantum information over non-local degrees of freedom, a process known as quantum information scrambling [1; 2; 3; 4]. This delocalization of information aids in protecting quantum information against external interference from local noise, which is present in any real physical system. Studying the robustness of quantum information in the presence of both unitary scrambling and dissipation is important both to understand new dynamical regimes of quantum many-body dynamics and, from a practical standpoint, to design quantum codes and to appropriately interpret studies of quantum many-body evolution in near-term quantum simulators. While dissipative dynamical phases of matter have been the subject of intense research for decades [5; 6; 7; 8; 9], addressing the dynamics of quantum information in this context opens a new perspective. Similarly to how understanding the spreading of information has led to a deeper understanding of quantum chaos and thermalization [2; 10; 11; 12; 13; 14; 15; 16], studying quantum information in dissipative systems can shed light on the structure of (possibly new) dynamical regimes of quantum matter.
Besides its fundamental relevance for the dissipative dynamics of generic quantum systems, the fate of quantum information in the presence of unitary scrambling and destructive local noise or measurements has been explored in the context of quantum information theory, leading to the development of the theory of quantum error correcting codes [17; 18; 19; 20]. A key result in the theory of quantum error correction (QEC) is the threshold theorem, stating that for error rates below some threshold, one can reverse the effects of the errors by applying additional quantum gates [21; 22; 23]. In other words, it is possible to correct errors faster than they are created.
The threshold theorem is essential in designing fault-tolerant quantum computers. Applying additional gates, trying to preserve the code-space against the noise, allows one to perform logical operations for long times with high precision. Such an active error correction is feasible in artificial quantum systems with a "digital" architecture, in which real-time measurements and unitary evolution can be executed over targeted degrees of freedom. However, in analog quantum simulators realized, e.g., with ultracold atoms, the options for active error correction are more restricted and costly due to the limited control over the dynamics. This provides a strong motivation for exploring whether the system's intrinsic dynamics alone can protect information, by hiding it from destructive local noise. Despite this fundamental relevance, the conditions for obtaining such a robust, self-generated coding dynamics in a generic quantum system without any degree of external control, are still not fully explored.
Recently, the robustness of a self-generated code space against a special class of local perturbations, namely local projective measurements, has been investigated. These studies revealed a phase transition driven by the measurement rate, such that the code space can store an extensive amount of information, as long as the rate of measurements remains below a finite threshold [24; 25; 26; 27; 28]. However, this result cannot be generalized to more generic noise channels. For example, a quantum many-body system evolving in the presence of random erasures occurring in the bulk with finite rate destroys all quantum information in constant time [29; 30], and active error-correction during the dynamics is required to protect the information beyond this time scale. Understanding the conditions (if any) that unitary evolution and local errors have to satisfy to guarantee the emergence of a robust, self-generated code space, without the need for an active error correction during the dynamics, is an open question of utmost relevance.
### Summary of Results
With these motivations, we take a step towards understanding the dynamics of quantum information under generic scrambling and local noise, by exploring the fate of quantum information, subjected to the competing effects of boundary dissipation and unitary spreading in a one-dimensional chaotic quantum system. For concreteness and simplicity, we focus on the setup sketched in Fig. 1a, which shows a single timestep of a random quantum circuit with a depolarization channel acting at the left boundary. We note that it is known both in classical coding theory [31; 32; 33; 34; 35] and in the quantum case [36; 37; 38] that random unitary dynamics provides an optimal encoding of information. We entangle one external reference qubit \(R\) near the boundary into a Bell pair, thereby encoding one qubit of quantum information initially localized near the dissipative boundary. We then ask what happens to this information as the system is subject to noisy dynamics, up to time scales \(T\) scaling linearly with the system size \(L\), such that \(T/L\) is fixed. Importantly, by taking the thermodynamic limit \(L\to\infty\) and the long time limit \(T\to\infty\) simultaneously, with \(T/L\) constant, we probe the system on time scales where it is expected to thermalize.
Interestingly, we find that this quantum information can remain robust even at these long times, giving rise to a rich dynamical phase diagram as a function of dissipation strength \(p\) and the ratio \(T/L\), as displayed in Fig. 1b. The left panel shows the case where the noisy dynamics starts immediately after the encoding of the quantum information locally, near the leftmost boundary. We find a dissipation-induced quantum coding phase transition, separating a region where the coherent information remains partially protected and gets delocalized within the system, and a phase where all of this information leaked to the environment. The nature of the coding transition, however, depends on the ratio \(T/L\). For \(T/L\lesssim 1\) the right boundary is effectively decoupled from the dynamics of information and we observe a continuous second-order phase transition (blue line). For even larger ratios \(T/L\), the right boundary plays a crucial role and gives rise to a first order phase transition (red). We also demonstrate that adding a unitary "pre-scrambling" step after the local encoding, before the onset of the dissipative dynamics, can efficiently increase the robustness of the encoded information. In particular, as shown in the right panel of Fig. 1b, a pre-scrambling time \(t_{scr}\) scaling logarithmically with system size, \(t_{scr}\sim\log L\), ensures that quantum information remains perfectly protected for small enough dissipation strengths \(p\), up to time scales \(T\sim L/p\).
We gain a detailed understanding of these different types of coding transitions, by mapping the dynamics of quantum information in a circuit with Haar-random unitary gates and boundary dissipation to the statistical mechanics of a two-dimensional lattice magnet. This mapping, which has been extensively employed to understand unitary circuit quantum dynamics as well as dynamics with projective measurements (see Ref. [39; 40] for a review), allows us to obtain analytical predictions, as well as instructive numerical results. While the entanglement measures of interest which diagnose the quantum coding transition require taking a formal replica limit of this lattice magnet (akin to a limit arising when considering "quenched" disorder), we focus our attention on understanding this lattice magnet away from the replica limit (akin to studying an "annealed" disorder-average). Specifically, we focus on the "annealed" disorder average of the second Renyi mutual information between the output of the circuit \(A\), and the reference qubit \(R\). In this limit, the circuit with the boundary depolarization can be mapped to the statistical mechanics of an Ising magnet, in which a single Ising domain wall experiences an attractive/repulsive potential at one boundary of the two-dimensional system, whose strength is tuned by the dissipation strength. In this language, the coding transition at times \(T/L\lesssim 1\) can be understood as a second order pinning/depinning transition of the Ising domain wall at the noisy boundary; we provide conjectures as to the true nature of this transition in the replica limit. At later times \(T/L>1/p\), the right boundary gives rise to a different, first order transition by "absorbing" the Ising domain wall. Insights gained from this classical statistical picture are confirmed by large scale numerical simulations performed on Clifford quantum random circuits.
Finally, we show that the coding transition for \(T/L>1/p\) can also be understood as a transition arising from the monogamy of entanglement. In this case, as the system of \(L\) qubits becomes entangled with a growing number of environmental degrees of freedom, scaling as \(pT\), eventually it can no longer stay simultaneously entangled with the reference qubit, and all information leaks to the environment. We conclude with the interesting scenario of encoding an extensive amount of information in the system. Specifically, we show that a similar coding transition persists when we entangle an extensive number of reference qubits into Bell pairs with the qubits of the system. In particular, we identify two threshold values for the dissipation strength \(p\), \(p_{th,1}\) and \(p_{th,2}\), separating three regions according to the behavior of the information density. The information density is perfectly protected in the system for \(p<p_{th,1}\), while it starts to leak into the environment above this threshold. A finite density of information still survives in the region \(p_{th,1}<p<p_{th,2}\), until eventually reaching zero at the upper threshold \(p_{th,2}\).
The rest of the paper is organized as follows. In Sec. II, we introduce the mapping between the coherent quantum information in random circuits and the properties of an Ising domain wall experiencing a repulsive/attractive
boundary on the left and an absorbing boundary on the right, by considering the "annealed" second Renyi mutual information between the circuit output and the encoded information. We derive the random walk model in Sec. II.1. We then show in Sec. II.2 that different phases on either side of the coding transition can be understood by inspecting the weighted trajectories of the Ising domain wall in this statistical mechanical model.
We turn to the detailed discussion of the second order coding transition in the regime \(T\lesssim L/p\), induced by the dissipative boundary alone without the interference of the clean boundary, in Sec. III. We first rely on the random walk model to gain a qualitative understanding of the phase transition, and discuss the classical pinning/depinning transition of the Ising domain wall in Sec. III.1. Building on these insights, we verify the presence of the quantum coding transition and study its properties numerically in Sec. III.2, by performing large scale numerical simulations on Clifford quantum circuits, before discussing the nature of this transition in more detail in Sec. III.3. To end the section, in Sec. III.4 we comment on increasing the robustness of the encoded information by applying a unitary pre-scrambling before the onset of dissipative dynamics. We show that a pre-scrambling time \(t_{\rm scr}\) scaling logarithmically with system size provides perfect protection for the coherent information for weak enough dissipation \(p\), up to time scales \(T/L\sim O(1)\).
We turn to the first order coding transition, induced by the interplay of the dissipative left boundary and the clean right boundary at times \(T\gtrsim L/p\), in Sec. IV. First, we discuss that this phase transition can be understood in the statistical mechanical framework as the absorption of the entanglement domain wall by the right boundary and is driven by the monogamy of entanglement as the system becomes entangled with a growing number of environmental qubits. We present and analyze the numerical results obtained from Clifford circuit simulations in Sec. IV.1, and find good agreement with the predictions of the statistical mechanics of the Ising lattice magnet. We argue that this coding transition is of first order, and discuss its scaling properties in Sec. IV.2. Finally, Sec V serves as an outlook to the case of encoding an extensive amount of information into the system. Here we consider entangling a finite density of reference qubits with the system, and find a monogamy induced coding transition at late times \(T\gtrsim L/p\), similar to the one observed for a single bit of quantum information. Here we find three phases, with the information perfectly protected for \(p<p_{th,1}\), a finite density of information surviving for \(p_{th,1}<p<p_{th,2}\), and the density reaching zero above \(p_{th,2}\). We conclude by summarizing our results, and discussing open questions in Sec. VI.
###### Contents
* I Introduction
* I.1 Summary of Results
* II Dissipation in Quantum Circuit Evolution
* II.1 Statistical Mechanics of Random Unitary Evolution and Dissipation
* II.2 Boundary Dissipation and the Encoding of Quantum Information
* III Quantum Coding Transition
* III.1 Annealed Mutual Information, and the Pinning of an Ising Domain Wall
* III.2 Numerical Study
Figure 1: (a) Quantum information is encoded in a qudit chain which subsequently evolves with a βbrickworkβ array of Haar-random, two-site unitary gates and dissipation at the boundary. One timestep of these dynamics corresponds to two layers of unitary gates along with depolarizing noise at the boundary, as shown schematically in (a). A phase diagram for the coding transition is shown in (b). The blue critical line is the coding transition when the total number of timesteps \(T\lesssim L/p\), see Section III. This transition also corresponds to the de-pinning transition of an Ising domain wall in a statistical mechanical description of quantum information in these dynamics, as derived in the main text (Section II). This transition occurs when \(R\) is localized near the boundary and is not scrambled across the system. The red critical line is the coding transition as the system approches thermalization (see Section IV), across which the system becomes maximally entangled with the environment resulting in information loss.
* III.3 The Replica Limit and the Nature of the Phase Transition
* III.4 Perfect information protection using scrambling
* IV Coding transition on the approach to thermalization
* IV.1 Numerical Study
* IV.2 Nature of the Phase Transition
* V Encoding at a Finite Rate
* VI Summary and Discussion
* Acknowledgments
* A Lattice Partition Function and the Annealed Phase Transition
* B Alternative random circuit protocols
## II Dissipation in quantum circuit evolution
### Statistical Mechanics of Random Unitary Evolution and Dissipation
Past studies of random local unitary evolution [39; 40], evolution with projective measurements [24; 25; 26] and with dissipation [29; 30; 41; 42; 43; 44] have uncovered a wealth of universal structures governing the dynamics of information-theoretic quantities such as the Renyi entanglement entropy. Averaging over an ensemble of unitary gates in this setting gives rise to an emergent classical statistical mechanics of quantum entanglement, which must be understood in an appropriate "replica limit" in order to recover the behavior of the information-theoretic quantities of interest. A qualitatively-accurate understanding of the behavior of quantum entanglement in chaotic unitary dynamics, and in dynamics with projective measurements can still be obtained even without taking the replica limit [45; 46; 47; 13], though these approaches often fail to capture quantitative, universal properties characterizing distinct regimes of quantum many-body evolution (e.g. of the volume-law-entangled phase of infrequently monitored quantum many-body evolution [48]) or of critical points (e.g. separating different phases of monitored quantum dynamics).
Here, we consider the evolution of qudits under random, local unitary gates and boundary dissipation. Averaging over the ensemble of unitary gates, in the calculation of the evolving _purity_ of subsystem, leads to an emergent statistical mechanics of an Ising magnet. We present the various ingredients that the unitary evolution and dissipation correspond to in this setting, before using these ingredients extensively in subsequent sections to understand the stability of encoded quantum information under this evolution.
We focus our attention on a one-dimensional chain of qudits, with Hilbert space dimension \(q\) at each lattice site. The dissipation acts on the boundary qudit, and is described by the depolarizing channel \(\Phi\) acting on the density matrix \(\rho\) of this qudit as
\[\Phi(\rho)=\left(1-p\right)\rho+p\cdot\frac{\mathds{1}_{q\times q}}{q} \tag{1}\]
with \(p\in[0,1]\) parametrizing the "strength" of the dissipation. For future convenience, we choose to rewrite the depolarizing channel as an _operator_\(\hat{\Phi}\) which acts within a Hilbert space of dimension \(q^{2}\). The operator \(\hat{\Phi}\) takes the form
\[\hat{\Phi}=\sum_{i,j=1}^{q}\left[\left(1-p\right)\ket{i,j}\bra{i,j}+\frac{p}{ q}\ket{i,i}\bra{j,j}\right] \tag{2}\]
where \(\ket{i}\) for \(i\in\{1,\ldots,q\}\) denotes an orthonormal basis of states of a single qudit1.
Footnote 1: The qudit density matrix \(\rho\equiv\sum_{i,j}\rho_{ij}\ket{i}\bra{j}\) is a _state_\(\ket{\rho}\equiv\sum_{i,j}\rho_{ij}\ket{i,j}\) in the doubled Hilbert space on which the operator \(\hat{\Phi}\) acts as \(\hat{\Phi}\ket{\rho}=\left(1-p\right)\ket{\rho}+\left(p/q\right)\sum_{i}\ket{ i,i}\).
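As a small illustration (with an arbitrarily chosen \(q\) and \(p\)), the following snippet checks that the operator \(\hat{\Phi}\) of Eq. (2), acting on the vectorized density matrix, reproduces the channel (1):

```python
import numpy as np

q, p = 3, 0.25
rng = np.random.default_rng(1)

# random single-qudit density matrix
M = rng.normal(size=(q, q)) + 1j * rng.normal(size=(q, q))
rho = M @ M.conj().T
rho /= np.trace(rho)

# channel form, Eq. (1)
Phi_rho = (1 - p) * rho + p * np.eye(q) / q

# operator form, Eq. (2), acting on |rho> = sum_ij rho_ij |i,j>
vec_id = np.eye(q).reshape(-1)                            # sum_i |i,i>
Phi_hat = (1 - p) * np.eye(q * q) + (p / q) * np.outer(vec_id, vec_id)
vec_out = Phi_hat @ rho.reshape(-1)

print(np.linalg.norm(vec_out - Phi_rho.reshape(-1)))      # ~0 up to round-off
```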
Apart from the dissipation, the remaining qudits will be chosen to evolve according to two-site unitary gates, chosen from the uniform (Haar) measure for the unitary
Figure 2: _Top._ Performing a Haar-average over the unitary gates in the calculation of the purity of the evolving state gives rise to an Ising magnet, whose partition function may be written as the product of transfer matrices, given in Eq. (5), (6) and (7). _Bottom._ A coarse-grained description of this Ising magnet involves a single Ising domain wall (green)in the presence of a boundary magnetic field (shaded red). The boundary conditions at the bottom of the Ising magnet, which are fixed by the initial state of the quantum system, are not shown.
group \(\mathrm{U}(q^{2})\). Given such a two-qudit unitary gate \(U\), we note that the average over the Haar measure of \(U\otimes U^{*}\otimes U\otimes U^{*}\) - a quantity which will naturally appear in subsequent sections - is given by
\[V\equiv \langle U\otimes U^{*}\otimes U\otimes U^{*}\rangle\] \[=\sum_{\sigma,\tau\in\{\uparrow,\downarrow\}}\mathrm{wg}_{2}(\sigma\tau)\ket{\tau,\tau}\bra{\sigma,\sigma} \tag{3}\]
where \(\langle\cdots\rangle\) denotes the Haar average, the Weingarten function is given as \(\mathrm{wg}_{2}(+)=\frac{1}{q^{4}-1}\) and \(\mathrm{wg}_{2}(-)=\frac{-1}{q^{2}(q^{4}-1)}\), and the states \(\ket{\uparrow}\) and \(\ket{\downarrow}\) are defined as \(\ket{\uparrow}\equiv\sum_{i,j=1}^{q}\ket{i,i,j,j}\) and \(\ket{\downarrow}\equiv\sum_{i,j=1}^{q}\ket{i,j,j,i}\) so that
\[\bra{\sigma}=(q^{2}-q)\delta_{\sigma,\tau}+q. \tag{4}\]
From these expressions, it is clear that
\[V\ket{\uparrow\uparrow}=\ket{\uparrow\uparrow},\qquad V\ket{\downarrow\downarrow}=\ket{\downarrow\downarrow} \tag{5}\] \[V\ket{\uparrow\downarrow}=V\ket{\downarrow\uparrow}=\frac{q}{q^{2}+1}\left(\ket{\downarrow\downarrow}+\ket{\uparrow\uparrow}\right) \tag{6}\]
From Eq. (2), the operator \(D\equiv\hat{\Phi}\otimes\hat{\Phi}\) acts on these states as
\[D\ket{\uparrow}=\ket{\uparrow},\qquad D\ket{\downarrow}=(1-p)^{2}\ket{\downarrow}+\frac{p(2-p)}{q}\ket{\uparrow} \tag{7}\]
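The action of the dissipation on the un-normalized pairing states, Eq. (7), can be verified directly by brute force. The sketch below (plain numpy; the variable names and the chosen values of \(q\) and \(p\) are ours) constructs \(\ket{\uparrow}\), \(\ket{\downarrow}\) and \(D=\hat{\Phi}\otimes\hat{\Phi}\) for a small qudit dimension and confirms the two rules numerically.

```python
import numpy as np
from functools import reduce

q, p = 3, 0.37
e = np.eye(q)
ket = lambda *idx: reduce(np.kron, (e[i] for i in idx))   # basis state |i1,i2,i3,i4>

up   = sum(ket(i, i, j, j) for i in range(q) for j in range(q))   # |up>   = sum_{i,j} |i,i,j,j>
down = sum(ket(i, j, j, i) for i in range(q) for j in range(q))   # |down> = sum_{i,j} |i,j,j,i>

omega = np.eye(q).reshape(q * q)
Phi = (1 - p) * np.eye(q * q) + (p / q) * np.outer(omega, omega)  # Eq. (2)
D = np.kron(Phi, Phi)

print(np.allclose(D @ up, up))                                          # True
print(np.allclose(D @ down, (1 - p)**2 * down + p * (2 - p) / q * up))  # True
```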
### Boundary Dissipation and the Encoding of Quantum Information
We now consider a qudit chain consisting of \(L\) qudits, into which quantum information has been encoded. We may imagine that this quantum information is represented by physical reference qudits which are maximally-entangled with the one-dimensional system. This system subsequently evolves according to a unitary circuit composed of Haar-random unitary gates in a "brickwork" array, together with dissipation which acts near the boundary. We first focus on the case where only a single qudit is encoded in the one-dimensional system, and with dissipation acting periodically in time on the boundary qudit, as shown schematically in Fig. 2a. A single timestep of this evolution corresponds to the application of two layers of two-site unitary gates, followed by the depolarizing channel (1) on the boundary qudit.
To diagnose whether this qudit of encoded information can remain in the system, even as the boundary dissipation continues to act, we study the behavior of the bipartite mutual information between the reference qudit (\(R\)), and the system (\(A\)) at a time \(t\); this mutual information is defined as
\[I_{A,R}(t)=S_{A}(t)+S_{R}(t)-S_{A\cup R}(t) \tag{8}\]
where \(S_{A}\equiv-\operatorname{Tr}\left[\rho_{A}(t)\log_{q}\,\rho_{A}(t)\right]\) is the von Neumann entanglement entropy of subsystem \(A\) at a time \(t\). We note that \(I_{A,R}(t)\) is related to the coherent information present in the system. If \(I_{A,R}=2\) the entangled qudit can be perfectly recovered by applying a recovery operation to the system _alone_ whereas for \(I_{A,R}=0\) the information has leaked to the environment, that is, \(I_{E,R}=2\)[49; 50].
The mutual information (8), averaged over realizations of the random unitary evolution, thus diagnoses whether quantum information remains in the system, even in the presence of boundary dissipation. Instead of considering the Haar-average of the mutual information, we turn our attention to the "annealed" average of the second Renyi mutual information between \(A\) and \(R\), defined as
\[I_{A,R}^{(\mathrm{ann})}(t)\equiv\log_{q}\left\langle q^{\,I_{A,R}^{(2)}(t)}\right\rangle \tag{9}\]
where \(I_{A,R}^{(2)}(t)=S_{A}^{(2)}(t)+S_{R}^{(2)}(t)-S_{A\cup R}^{(2)}(t)\), with the second Renyi entropy defined as \(S_{A}^{(2)}\equiv-\log_{q}\operatorname{Tr}\rho_{A}(t)^{2}\), and \(\langle\cdots\rangle\) denotes the Haar average over the unitary gates in the circuit. The behavior of the annealed mutual information (9) can provide a qualitative understanding of the quantity of interest (8), as discussed at the beginning of this section, though quantitative details may differ, as we will later clarify.
We proceed to calculate the annealed mutual information (9). We initialize the qudits in a product state, except for the qudit at a site \(x_{0}\) away from the boundary which is maximally entangled with the reference qudit. As the system evolves in the presence of unitary gates and dissipation, it is evident that the purity of the reference qudit remains unchanged, \(\operatorname{Tr}\rho_{R}(t)^{2}=q^{-1}\) for all times \(t\). Furthermore, the calculation of \(\langle\operatorname{Tr}\rho_{A}(t)^{2}\rangle\) and \(\langle\operatorname{Tr}\rho_{A\cup R}(t)^{2}\rangle\) involves performing a Haar average of four copies of the quantum circuit. Following the discussion in the previous section, it is thus clear that these Haar-averaged purities may be written as partition functions for an Ising magnet of finite extent in the vertical direction - corresponding to the time direction in the quantum circuit - and with horizontal extent fixed by the number of qudits in the system. The Ising spins live on the links of a square lattice, and are acted upon by the transfer matrices \(V\) and \(D\), as given in Eqs. (5), (6) and (7), depending on whether a Haar-random unitary gate or dissipation is applied at a particular point in spacetime in the quantum circuit, respectively. The full transfer matrix is shown schematically in Fig. 2b.
The boundary conditions for the Ising partition sum, at the \((i)\) bottom and \((ii)\) top boundaries are determined by \((i)\) the initial state of the qudit chain along with the location of the reference qudit, and \((ii)\) the subsystem over which the purity is being calculated, respectively. First, fixing Ising spins at the top boundary to be in the \(\downarrow\) state corresponds to keeping the corresponding qudit within the region for which the purity is being calculated. As a result, the spins at the top boundary are all fixed in the \(\downarrow\) state for both the calculation of \(\langle\operatorname{Tr}\,\rho_{A}(t)^{2}\rangle\) and \(\langle\operatorname{Tr}\,\rho_{A\cup R}(t)^{2}\rangle\), as shown in Fig. 2b. These two purities thus only differ in their bottom boundary conditions.
Here, the boundary spins are allowed to freely fluctuate, with the exception of the spin corresponding to the qudit at a distance \(x_{0}\) away from the boundary; the state of this Ising spin determines whether the reference qudit is included in the subsystem whose purity is being computed. More precisely, this spin is fixed in the \(\uparrow\) or \(\downarrow\) state in the calculation of the quantities \(\langle\mathrm{Tr}\ \rho_{A}(t)^{2}\rangle\) and \(\langle\mathrm{Tr}\ \rho_{A\cup R}(t)^{2}\rangle\), respectively.
It is convenient to evaluate these partition functions by contracting the transfer matrix from the top boundary condition, i.e. "backwards" in time with respect to the arrow of time in the quantum circuit. Let \(Z(t)\) denote the partition sum obtained by evolving the all-down state of the Ising spins for \(t\) timesteps by repeatedly applying the row transfer matrix corresponding to a single timestep of the dynamics. The partition sum \(Z(t)\) describes a single, directed Ising domain wall, which can only be created/annihilated at the boundary of the system. This can be seen as follows. First, starting with the all-down state, the dissipation (7) can flip the boundary Ising spin from \(\ket{\downarrow}\) to \(\ket{\uparrow}\), thus creating an Ising domain wall near the boundary. The effect of the Haar-random unitary gates (5), (6) in the bulk of the quantum circuit is to simply move the domain wall. Notably, Eq. (5) implies that the Haar-random gates cannot create or annihilate Ising domain walls in the bulk of the system, though gates acting near the boundary can annihilate the Ising domain wall. Once the state of the boundary spin is \(\ket{\uparrow}\), the dissipation cannot alter this state since \(D\ket{\uparrow}=\ket{\uparrow}\); this is simply a consequence of the fact that the depolarizing channel (1) leaves the maximally-mixed density matrix \(\rho=\mathds{1}_{q\times q}/q\) unchanged.
The partition sum \(Z(t)\) is thus performed over histories of the entanglement domain wall trajectories, which can propagate in the bulk of the system, or be created/annihilated at the boundary. Formally, we write
\[Z(t)=\sum_{x\geq 0}z(x,t) \tag{10}\]
where \(z(x,t)\) is a restricted sum over trajectories of the entanglement domain wall where the domain wall ends up between sites \(x-1\) and \(x\) at time \(t\). In this convention, \(z(0,t)\) corresponds to trajectories where the entanglement domain wall no longer exists at time \(t\), as it has been annihilated at the left interface.
We may now write the Haar-averaged purities as
\[\langle\mathrm{Tr}\ \rho_{A}(t)^{2}\rangle =q^{2}\sum_{y>x_{0}}z(y,t)+q\sum_{y\leq x_{0}}z(y,t) \tag{11}\] \[\langle\mathrm{Tr}\ \rho_{A\cup R}(t)^{2}\rangle =q^{2}\sum_{y\leq x_{0}}z(y,t)+q\sum_{y>x_{0}}z(y,t) \tag{12}\]
This is due to the fact that \(\langle\mathrm{Tr}\ \rho_{A}(t)^{2}\rangle\) involves a sum over trajectories of the entanglement domain wall, with an additional weight \(q^{2}\) given to trajectories which end at a position \(y>x_{0}\) and a weight \(q\) given to trajectories ending at \(y\leq x_{0}\), where \(x_{0}\) is the location of the entangled reference qudit. The opposite weighting scheme applies to \(\langle\mathrm{Tr}\ \rho_{A\cup R}(t)^{2}\rangle\). These additional weights arise due to the fact that, depending on the final position of the entanglement domain wall, the bottom-boundary spin at \(x_{0}\) is contracted with the state \(\ket{\uparrow}\) or \(\ket{\downarrow}\). These overlaps are given in Eq. (4). With these expressions, it is straightforward to see that
\[I_{A,R}^{(\mathrm{ann})}(t)=\log_{q}\left[\frac{q^{2}-q(q-1)P(x_{0},t)}{1+(q-1 )P(x_{0},t)}\right] \tag{13}\]
where
\[P(x_{0},t)\equiv\frac{1}{Z(t)}\sum_{y\geq x_{0}}z(y,t) \tag{14}\]
is the probability that the domain wall ends at a position \(y\geq x_{0}\) at time \(t\).
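A minimal numerical sketch of this construction is given below. It evolves a toy version of the restricted sums \(z(x,t)\) of Eq. (10) and evaluates Eq. (13) from the resulting \(P(x_{0},t)\). The microscopic weights are our simplifying assumptions rather than the exact brickwork transfer matrix: per timestep the domain wall hops left or right with weight \(q/(q^{2}+1)\), while at the boundary it is either created with weight \(p(2-p)/q\) or absent with weight \((1-p)^{2}\), mimicking Eqs. (6) and (7); a trajectory reaching the left boundary is annihilated there.

```python
import numpy as np

def annealed_mutual_info(q, p, L, T, x0):
    """Toy domain-wall partition sum; returns the annealed MI of Eq. (13).
    A simplified square-lattice walk is assumed in place of the brickwork circuit."""
    hop = q / (q**2 + 1)          # weight for the domain wall to move by one site
    stay0 = (1 - p)**2            # boundary spin remains 'down': no domain wall
    create = p * (2 - p) / q      # dissipation creates a domain wall near the boundary
    z = np.zeros(L + 1)           # z[x]: weight of trajectories currently ending at x
    z[0] = 1.0                    # all-down top boundary: no domain wall initially
    for _ in range(T):
        new = np.zeros_like(z)
        new[0] += stay0 * z[0]
        new[1] += create * z[0]
        for x in range(1, L):
            new[x - 1] += hop * z[x]   # hopping toward the boundary (x = 0 annihilates)
            new[x + 1] += hop * z[x]
        new[L - 1] += hop * z[L]       # reflect at the far (noiseless) end
        z = new
    P = z[x0:].sum() / z.sum()         # Eq. (14)
    return np.log((q**2 - q * (q - 1) * P) / (1 + (q - 1) * P)) / np.log(q)  # Eq. (13)

print(annealed_mutual_info(q=2, p=0.05, L=60, T=40, x0=1))   # close to 2: information retained
print(annealed_mutual_info(q=2, p=0.80, L=60, T=40, x0=1))   # close to 0: information lost
```

Sweeping \(p\) in this toy model reproduces the qualitative pinning/de-pinning behavior discussed in the next section, though not the precise location of \(p_{c}\).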
## III Quantum Coding Transition
In this section, we study the behavior of the encoding of quantum information in the system, after evolving the system by the quantum circuit for \(T\) timesteps, for a fixed dissipation strength \(p\). The number of timesteps of the evolution \(T\) can be large so that \(T/L\sim O(1)\) but is taken to be small enough throughout the entirety of this section, so that the left and right ends of the one-dimensional qudit chain are causally disconnected. As \(p\) is increased from zero, we will find a "quantum coding" transition, where information initially encoded in the system is lost to the environment above a threshold \(p=p_{c}\).
### Annealed Mutual Information, and the Pinning of an Ising Domain Wall
First, we investigate the behavior of \(I_{A,R}^{(\mathrm{ann})}\) as the dissipation strength \(p\) is tuned, by studying the Ising lattice magnet that emerges after performing a Haar-average over the unitary gates in the quantum circuit.
As discussed in Sec. II.2, the partition sum \(Z(T)\) describes a single Ising domain wall which can propagate through the bulk of the two-dimensional system, and be created/annihilated at the left boundary of the system. Tuning the dissipation strength, which alters the Ising symmetry-breaking field applied at the boundary, modulates an effective "pinning potential" for the Ising domain wall. This can be clearly seen in the limiting cases when \(p=0\) or \(1\). In the former case, the dissipation is completely absent, and Eq. (5) implies that the all-down state is left invariant by the transfer matrix for the Haar-averaged circuit. Thus, in this limit, there is no Ising domain wall. In contrast, when \(p=1\), the boundary spin is fixed in the \(\ket{\uparrow}\) state, and the domain wall is effectively repelled from the left boundary.
Increasing the dissipation strength can then drive a pinning/de-pinning phase transition for the entanglement domain wall. Similar phase transitions due to the presence of a boundary magnetic field in an Ising magnet have been studied in the literature (see, e.g. Ref. [51; 52; 53]). Equivalently, the temporally-directed nature of the Ising domain wall also suggests these paths may be thought of as the imaginary-time trajectories of a single quantum-mechanical particle on the half-line, which experiences a potential near the boundary, which is tuned by the dissipation strength. \(Z(T)\) is thus an amplitude for this particle to propagate under imaginary time-evolution by this Hamiltonian. In this setting, the particle can undergo a localization transition when the potential is _sufficiently_ attractive [52]. This result is to be contrasted with the well-studied problem of a particle on the full line, with a delta-function potential near the origin, which always forms a bound-state in the potential well as long as the potential is attractive.
The annealed mutual information precisely measures the localization of the Ising domain wall, as is evident from Eq. (13). Deep within a localized phase, where the transverse wandering of the domain wall is governed by a length-scale \(\ell_{\perp}\), the probability \(P(x_{0},T)\sim e^{-x_{0}/\ell_{\perp}}\) for \(\ell_{\perp}\ll x_{0}\), so that \(I_{A,R}^{(\text{ann})}\) is constant in \(T\), deviating from its maximal value of \(2\) by a correction which varies within the localized phase. In contrast, in the delocalized phase, the probability \(P(x_{0},T)\to 1\) as \(T\to\infty\), where the limit is taken keeping the ratio \(T/L=\text{const.}\) fixed.
Properties of this coding transition, as seen by annealed-averaged observables, such as the annealed mutual information, may be obtained by studying the lattice partition function for the Ising domain wall, which we present in Appendix A, due to the technical nature of the calculations involved. From this study, we find that
1. The phase transition occurs at a probability \(p_{c}\) which varies as a function of the on-site Hilbert space dimension \(q\). The behavior of \(p_{c}\) as \(q\) is tuned may be determined by studying the lattice partition function. In the limit \(q\to\infty\), the coding transition is absent. Specifically, we find that \[p_{c}=1-O(q^{-2})\] (15) so that information is always preserved in the system in the limit that the on-site Hilbert space dimension is strictly infinite.
2. Near the phase transition, the annealed mutual information takes the universal scaling form \[I_{A,R}^{(\text{ann})}(T)=T^{-\beta/\nu}F(T^{1/\nu}(p-p_{c}))\] (16) where \(\beta=1/2\) and \(\nu=2\). The function \(F(x)\sim x^{\beta}\) as \(x\to-\infty\). This relation is obtained by determining that in the thermodynamic limit, the annealed mutual information should vanish on approaching the transition as \(I_{A,R}^{(\text{ann})}\sim\ell_{\perp}^{-1}\), where \(\ell_{\perp}\) is the distance of a transverse excursion of the Ising domain wall in the pinned phase. This length scale is shown to diverge as \(\ell_{\perp}\stackrel{{ p\to p_{c}^{-}}}{{\sim}}(p_{c}-p)^{-\beta}\) upon approaching the phase transition.
The above scaling form for the annealed mutual information is in good quantitative agreement with numerical studies, which we perform by directly studying the transfer matrix for the Ising magnet. A numerically-obtained scaling collapse for the annealed mutual information is shown in Fig. 3, which is consistent with Eq. (16).
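For concreteness, the rescaling that underlies such a collapse can be written in a few lines. The sketch below generates synthetic data directly from the scaling form of Eq. (16), using a made-up scaling function and a placeholder value of \(p_{c}\) (neither taken from our results), and verifies that the rescaled curves for different \(T\) fall on top of one another.

```python
import numpy as np

beta, nu = 0.5, 2.0
p_c = 0.6                                   # placeholder value, for illustration only
T_vals = np.array([64.0, 128.0, 256.0, 512.0])
p_vals = np.linspace(0.4, 0.8, 41)
F = lambda x: 1.0 / (1.0 + np.exp(4.0 * x))                  # made-up scaling function

# Synthetic data built from Eq. (16): I_ann[i, j] at depth T_vals[i], strength p_vals[j].
I_ann = T_vals[:, None]**(-beta / nu) * F(T_vals[:, None]**(1 / nu) * (p_vals - p_c))

# Collapse: plotting y against x puts all depths on the single curve F.
x = T_vals[:, None]**(1 / nu) * (p_vals - p_c)
y = T_vals[:, None]**(beta / nu) * I_ann
print(np.allclose(y, F(x)))                                  # True
```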
We expect that the qualitative behaviors presented here hold for the "quenched-averaged" quantities of interest, such as the averaged von Neumann mutual information \(\langle I_{A,R}(t)\rangle\), which truly diagnose the loss of quantum information from the system, as the dynamics proceed. The true nature of the phase transition, however, will be different, as we discuss in Sec. III.3.
### Numerical Study
Having obtained a qualitative understanding of the coding transition by considering the "annealed" Haar average of the Renyi mutual information, we now demonstrate the presence of this transition in numerical studies of quantum circuit evolution in a qubit chain (\(q=2\) on-site Hilbert space dimension). Here, the unitary time evolution of the bulk is governed by Clifford random unitary gates, arranged in a brickwork structure. This setup allows us to simulate the dynamics of large systems for sufficiently long times to study the phase transition introduced above, by relying on the stabilizer formalism. The boundary dissipation is realized as a random erasure channel, acting on the leftmost qubit with probability
Figure 3: Scaling collapse of the annealed mutual information, consistent with the scaling form in Eq. (16). The inset shows the behavior of the annealed mutual information as a function of dissipation strength \(p\), indicating the presence of a coding transition. The exponents \(\beta=1/2\), \(\nu=2\) are determined from properties of the pinning transition of the Ising domain wall. The system size is taken to be large enough that the left and right ends of the qudit chain are causally disconnected.
\(p\) in each time step, by deleting the information stored in the qubit. In the stabilizer formalism, this boundary erasure channel is implemented by deleting all stabilizers acting non-trivially (as a non-identity operator) on the leftmost qubit.
We note that besides the protocol described above, we also considered other forms of boundary dissipation and Clifford scrambling, all giving rise to similar results for the behavior of the mutual information. Specifically, we implemented an alternative dissipation channel, by applying a CNOT gate entangling the boundary qubit with an environmental ancilla qubit that was subsequently traced out from the density matrix. Moreover, we considered protocols with sparse bulk scrambling, where each unitary gate in the brickwork structure is a random Clifford unitary with probability \(p_{U}<1\), but the trivial identity operator with probability \(1-p_{U}\). This scenario allowed us to tune the efficiency of the scrambling through the parameter \(p_{U}\), while keeping the boundary noise fixed, leading to a phase transition similar to the one discussed in the main text. We discuss these alternative protocols in more detail, and present supplementary numerical results in Appendix B.
The Bell pair is encoded in the initial state at the leftmost site, by entangling the boundary qubit with a reference qubit, while the remaining qubits are initialized in a random product state. We run the dissipative dynamics for time \(T\), with system size \(L\) chosen to keep \(T/L<1\) fixed, such that the right boundary of the system is not causally connected to the Bell pair. This setting allows us to detect the coding transition induced by a single boundary, by increasing the evolution time \(T\). Importantly, due to the fixed ratio \(T/L\), the long time limit \(T\to\infty\) and the thermodynamic limit \(L\to\infty\) are performed simultaneously; therefore, we are probing the mutual information on time scales where the system is expected to become thermalized.
The mutual information \(I_{A,R}\) between the output of the dissipative quantum circuit \(A\) and the reference qubit \(R\) is shown in Fig. 4, for different dissipation strengths \(p\) and circuit depths \(T\). These results are consistent with a coding transition tuned by the dissipation strength \(p\), between a phase where the system retains part of the encoded information, and a strongly dissipative phase with all information lost. We note that determining the critical exponents and critical point of this transition from finite time data is numerically challenging. Nevertheless, we attempt to estimate these parameters by noting that the mutual information obeys the finite size scaling \(I_{A,R}\sim T^{-\beta/\nu}\) at the critical dissipation strength \(p_{c}\), while it saturates to a finite value as \(T\to\infty\) for \(p<p_{c}\). Relying on this observation, we identify \(p_{c}\) with the smallest \(p\) where the numerical data are consistent with \(I_{A,R}\) approaching zero algebraically as \(T\to\infty\), yielding the estimate \(p_{c}\approx 0.5\). We then use the critical scaling \(\left.I_{A,R}\right|_{p=p_{c}}\sim T^{-\beta/\nu}\) to fit the ratio \(\beta/\nu\), see Fig. 5a. Finally, we estimate \(\nu\) by requiring a good scaling collapse for the full set of data from Fig. 4. We obtain the critical parameters \(p_{c}=0.5\), \(\beta/\nu=0.34\) and \(\nu=2\), yielding the scaling collapse shown in Fig. 5b. We note, however, that due to the large number of fitting parameters, the critical exponents extracted this way carry a considerable uncertainty. We leave the more thorough investigation of critical properties for future work.
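The fit of \(\beta/\nu\) described above amounts to a simple power-law fit at the estimated critical point. A sketch of this step is shown below; the mutual-information values are placeholders standing in for the Clifford-circuit data, not numbers quoted from our simulations.

```python
import numpy as np

# Placeholder data: I_{A,R} at the estimated critical point p_c ~ 0.5 for several depths T.
T   = np.array([32.0, 64.0, 128.0, 256.0, 512.0])
I_c = np.array([0.61, 0.48, 0.38, 0.30, 0.24])

# At criticality I_{A,R} ~ T^{-beta/nu}, so the slope of log I vs log T gives -beta/nu.
slope, intercept = np.polyfit(np.log(T), np.log(I_c), 1)
print("beta/nu estimate:", -slope)
```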
Figure 4: Coding transition induced by a single boundary. The mutual information between the reference qubit and the output of the circuit shown as a function of dissipation strength \(p\), for \(T/L<1\) fixed, with boundary dissipation realized as a random erasure channel. The scaling with circuit depths \(T\) points to a phase transition between a phase with partially protected information, and a phase with all information lost.
Figure 5: Critical properties of the coding transition for a single boundary. (a) Critical power law scaling of the mutual information with respect to circuit depth \(T\) at the estimated transition point, \(p_{c}=0.5\). The scaling relation \(I_{A,R}\sim T^{-\beta/\nu}\) is used to extract \(\beta/\nu=0.34\) (dashed line). (b) Full scaling collapse of rescaled mutual information \(T^{\beta/\nu}I_{A,R}\) as a function of \(T^{1/\nu}\left(p-p_{c}\right)\), using \(\nu=2\).
### The Replica Limit and the Nature of the Phase Transition
The behavior of quenched-averaged quantities, e.g. the Haar-averaged Renyi mutual information \(\langle I_{A,R}^{(2)}(t)\rangle\), close to the coding phase transition is quantitatively distinct from the annealed-averaged mutual information studied in Sec. III.1. This is suggested by the numerical studies in the previous section, which present strong evidence that the coding phase transition is in a different universality class from a de-pinning phase transition for a single Ising domain wall. Here, we will provide some conjectures on the nature of this phase transition, based on analytic arguments.
We will focus our attention on the averaged second Renyi mutual information \(\langle I_{A,R}^{(2)}(t)\rangle\) whose behavior may be obtained via a "replica trick"; the second Renyi entropy may be obtained in the limit \(S_{A}^{(2)}(t)=\lim\limits_{k\to 0}\left(1-\left[\operatorname{Tr}\rho_{A}(t)^{2} \right]^{k}\right)/k\), so that the calculation of the Haar-averaged mutual information reduces to evaluating quantities such as \(\left\langle\left[\operatorname{Tr}\rho_{A}(t)^{2}\right]^{k}\right\rangle\) in a replica limit \(k\to 0\). After the Haar average, these quantities may be regarded as partition functions for lattice magnets with "spins" taking values in the permutation group on \(2k\) elements \(S_{2k}\)[39]. A drastic simplification in the limit of large, but finite, on-site Hilbert space dimension \(q\) occurs [54], whereby \(\left\langle\left[\operatorname{Tr}\rho_{A}(t)^{2}\right]^{k}\right\rangle\) may be regarded as \(k\) copies of an Ising magnet, with weak inter-replica interactions at each spacetime point where a Haar-random unitary gate has been applied. The intra-replica interactions for each Ising magnet are described by the statistical mechanical rules presented in Sec. II.1. The inter-replica interactions are known to be attractive, and vanish in the limit that \(q\) is strictly infinite [54]. As already derived in II.1, the boundary dissipation acts as an Ising symmetry-breaking field, giving rise to a boundary potential for the Ising domain wall within each replica.
The replica limit of the resulting theory may thus be regarded as the description of a directed path in a random environment [55; 56], restricted to the half-line \(x\geq 0\), and in the presence of a potential near this boundary, due to the dissipation. The path integral for this problem for a given realization of the disorder is formally given by
\[Z[V]=\int\,Dx(\tau)\,e^{-S[x,V]} \tag{17}\]
where
\[S[x,V]\equiv\int d\tau\left[\frac{1}{2}\left(\frac{dx}{d\tau}\right)^{2}+V[x,\tau]-u\,\delta[x]\right]. \tag{18}\]
Here \(x(\tau)\) is the coordinate of the path at time \(\tau\). The random potential in the bulk \(V[x,\tau]\) is taken to have zero mean, and is short-range-correlated in spacetime, e.g. we may take the potential to be delta-function-correlated as \(\overline{V[x,\tau]V[x^{\prime},\tau^{\prime}]}=\sigma^{2}\delta(x-x^{\prime })\delta(\tau-\tau^{\prime})\), where \(\overline{\cdots}\) denotes an average over the probability distribution for the disorder. The statistical mechanics of the replicated theory \(\overline{Z^{k}}\) thus describes \(k\) interacting paths in the presence of a boundary potential, and thus resembles that of the Haar-averaged quantities \(\left\langle\left[\operatorname{Tr}\rho_{A}(t)^{2}\right]^{k}\right\rangle\), \(\left\langle\left[\operatorname{Tr}\rho_{A\cup R}(t)^{2}\right]^{k}\right\rangle\) in the limit of large, but finite, \(q\). A schematic depiction of this replicated theory is shown in Fig. 6.
The weak inter-replica interactions are known to be a relevant perturbation at the critical point describing the pinning of a single Ising domain wall [57]. Remarkably, the new critical point describing the pinning/de-pinning of a directed polymer to an interface has been understood exactly [57] by Bethe ansatz techniques. The characteristic wandering length of the polymer transverse to the interface diverges with an exponent \(\nu_{\perp}=2\) on approaching the phase transition from the localized phase, while the divergence of the specific heat is characterized by the exponent \(\alpha=0\). For time-independent dissipation (e.g. the depolarizing channel is applied identically at the boundary at each time step of the quantum circuit evolution), we thus expect the coding transition to be in the universality class of this de-pinning phase transition for a directed polymer.
In contrast, if the boundary dissipation varies randomly in time - as was studied in Sec. III.2 - then the nature of the phase transition is not completely understood. This problem corresponds to having an imaginary-time-dependent boundary potential \(u(\tau)=u_{0}+v(\tau)\) in (18), where \(v(\tau)\) has zero mean and is short-range-correlated in time; for simplicity, we take \(\overline{\overline{v(\tau_{1})v(\tau_{2})}}=\mu^{2}\delta(\tau_{1}-\tau_{2})\), with \(\overline{\overline{\cdots}}\) denoting the average over the distribution for \(v(\tau)\).
We may study the relevance of randomness in this boundary potential at the de-pinning transition. Here, the action is invariant under coarse-graining and re-scaling \(\tau^{\prime}=\tau/b^{z}\), and \(x^{\prime}\equiv x/b\) where \(z\) is the dynamical critical exponent at the phase transition. Under this
Figure 6: The Haar-averaged RΓ©nyi mutual information between the reference qudit(s) and the system, \(\langle I_{A,R}^{(2)}(t)\rangle\), is described in the large-\(q\) limit by \(k\) Ising domain walls in the presence of attractive inter-replica interactions, and an attractive interface within each replica, in the limit \(k\to 0\). This is described by the path integral in Eq. (18).
transformation, the random boundary potential becomes \(\int d\tau\,v(\tau)\delta[x]\longrightarrow b^{z-1}\int\,d\tau^{\prime}\,v(b^{z} \tau^{\prime})\delta[x^{\prime}]\), so that we identify \(v^{\prime}(\tau^{\prime})\equiv b^{z-1}v(b^{z}\tau^{\prime})\) as the renormalized potential in the coarse-grained theory. The correlations of the renormalized potential are thus
\[\overline{\overline{v^{\prime}(\tau^{\prime}_{1})v^{\prime}(\tau^{\prime}_{2})} }=\mu^{2}b^{z-2}\delta(\tau^{\prime}_{1}-\tau^{\prime}_{2}) \tag{19}\]
Therefore, the strength of the disorder decreases under renormalization when \(z<2\). It has been conjectured [58] that \(z=3/2\) at the pinning transition for the directed polymer, so that the randomness in the boundary potential should be irrelevant by Eq. (19); the same fixed point describing the de-pinning of a directed polymer studied in Ref. [57] should then describe the resulting transition in the presence of randomness.
We are, however, not certain of the correctness of this result of Ref. [58] for the dynamical exponent. The numerical studies presented in Sec. III.2 further suggest that \(\nu_{\parallel}=2\) (as opposed to \(\nu_{\parallel}=z\nu_{\perp}=3\), which is what would be predicted on the basis of \(z=3/2\) and \(\nu_{\perp}=2\)), though more extensive numerical studies are required to pin down the nature of this transition. We note, for completeness, that Eq. (19) suggests that the random boundary potential is a marginal perturbation exactly at the de-pinning phase transition for the Ising domain wall (which has \(z=2\) [53]). A Wilsonian renormalization-group calculation to higher order further suggests that the disorder is marginally _relevant_ [59]. The nature of the resulting critical point is not understood, and deserves further investigation.
### Perfect information protection using scrambling
In the low-dissipation phase of the coding transition, quantum information is only partially protected. One would expect that the information protection can be improved by first scrambling the information with unitary gates, which can effectively act like a random encoding, before the dissipation is turned on; we refer to this as a "pre-scrambling" step. Here we argue that for fixed system size \(L\) and dissipation strength \(p\), scrambling the initially local quantum information via a random unitary circuit of logarithmic depth \(t_{\rm scr}=k\log L\) for some sufficiently large \(k\), can lead to perfect protection of quantum information within the system, up to times of order \(T\sim L/p\). For a pre-scrambling step with a fixed depth \(t_{\rm scr}=k\log L\) and for low \(k\), we can observe the coding transition by tuning the dissipation strength \(p\). The coding transition will now be manifest in a step-function-like behavior of the mutual information \(I_{A,R}\) across the transition due to the perfect preservation of information for sufficiently low dissipation.
To gain some intuition for this result, we again consider the statistical mechanics of the Ising domain wall. As before, the domain wall is naturally thought of as propagating in a direction which is opposite to the arrow of time in the quantum circuit evolution. The domain wall thus propagates through \(T\) timesteps of the circuit involving boundary dissipation, and then encounters the pre-scrambling step where the dissipation is absent. This corresponds to free evolution of the domain wall without the symmetry-breaking field at the boundary. When this field at the boundary is turned off, trajectories of the domain wall which have already been annihilated at the boundary - such as the one shown in the left panel of Fig. 7 - do not cost additional weights in the partition sum. On the other hand, "surviving" domain wall trajectories in the bulk - such as the one shown in the right panel of Fig. 7 - incur a weight of \(q/(q^{2}+1)\) at each time step. Thus the weights of the bulk trajectories of the domain wall are exponentially suppressed in time relative to trajectories terminating at the boundary.
Let \(Z_{a}(t,T)\) be the partition function for the Ising domain wall, after the \(T\) timesteps of the dynamics with dissipation have taken place, followed by an additional \(t\) timesteps of pre-scrambling, and so that the domain wall has been annihilated at the boundary of the system. In contrast, let \(Z_{b}(t,T)\) be the partition function for the Ising domain wall to "survive" in the bulk of the system after the same evolution. To determine the behavior of the annealed mutual information, we wish to determine the probability that the domain wall ends at position \(x\geq x_{0}\) after another \(t\) steps of the dissipation-free evolution, as per Eq. (13), where \(x_{0}\) is the location of the entangled reference qubit of quantum information. For simplicity of presentation, we take \(x_{0}\) to be at the boundary of the qubit chain, so that this probability \(P(t,T)\) is
\[P(t,T)=\frac{Z_{b}(t,T)}{Z_{a}(t,T)+Z_{b}(t,T)} \tag{20}\]
To make progress, we note that since the "surviving"
Figure 7: The behavior of the Ising domain wall in the presence of a pre-scrambling step, whereby the initially local quantum information is evolved by a quantum circuit of depth \(t_{\rm scr}\). We consider propagation of the domain wall backwards in time, with respect to the arrow of time in the quantum circuit. In this picture, trajectories of the domain wall which survive in the bulk into the pre-scrambling step (right) are exponentially suppressed relative to trajectories which are annihilated at the boundary beforehand (left).
trajectories contributing to \(Z_{b}(t,T)\) are exponentially suppressed in time, we may write that \(Z_{b}(t,T)=Z_{b}(0,T)e^{-\gamma t}\), where \(\gamma\) is a phenomenological decay rate which will be a function of the local Hilbert space dimension, and the dissipation strength. We further approximate the partition sum \(Z_{a}(t,T)\) by its value before the pre-scrambling step, so that \(Z_{a}(t,T)=Z_{a}(0,T)\). With these approximations, we may write
\[P(t,T)=\frac{P(0,T)}{P(0,T)+[1-P(0,T)]e^{\gamma t}} \tag{21}\]
The annealed mutual information is now obtained from Eq. (13). At sufficiently long times, so that \(P(t,T)\ll 1\), we thus find that the mutual information deviates from its maximal value by
\[2-I_{A,R}^{\text{(ann)}}(t)=\frac{q^{2}-1}{q}\cdot\frac{P(0,T)}{P(0,T)+[1-P(0,T)]e^{\gamma t}} \tag{22}\]
In the pinned phase of the domain wall, we expect \(P(0,T)\) to be exponentially small in the number of timesteps \(T\). In contrast, in the de-pinned phase, the probability that the domain wall has been annihilated at the interface decays as a power-law in time due to the diffusive nature of the Ising domain wall, so that \(P(0,T)=1-O(T^{-a})\), with \(a\) a constant. For fixed \(T\), we thus find that for a sufficiently long pre-scrambling time \(t\), the mutual information deviates from its maximal value as
\[2-I_{A,R}^{\text{(ann)}}(t)\sim\begin{cases}e^{-\gamma t}&p<p_{c}\\ T^{a}e^{-\gamma t}&p>p_{c}\end{cases}. \tag{23}\]
Evaluating this expression at the scrambling time \(t_{\text{scr}}=k\log L\) yields
\[2-I_{A,R}^{\text{(ann)}}(t)\sim\begin{cases}L^{-\gamma k}&p<p_{c}\\ L^{a-\gamma k}&p>p_{c}\end{cases}. \tag{24}\]
The above calculation implies that for \(t_{\text{scr}}=k\log L\), with \(k\) large enough, quantum information is perfectly preserved. Logarithmic scrambling is enough to protect the information against noise. For low values of \(k\), the mutual information can exhibit different behavior depending on whether \(a-\gamma k\) is positive or negative. We show the results obtained from studying the annealed MI numerically in Fig. 8(a), and find good agreement with the considerations above.
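The estimates of Eqs. (21)-(24) are straightforward to evaluate. The sketch below does so for hypothetical values of the decay rate \(\gamma\) and of \(P(0,T)\) (neither is computed here), illustrating how the deviation \(2-I_{A,R}^{(\text{ann})}\) shrinks as a power of \(L\) when the pre-scrambling depth is \(t_{\rm scr}=k\log L\).

```python
import numpy as np

def mi_deficit(q, P0, gamma, t):
    """Deviation 2 - I^(ann) after t pre-scrambling steps, Eqs. (21)-(22).
    P0 stands for P(0, T); gamma is the phenomenological decay rate (both assumed here)."""
    Pt = P0 / (P0 + (1.0 - P0) * np.exp(gamma * t))    # Eq. (21)
    return (q**2 - 1) / q * Pt                          # Eq. (22)

q, gamma, k = 2, 0.3, 4
for L in [64, 256, 1024, 4096]:
    t_scr = k * np.log(L)                               # logarithmic-depth pre-scrambling
    print(L, mi_deficit(q, P0=0.9, gamma=gamma, t=t_scr))   # decays as L^(-gamma k), cf. Eq. (24)
```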
We now turn to the simulation of Clifford quantum circuit dynamics. To explore how logarithmic pre-scrambling affects the coding transition induced by a single boundary, we modify the circuit protocol to include a unitary, non-dissipative pre-scrambling step, with pre-scrambling time scaling logarithmically with system size, \(t_{\text{scr}}=k\log L\), before applying the dissipative dynamics for time \(T\). We then approach the thermodynamic limit by increasing \(T\) and \(L\), while keeping the aspect ratio \(T/L<1\) fixed. In accordance with the insights gained above from the annealed Haar average, we find a phase transition for \(k=1\) as a function of \(p\) between a phase retaining information between the input and output of the circuit, and a phase with all information destroyed by dissipation, as shown in Fig. 8(b). The critical properties are different from the case without pre-scrambling discussed in the previous subsection, and, as predicted by the annealed model, the critical point is signaled by a crossing point in the mutual information obtained for
Figure 8: Coding transition with logarithmic-depth pre-scrambling. In (a), \(I_{A,R}^{\text{(ann)}}\) vs \(p\) is plotted with a pre-scrambling circuit of depth \(t_{\text{scr}}\sim\log L\). The subsequent evolution with dissipation proceeds for a total number of timesteps \(T=L\). The main plot is for \(t_{\text{scr}}=4\log_{2}(L)\). The annealed mutual information approaches the maximum value as \(L\) is increased indicating that logarithmic-depth encoding is enough to protect the information against boundary dissipation. _Inset_ shows the plot for \(t_{\text{scr}}=\log_{2}(L)\) with \(I_{A,R}^{\text{(ann)}}\) going through a transition with respect to \(p\). The results agree with eq. (24) derived in the main text. In (b), the mutual information, as calculated in Clifford dynamics, for dynamics with pre-scrambling of depth \(t_{\text{scr}}=\log_{2}(L)\), plotted as a function of dissipation strength \(p\). Boundary dissipation is realized as a random erasure channel, and \(T/L=1/2\) is kept fixed for different system sizes. The mutual information reveals a phase transition, with the critical point appearing as a crossing point of the data for different system sizes.
different system sizes. We find a similar coding transition for \(k\leq k_{\rm max}\), with \(k_{\rm max}\sim O(1)\). For even larger values of \(k\), the mutual information remains maximal for all values of \(p\).
## IV Coding transition on the approach to thermalization
In the previous section, we studied systems of size \(L\) with dissipation acting near the left boundary in the regime \(T\lesssim L\) so that the right boundary did not play a role in the dynamics. More precisely, as long as \(L/T\) remains larger than the velocity of the entanglement domain wall, which is less than the lightcone velocity in the quantum circuit, the coding transition can be understood as a depinning transition of the domain wall, such that for noise rate \(p\) below the critical value \(p_{c}\) some amount of information survives.
In this section, we study what happens when the dynamics in the coding phase extend for even longer periods of time, and show that the surviving information will eventually be lost to the environment as the system completely thermalizes. We may understand this result by considering the dynamics of the Ising domain wall, which describes the behavior of the annealed mutual information. For sufficiently large \(T/L\) the domain wall will escape and get annihilated at the right boundary. Thus, using eq. (13), \(I_{A,R}^{\rm(ann)}\) becomes zero and the information is leaked to the environment. Intuitively speaking, the system becomes entangled with of order \(pT\) environment qubits, and when \(pT\gtrsim L\) the system becomes maximally entangled with the environment and thermalizes. By the monogamy of entanglement, the reference qudits can no longer be entangled with the system but are lost to the environment. Therefore, for large \(T/L\) there is a transition with respect to the dissipation strength \(p\), and the location of the critical point scales as \(p_{d}\sim L/T\); for \(p>p_{d}\) the information gets completely entangled with the environment. This transition is also visible as a function of \(T\) at fixed dissipation strength \(p\).
We study this coding transition by performing \(t_{\rm scr}=L\) steps of pre-scrambling before turning on the noise. As explained in the previous section, linear pre-scrambling perfectly protects the information for all strengths of dissipation, provided \(T/L\) is sufficiently small. This pre-scrambling step has the effect of making the transition appear as a "step function" in the mutual information \(I_{A,R}\) as a function of dissipation strength. Indeed, \(I_{A,R}^{\rm(ann)}\left(T\right)\) vs \(p\) for \(T/L=4\) in the Haar-random circuit, shown in Fig. 9, displays such a behavior, and appears to be a scaling function of \((p-p_{d})L\) (see inset).
### Numerical Study
We also verify the above transition in the Clifford circuit setting introduced in the previous section. Here, after initializing the Bell pair at the left boundary of the chain, we run a pre-scrambling step linear in system size, \(t_{\rm scr}=L\), followed by the dissipative dynamics applied for time \(T\). As before, we examine the finite size scaling by increasing \(T\) and \(L\), while keeping \(T/L>1\) fixed. As already discussed in the annealed framework, we find a phase transition for large enough aspect ratio \(T/L>1\). In Fig. 10a, we plot the mutual information between the reference qubit and the output of the circuit as a function of \(p\) for different system sizes \(L\), using a pre-scrambling time \(t_{\rm scr}=L\) and aspect ratio \(T/L=4\). In perfect agreement with the annealed picture, the mutual information curve approaches a step function in the thermodynamic limit, confirming a phase transition between a phase with all the information protected, and a phase with all information destroyed.
We find a good scaling collapse with the scaling function depending on \((p-p_{d})L^{1/2}\), see Fig. 10b. The form of the scaling function differs from the annealed result. This deviation can be understood by noting that for the annealed case we applied a deterministic boundary depolarization channel, Eq. (1), whereas the dissipation in the Clifford circuit is applied at random time steps, and this disorder may change the properties of the transition. Indeed, the effect of randomness in the dissipation channel can be studied by introducing disorder into the annealed model and applying the channel (1) at random times, which leads to a scaling function depending on \((p-p_{d})L^{1/2}\) (data not shown), in perfect agreement with the Clifford circuit results. The discrepancy between the factor of \(L\) and \(L^{1/2}\) can be understood as follows. With randomness, the number of environment qubits entangled with the system increases linearly with \(T\) but has fluctuations of order \(\sqrt{T}\). This results in the critical point fluctuating as \(\delta p\sim 1/\sqrt{T}\), leading to the \((p-p_{d})L^{1/2}\) dependence of the mutual information.
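The \(\sqrt{T}\) fluctuation invoked here is just the binomial statistics of the randomly applied boundary channel; a short numerical check (with arbitrarily chosen values of \(T\) and \(p\)) is shown below.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 400, 0.15
n_noisy = rng.binomial(T, p, size=100_000)          # number of timesteps at which the channel acts
print(n_noisy.mean(), p * T)                        # mean grows linearly in T
print(n_noisy.std(), np.sqrt(T * p * (1 - p)))      # fluctuations of order sqrt(T)
```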
Figure 9: Plot of \(I_{A,R}^{\rm(ann)}\) in Haar random circuits. \(T/L=4\) and \(t_{\rm scr}=L\). _Inset_. The data collapse to a single curve as a function of \((p-p_{d})L\).
### Nature of the Phase Transition
We end this section by discussing the nature of the transition explored above. We argue below that the coding transition in this regime is a first-order phase transition.
To begin with, let us consider the large qudit limit such that \(1/q\ll(1-p)^{2}\). The partition function in the annealed picture contains contributions coming from all possible trajectories of the domain wall. The contribution at time \(t\) from trajectories in which the domain wall is present for \(n_{DW}\) time steps is of order \((1/q)^{n_{DW}}((1-p)^{2})^{t-n_{DW}}\). The entropic factor, due to there being more configurations with the domain wall as opposed to without it, can only renormalize the \(1/q\) factor. Thus the partition function is dominated by the term having no domain wall at any point of time, \((1-p)^{2t}\). However, for \((1-p)^{2t}<(1/q)^{L}\), it becomes preferable for the domain wall to go all the way to the right boundary and get annihilated there. Thus at \(t_{c}\sim\frac{\log 1/q}{\log(1-p)}L\) the nature of the domain wall changes discontinuously from being stuck at the noisy boundary to getting annihilated at the noiseless boundary, indicating a first-order transition. The finite \(q\) corrections to the above picture only act as thermal fluctuations which cause the domain wall to have some excursions inside the bulk. The contributions from these excursions will be sub-leading and we expect the transition to remain first-order. Note that similar time scales were also identified in [30] for the system to become perfectly thermalized in the presence of noise.
As in the standard theory of first-order phase transitions, the two boundaries correspond to the two local minima for the domain wall and the system discontinuously jumps from one to another. The mutual information then is a function of the probability that the system is in one of the two minima (see eq. (13)). Since the free energy is extensive, the probability of being in a particular minimum scales as a function of \(\delta gV\) where \(\delta g\) is the tuning parameter for the transition and \(V\) is the total volume of the system. In our case, the volume is equal to \(T\). This explains the observed finite-size collapse as a function of \((p-p_{d})T\), or equivalently \((p-p_{d})L\) at fixed \(T/L\), in Fig. 9.
## V Encoding at a Finite Rate
So far, we have looked into the dynamics of a single Bell pair localized near the noisy boundary. But it is equally interesting to understand the effects of the noise when we have an extensive number of Bell pairs in the initial state. We denote the code rate, defined as the fraction of the system's qubits entangled in Bell pairs, by \(C=N_{R}/L\) where \(N_{R}\) is the total number of Bell pairs. For the purpose of this section, we will consider code rate \(C=1/2\), but we believe that the qualitative results should not change for different values of \(C\) as long as \(C\) is not close to \(1\). To make the final results independent of the distribution of the Bell pairs at the initial time, we perform a random encoding by applying unitary scrambling for time \(t_{\rm scr}=L\).
We plot the annealed mutual information between the input and output, \(I_{A,R}^{\rm(ann)}\), in Fig. 11 as a function of the dissipation strength for \(T=7L\). We find two threshold values for the noise rate, \(p_{th,1},p_{th,2}\). For \(p<p_{th,1}\), the information is perfectly protected and \(I_{A,R}^{\rm(ann)}\) is equal to the maximal value \(2CL\). For \(p_{th,1}<p<p_{th,2}\), the information starts leaking to the environment but still a finite density of it remains in the system. Finally when \(p>p_{th,2}\) the information is completely leaked to the environment. Note that the values of \(p_{th}\) change with the ratio \(T/L\).
Similarly to the strategy followed in the previous sections, we verify these predictions by performing numerical simulations in Clifford quantum random circuits. We show the density of the mutual information between the output of the circuit \(A\) and the reference qubits, \(I_{A,R}/N_{R}\), with \(N_{R}=L/2\) denoting the number of input Bell pairs, as a function of dissipation strength \(p\) in Fig. 12, for different system sizes \(L\) with \(T/L=4\) fixed. As noted above, here we applied a linear unitary pre-scrambling step for time \(t_{\rm scr}=L\), before the onset of
Figure 10: Coding transition upon approaching thermalization. (a) Mutual information between the input and the output of the circuit shown as a function of dissipation strength \(p\), converging towards a step function in the thermodynamic limit. Pre-scrambling time is set to \(t_{\rm scr}=L\), followed by dissipative dynamics for time \(T\), with \(T/L=4\) fixed. (b) Data collapse as a function of \((p-p_{d})L^{1/2}\), with the critical point \(p_{d}=0.136\) corresponding to the crossing point of finite size data.
the noisy dynamics, such that the results do not depend on the spatial distribution of the Bell pairs in the initial state. We find a phase with perfectly protected information for small enough dissipation strength \(p\), followed by a crossover region with a finite density of preserved coherent information decreasing continuously with \(p\), eventually decaying to zero for large \(p\).
To understand this behavior we again resort to the statistical mechanics of the Ising domain wall. The model at a finite code rate differs in an important way from the model in which an \(O(1)\) amount of quantum information is encoded. In the case of a finite coding rate there are an extensive number of Ising spins at the top boundary whose state is fixed by the boundary conditions, though the bulk dynamics of the domain wall remain the same. This leads to an exponential amplification of the trajectories that minimize the number of domain walls at the top boundary (note that these domain walls at the boundary are different from the Ising domain wall performing a random walk in the bulk). As shown at the top of Fig. 11, the annealed mutual information is given by
\[I_{A,R}^{\text{(ann)}}=CL+\log\left(\frac{Z_{\Downarrow}}{Z_{\Uparrow}}\right) \tag{25}\]
where \(Z_{\Downarrow}\) and \(Z_{\Uparrow}\) are the partition functions of the statistical mechanics model with down and up spins, respectively, at the locations of the encoded Bell pairs; the logarithm is taken in base \(q\). As discussed in Sec. IV the domain wall discontinuously changes from being at the left boundary to being at the right boundary. To a good approximation, we can thus keep only these two trajectories in the partition function. For clarity of the expressions we also introduce \(\bar{p}\equiv 1-p\). The partition functions \(Z_{\Downarrow},Z_{\Uparrow}\) can thus be written as
\[Z_{\Downarrow} \approx\bar{p}^{2T}q^{2CL}+\left(\frac{1}{q}\right)^{L}q^{CL} \tag{26}\] \[Z_{\Uparrow} \approx\bar{p}^{2T}q^{CL}+\left(\frac{1}{q}\right)^{L}q^{2CL} \tag{27}\]
Putting the above expression in eq. (25) and identifying the threshold values to be \(1-p_{th,1}\sim q^{-(1-C)L/(2T)},1-p_{th,2}\sim q^{-(1+C)L/(2T)}\), we get
\[I_{A,R}^{\text{(ann)}}\approx\begin{cases}2CL&p<p_{th,1}\\ 2CL-2T\log\left(\frac{1-p_{th,1}}{1-p}\right)&p_{th,1}<p<p_{th,2}\\ 0&p>p_{th,2}.\end{cases} \tag{28}\]
Intuitively, for low \(p\) the domain wall remains localized near the noisy boundary and the mutual information is maximal. As \(p\) is increased, it is easier for the DW in \(Z_{\Uparrow}\) to
Figure 11: _Top._ Schematic representation of the statistical mechanics of the Ising domain wall in the calculation of the annealed mutual information, when coding at a finite rate. Typical domain wall trajectories when \(p_{th,1}<p<p_{th,2}\) are shown. In \(Z_{\Downarrow}\) the domain wall remains localized whereas it is delocalized for \(Z_{\Uparrow}\), as explained in the text. _Bottom._ Plot of the annealed mutual information between Bell pairs entangled with the system's qubits at alternate sites (\(C=1/2\)) and the system. The Bell pairs are scrambled by a unitary circuit for time \(t_{\text{scr}}=L\). The system is evolved in the presence of the boundary dissipation for time \(T=7L\). We find that for \(p<p_{th,1}\approx 0.06\), full information is preserved, while for \(p_{th,1}<p<p_{th,2}\approx 0.2\), a finite density of information is protected. The threshold values decrease as \(T\) is increased. _Inset._ For low \(p<p_{th,1}\) there is no information loss even for \(T=7L\), that is, the difference between \(I_{A,R}^{\text{(ann)}}\) and the maximum value \(L\) goes to zero with system size. Thus all Bell pairs can be perfectly recovered by a recovery operation acting on the system.
Figure 12: Coding transition for finite code rate. Density of mutual information between the output of the circuit and the reference qubits shown as a function of dissipation strength \(p\), for fixed evolution time \(T/L=4\) and number of initial Bell pairs \(N_{R}=L/2\). Pre-scrambling time is \(t_{\text{scr}}=L\), followed by noisy dynamics with a random boundary erasure channel. The information density is perfectly protected for weak enough dissipation \(p\), then decays continuously towards zero with \(p\) in a crossover region, with all information leaked to the environment for \(p\) large enough.
delocalize compared to \(Z_{\Downarrow}\), since in the former case delocalization results in an exponential reduction of the cost associated with having domain walls at the boundary. Thus the critical point at which the DW delocalizes is different for the two boundary conditions, resulting in the two thresholds discussed above.
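The two-trajectory approximation of Eqs. (25)-(27) is simple enough to evaluate directly. The sketch below does so in log space (base \(q\)), keeping only the pinned and fully escaped domain-wall trajectories and dropping all prefactors, and reproduces the three regimes of Eq. (28); the chosen values of \(q\), \(L\), \(T\) and \(p\) are illustrative only.

```python
import numpy as np

def annealed_mi_finite_rate(q, p, L, T, C=0.5):
    """Two-trajectory estimate of Eq. (25), using Eqs. (26)-(27) with prefactors dropped."""
    lq = np.log(q)
    log_pinned = 2 * T * np.log(1 - p) / lq            # log_q of (1-p)^(2T)
    log_escaped = -float(L)                             # log_q of (1/q)^L
    logZ_dn = np.logaddexp((log_pinned + 2 * C * L) * lq, (log_escaped + C * L) * lq) / lq
    logZ_up = np.logaddexp((log_pinned + C * L) * lq, (log_escaped + 2 * C * L) * lq) / lq
    return C * L + logZ_dn - logZ_up                    # Eq. (25)

for p in [0.02, 0.05, 0.12]:   # below p_th,1; between the thresholds; above p_th,2
    print(p, annealed_mi_finite_rate(q=2, p=p, L=40, T=7 * 40))
```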
## VI Summary and discussion
In this work, we studied one-dimensional quantum many-body systems with a noisy boundary. We focused on the dynamics of the information of an initially localized Bell pair near the (noisy) boundary by studying the mutual information \(I_{A,R}(t)\) between the inert spin of the Bell pair and the system at later times, where \(A\) is the system and \(R\) is the inert spin. This is also related to the coherent information about the Bell pair remaining in the system [49; 50]. We find that the chaotic scrambling due to the unitary dynamics is sufficient to protect a part of this information from leaking to the environment for noise rate \(p<p_{c}\) and long times \(T\lesssim L/p\), by allowing the information to escape away from the boundary. We further show that a random encoding of the Bell pair via noiseless scrambling dynamics of depth \(\mathcal{O}(\log L)\) is sufficient to _perfectly_ protect the information for all strengths of the noise up to times \(T\lesssim L/p\). See Fig. 1.b for a schematic representation of the phase diagram.
In the regime when the total time of evolution \(T\gtrsim L/p\), any remaining information in the system is revealed to the environment and the system goes through a first-order coding transition. This transition can also be seen as a result of the system approaching thermalization to infinite temperature. We expect this form of coding transition to be present for all noisy channels, though in the case of the boundary noise considered here the timescales associated with the transition increase parametrically with the system size [30].
We also look at the coding dynamics for a finite code rate, that is, when an extensive number \(N_{R}=CL\), with \(C<1\), of the system's qubits are entangled in Bell pairs. We find that the code space can be _perfectly_ preserved for noise strength below some threshold \(p_{th,1}\), while for strength above \(p_{th,2}\) the code space is completely destroyed, see Fig. 11, 12. We can also look at the time for which the information stays in the system for a fixed noise rate \(p\), and equivalently define two threshold times \(T_{th,1}<T_{th,2}\), both of which scale linearly with system size.
This work provides new insights into the competition between scrambling and decoherence. Normally, active feedback in the form of error correction is needed to counter the decoherence effects of the noise. However, we present the case of boundary noise, where it is possible to have stable quantum error-correcting codes (QEC) in the presence of generic noise, with the code space dynamically protected by scrambling. Previously, such dynamical protection of information was also observed for the special case of dephasing noise, which can be unraveled into quantum trajectories corresponding to projective measurements; there, however, an extensive number of ancilla qubits that act as a register for the measurement outcomes are made part of the system [27]. It would be of interest to generalize our results and techniques in the presence of ancilla qubits for cases different from the boundary noise. We leave this for future work.
Another interesting direction to explore is the presence of similar coding transitions in purely unitary evolution. It seems possible for quantum information to remain confined in part of a system evolving under chaotic unitary dynamics for a long time before the system thermalizes. We leave a detailed discussion of this direction to future work [60].
The competition between chaos and decoherence has also been studied in the context of open quantum systems. Previous studies have mostly focused on level statistics and quantities like the spectral form factor, purity, and Loschmidt echo to study the effect of decoherence in chaotic dynamics [61; 62; 63; 64; 65; 66; 67; 68]. It is an open question to study such probes in our context, and to determine whether the coding transitions can also be seen in these quantities. There is also a close relationship between the input-output mutual information and operator spreading (measured via out-of-time-ordered correlators (OTOCs)) in noise-free unitary dynamics [4]. It is interesting to understand how OTOCs in noisy systems are related to the emergent QEC property of the noisy dynamics [69; 70; 71]. More generally, how is the dynamics of information related to the above-mentioned quantities for open quantum systems?
The coding transitions imply protection of the code space against noise and the potential existence of a decoding protocol that brings the code space back to its initial state. Such a protocol is notoriously hard to construct for random dynamics having little structure, except in a few special cases like the Hayden-Preskill black hole protocol [1; 72] or for special types of noise like the erasure channel. For Clifford circuits with boundary dissipation considered here, an efficient decoder can probably be constructed for the erasure channel. Another interesting direction in further understanding the error-correcting properties of the coding transitions is to look into the code distance of the resulting code. We leave a detailed study of the decoding protocols and code distance for future studies.
We also find similar coding transitions for bulk defects, where noise acts on a fixed site in the bulk. Protection of quantum information against bulk defects is important for the design of modular quantum computers, in which smaller quantum memory/computing modules are connected together to form a bigger block. In this case, one expects the noise in the gates connecting the two modules to be far greater than the noise in the bulk of the individual modules. Thus the existence of an error threshold against a bulk defect, together with the availability of the decoding protocol discussed above, gives a fault-tolerant way of building a modular quantum computer.
A possible extension of our work is to study information dynamics in noisy symmetric systems. The behavior of information in symmetric systems with local charge density in the presence of measurements has been shown to be qualitatively different than without symmetry [73; 74; 75; 76]. It is also known that systems with local charge conservation can have charge transport and long-time operator entanglement growth even in the presence of strong dephasing noise [77; 78]. This may potentially lead to a more robust encoding of the information when the code-space is spread across different charge sectors as opposed to being confined to one sector. We leave this for future studies.
###### Acknowledgements.
The authors thank the Kavli Institute for Theoretical Physics (KITP), where this research was initiated and partly performed. The KITP is supported, in part, by the National Science Foundation under Grant No. NSF PHY-1748958. S.V. thanks Matthew Fisher for helpful discussions. U.A. thanks Ali Lavasani for helpful discussions. I.L. acknowledges support from the Gordon and Betty Moore Foundation through Grant GBMF8690 to UCSB. This work was supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, U.A.).
|
2305.05222 | FishRecGAN: An End to End GAN Based Network for Fisheye Rectification
and Calibration | We propose an end-to-end deep learning approach to rectify fisheye images and
simultaneously calibrate camera intrinsic and distortion parameters. Our method
consists of two parts: a Quick Image Rectification Module developed with a
Pix2Pix GAN and Wasserstein GAN (W-Pix2PixGAN), and a Calibration Module with a
CNN architecture. Our Quick Rectification Network performs robust rectification
with good resolution, making it suitable for constant calibration in
camera-based surveillance equipment. To achieve high-quality calibration, we
use the straightened output from the Quick Rectification Module as a
guidance-like semantic feature map for the Calibration Module to learn the
geometric relationship between the straightened feature and the distorted
feature. We train and validate our method with a large synthesized dataset
labeled with well-simulated parameters applied to a perspective image dataset.
Our solution has achieved robust performance in high-resolution with a
significant PSNR value of 22.343. | Xin Shen, Kyungdon Joo, Jean Oh | 2023-05-09T07:38:09Z | http://arxiv.org/abs/2305.05222v3 | # FishRecGAN: An End to End GAN Based Network for Fisheye Rectification and Calibration
###### Abstract
We propose an end-to-end deep learning approach to rectify fisheye images and simultaneously calibrate camera intrinsic and distortion parameters. Our method consists of two parts: a Quick Image Rectification Module developed with a Pix2Pix GAN and Wasserstein GAN (W-Pix2PixGAN), and a Calibration Module with a CNN architecture. Our Quick Rectification Network performs robust rectification with good resolution, making it suitable for constant calibration in camera-based surveillance equipment. To achieve high-quality calibration, we use the straightened output from the Quick Rectification Module as a guidance-like semantic feature map for the Calibration Module to learn the geometric relationship between the straightened feature and the distorted feature. We train and validate our method with a large synthesized dataset labeled with well-simulated parameters applied to a perspective image dataset. Our solution has achieved robust performance in high-resolution with a significant PSNR value of 22.343. 1
Footnote 1: This work was done while authors attended Carnegie Mellon University at 2020. To communicate, please contact author Xin Shen via [email protected]
Xin Shen, et al. FishRecGAN: An End to End GAN Based Network for Fisheye Rectification and Calibration. Advances in Artificial Intelligence and Machine Learning. 2023;3(2):69.
## 1 Introduction
Fisheye cameras have become popular in robotics-related industries due to their large field of view, but they introduce severe distortion and nonlinearity among pixels. To address this issue, the industry relies on traditional fisheye camera calibration [1, 2], which requires an individual to hold a checkerboard in front of the camera and take pictures with different poses. These pictures are then fed into a geometric algorithm to calibrate the camera intrinsic and distortion parameters. Many
existing implementations are available through the OpenCV library [3] following the traditional method [4]. However, this process requires a significant amount of human labor.
### Motivation
The conventional calibration method exhibits inconsistencies and necessitates significant human involvement, thereby introducing the potential for human errors. Furthermore, it lacks the capability to promptly rectify camera settings in real-time without pre-calibration. Additionally, it relies on specific equipment and mandates the utilization of a compatible camera to capture images with a checkerboard for optimization purposes. Consequently, our objective is to present an algorithmic solution that is independent of human intervention, ensuring consistency and efficiency. Our proposed algorithm aims to enable real-time rectification for camera surveillance operations, which necessitate continuous adjustments during their execution.
### Our Contribution
In this paper, we propose an enhanced approach that constructs an end-to-end multi-contextual network architecture consisting of GANs and CNNs. The architecture is shown in FIGURE 1. Specifically, we make the following major contributions:
* We proposed an end-to-end GAN-based multi-contextual network to better learn the geometric mapping between distorted nonlinear pixels (fisheye images) and rectified linear pixels (fisheye rectification), as an enhancement of a previous state-of-the-art work [5]. We developed a novel training algorithm for the Pix2Pix GAN model [6] by integrating the Wasserstein GAN (WGAN) [7] approach. This allows the model to rectify a fisheye image to its corresponding straightened counterpart with high resolution (avg. PSNR 22.343) without the computationally expensive traditional geometric algorithm that relies on calibrated parameters. The single GAN model provides the ability to rectify fisheye images for surveillance equipment that requires constant calibration.
* We synthesized a large-scale dataset consisting of fisheye-image/straightened-image pairs with the corresponding parameters. This dataset contains both clearly structured and weakly structured image pairs with well-simulated distortion parameters, which provides consistency for a deep neural network to learn from.
### Overview of Our Approach
We use Generative Adversarial Networks (GAN) [8] to solve the problem of fisheye image rectification, which involves finding the mapping function between nonlinearity and linearity among pixels. Compared to many traditional computer vision based algorithms to infer an object's geometric conditions [1, 2, 9, 10], GAN is advantageous because it can achieve real-time performance during inference with its lightweight architecture. Specifically, we use the Pix2Pix GAN to solve the direct mapping problem from fisheye image to perspective image. However, Pix2Pix GAN struggles with
high differences between distributions, such as those found in fisheye images, making it difficult to learn in a high-level manifold. To address this, we incorporate the Wasserstein GAN (WGAN) learning algorithm, using the Earth Mover (EM) distance to provide a continuous learning curve. Our proposed W-Pix2PixGAN model achieves high-resolution direct rectification from a fisheye image to its corresponding perspective image (FIGURE 2), with an average PSNR score of 22.343.
In many industrial applications, calibrated camera parameters are required for further use. While convolutional neural networks (CNNs) have been explored for predicting visual-based camera parameters [11], a simple feed-forward CNN architecture is often more suitable for subtle visual
Figure 1: The model comprises three key components: the Quick Rectification Module, the Calibration Module, and the Rectification Layer. The Quick Rectification Module, based on an enhanced Wasserstein GAN and Pix2Pix GAN, generates a ground-truth-like semantic guidance and performs real-time preliminary rectification. The Calibration Module employs a ResNet-based CNN architecture, utilizing the concatenated feature to extract pixel relationships and calibrate parameters for curved-to-straightened pixel mapping. The rectification layer utilizes the obtained distortion parameters to perform image rectification.
Figure 2: The quick rectification performed by our WGAN enhanced Pix2Pix GAN model. **1****st row**: original synthesized fisheye images; **2****nd Row**: rectified images
classification and object detection [12, 13]. However, for regression tasks such as predicting camera parameters, a deep learning model requires more geometric information constraints than a single raw fisheye image as the sole input feature. To address this issue, we focused on creating a strong inter-pixel relationship feature map for a convolutional network to learn the mapping between given features and the regression targets, following a similar idea presented in [14]. Following the idea of Xue, Zhucun and colleagues, who used the lines detected on the raw fisheye image by a line detection network as a "semantic guidance" concatenated with the raw fisheye image to create a new feature map for a ResNet-based model to learn [5, 15], we hypothesized that stronger guidance would further enhance performance beyond that previous work. Thus, unlike the previous work, we concatenate the output of the W-Pix2PixGAN, which is already a ground-truth-like feature, to the raw fisheye image. By doing so, we create an inter-relationship between the curved structures in the fisheye image and the corresponding straightened structures, i.e., how the curves are supposed to be rectified. This new feature map is then fed into a Parameter-Calibration Module with an architecture similar to the previous work to perform the regression/calibration, i.e., to predict the 9 distortion parameters.
## 2 Related Work
In 2019, Wuhan University proposed the "Multi-contextual Network" approach, which introduces a "Guidance-alike" semantic feature map generated from a CNN and concatenated with the original fisheye image for enhanced learning [16]. Previous work in 2019 by Xue et al. [5] used a line detection network to highlight distorted lines in fisheye images and concatenated them with the original image to create a feature map containing more geometric information for a ResNet-based regression network to learn. The architecture consists of a line detection network, a calibration module with ResNet, and a traditional geometric rectification method to take in the calibrated parameters for rectification.
While the use of distorted lines as guidance for introducing more geometric information on the pixel level is an innovative idea, it falls short in providing a real-time solution for fisheye image rectification. The pipeline still requires simulating the calibration process to obtain distortion parameters, and during inference, the multi-contextual network requires too much computation to run in real-time. Additionally, using distorted lines as guidance may have limitations in detecting well-structured lines in non-line sensitive input images, which can reduce the quality of the semantic feature.
Our objective in this research study was to enhance the existing work by focusing specifically on the semantic generation aspect within the pipeline. We operated under the assumption that improving the quality of semantic features would yield superior calibration outcomes. Notably, recent studies [17, 18, 19] have demonstrated that employing generic semantic guidance leads to significant improvements in model performance for regression and classification tasks. In order to further enrich the information provided to the network, we proposed the incorporation of a ground-truth-like feature, referred to as the corresponding perspective image. This feature not only rectifies all lines within the image but also exhibits linear pixel patterns, thereby enhancing the effectiveness of guidance in the calibration process. To accomplish this, we leveraged generative adversarial networks (GANs) and trained them using a pair of distorted and perspective images. The GAN model facilitated the generation of a ground-truth-like feature, enabling rapid rectification through the use of the
generator alone. We integrated the training algorithm of Wasserstein GAN (WGAN) and introduced modifications to the original loss function of Pix2Pix GAN, incorporating the Earth Mover's (EM) distance, thereby enhancing the GAN model's performance and yielding high-resolution outputs.
## 3 Technical Approach
To validate our approach and assumption, we aimed to replicate the previous work as closely as possible, with the exception of replacing the original line detection network with the W-Pix2PixGAN for semantic generation. However, the previous work's authors did not publicly share their implementation, so we developed a similar dataset by implementing a fisheye-image-synthesis algorithm with the same camera model mentioned in the paper. Through simulating fisheye-effect-synthesis, we identified the parameters needed to generate a fisheye image similar to the previous work. During training and inference, we followed the same pipeline as the previous work, randomly selecting four distortion parameter sets out of the total twelve to synthesize the fisheye image. We replicated a similar Calibration Module architecture using ResNet34 as the backbone. By changing only the first part of the network, we conducted a fair comparison to determine which semantic generation model provided better guidance semantics.
### Data Synthesis and General Fisheye Camera Model
To train the model, we synthesized our datasets by distorting a perspective image using a general polynomial projection model [20]. With a given normal perspective pinhole camera, a point \(\mathbb{P}:=\{X,Y,Z\}\in\mathbb{R}^{3}\) in the world frame can be projected onto the image frame \(\mathbb{P}_{i}:=\{u,v\}\in\mathbb{R}^{2}\) through a transformation using the camera intrinsic matrix; see Appendix A for the detailed mathematical models and derivations.
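As a rough illustration of how such a synthesis can be implemented, the sketch below warps a perspective image with a polynomial \(r(\theta)\) projection and bilinear remapping. The odd-polynomial form, the focal-length handling, and the coefficient names are assumptions in the spirit of the general polynomial projection model, not the exact procedure of Appendix A.

```python
import cv2
import numpy as np

def synthesize_fisheye(img, k, f=300.0):
    """Warp a perspective image into a fisheye-like image.

    Assumes a Kannala-Brandt-style polynomial projection
    r(theta) = k1*theta + k2*theta^3 + ... ; `k` holds the coefficients.
    """
    h, w = img.shape[:2]
    u0, v0 = w / 2.0, h / 2.0                        # principal point at the image center
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x, y = xs - u0, ys - v0
    r_d = np.sqrt(x ** 2 + y ** 2) + 1e-8            # radius of each pixel in the distorted output
    # Invert r(theta) with a coarse lookup table instead of per-pixel root finding.
    thetas = np.linspace(0.0, np.pi / 2, 2048)
    r_model = f * sum(ki * thetas ** (2 * i + 1) for i, ki in enumerate(k))
    theta = np.interp(r_d, r_model, thetas)
    r_u = f * np.tan(theta)                          # radius the same ray has under a pinhole model
    map_x = (u0 + x / r_d * r_u).astype(np.float32)  # where to sample in the perspective image
    map_y = (v0 + y / r_d * r_u).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Example: a dominant k1 with small higher-order terms, mimicking the observation
# below that k1 drives most of the visible distortion.
# fisheye = synthesize_fisheye(cv2.imread("perspective.jpg"), k=[1.0, 0.05, 0.01, 0.005, 0.005])
```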
### Parameter Selection and Simulation
Since the authors of the previous work, Zhucun Xue and et al [5], did not provide the distortion parameters they used, we picked up our own parameters which yield the similar fisheye distortion effect shown in their work.
To be consistent with the previous work, we generated our synthesized dataset by artificially adding distortion to the WireFrame Dataset [21], randomly selecting 4 distortion parameter sets out of 12. In FIGURE 4, we list several samples of the distortion effect, such as the full-frame fisheye image, minor-distortion image, drum-fisheye image, and full-circle image.
Machine learning algorithms often face the difficulty of learning a non-deterministic and inconsistent mapping function. The problem of generating a fisheye effect from a given perspective image is particularly challenging, given the many random combinations of the nine distortion parameters involved. Blindly and randomly selecting parameter combinations can make it difficult for the network to learn the transformation pattern. To overcome this issue, we conducted a simulation process that changed one parameter at a time while ruling out the others and observed the physical
effect of each parameter. We varied each parameter from -0.9 to 1 and visualized the effects of changing each \(k_{i}\) at the same level in FIGURE 3. Upon observation, we discovered that:
* \(k_{1}\) makes the major contribution, with a sensitive and significant effect on both the center and the edge of a given perspective image.
* \(k_{2}\) and \(k_{3}\) have a less sensitive effect on distorting an image and both have a slight impact on the center of the image.
* \(k_{4}\) and \(k_{5}\) have almost no effect on the center pixels, while both have a slight, non-sensitive effect on the edge.
From the \(1^{st}\) row of FIGURE 3, we found that by changing only \(k_{1}\) we could obtain a visual distortion effect similar to the previous works. However, in order to increase the model's generaliza
Figure 4: A sample showing the synthesized fisheye- perspective image pair
Figure 3: Shows the distortion parametersβ simulation to figure out the proper ones for fisheye image generation
tion ability and meanwhile keep the parameters consistent, we chose \(k_{2},k_{3},k_{4},k_{5}\) to be as simple as possible but nonzero. FIGURE 4 shows a set of samples of our synthesized dataset with 12 different fisheye distortion effects.
### Deep Rectification and Calibration Network
In this section, we mainly exploit the details of the two major modules of our model, namely the W-Pix2PixGAN model and the Calibration Module with Resnet34 as the backbone. Meanwhile, we will introduce the training scheme and the loss function designed.
As shown in FIGURE 1, our full model mainly consists of two major deep neural networks. The first is the Rectification Module, a W-Pix2PixGAN model that performs a preliminary and quick rectification of a given fisheye image. The second is the Calibration Module with ResNet34 as the backbone; we built this module with an architecture (e.g., filter sizes and convolutional layer designs) as similar as possible to the previous work [5]. This module acts as a universal regressor estimating the 9 important parameters, including the distortion parameters \(K_{d}\).
Given an RGB fisheye image \(I\) of size \(H\times W\), a rectified semantic map \(\mathbb{H}\in\mathbb{R}^{H\times W}\) is generated by the Rectification Module, and this semantic feature is then used as guidance, concatenated with the original fisheye image to create a new feature \(F\). This new feature is fed into the Calibration Module to learn the inter-pixel relationship between the curved lines and the corresponding rectified lines in a high-level manifold, and finally to predict the 9 parameters through a multi-layer perceptron network. Thus, for our model, every training sample contains: (1) a fisheye image \(I\), (2) the ground truth of the corresponding rectified image map \(H\), (3) the ground truth of the distortion parameters \(K_{d}\).
We used the architecture of the Pix2PixGAN for the W-Pix2PixGAN model, which includes the U-Net structure for the generator and the patch-discriminator structure. However, since we needed the model to learn a mapping between two distributions, the fisheye distribution and the rectified distribution, we used Instance-Normalization instead of Batch-Normalization. Initially, training the Pix2PixGAN model was challenging due to an inherent limitation of the GAN objective, which effectively minimizes the Jensen-Shannon divergence between the real and fake distributions up to a constant. When the two distributions barely overlap in the high-dimensional manifold, this objective provides little learning signal. To resolve this, we used a \(16\times 16\) patch design and modified the discriminator's architecture to use a linear layer instead of a sigmoid layer, so that the output is a regression value used as the GAN loss. Overall, we used the original structural design of the generator and the \(16\times 16\) patch-discriminator design with four convolutional layers.
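A minimal PyTorch sketch of such a patch critic is given below; the kernel sizes and channel counts follow TABLE 1, while the InstanceNorm placement and LeakyReLU slope are assumptions.

```python
import torch
import torch.nn as nn

class PatchCritic(nn.Module):
    """Patch discriminator used as a WGAN critic (layer sizes as in TABLE 1).

    Input: the 6-channel concatenation of the fisheye image with either the real
    or the generated perspective image. The last layer is linear (no sigmoid), so
    the mean patch score can be used directly in the EM-distance losses below.
    """
    def __init__(self, in_ch=6):
        super().__init__()
        def block(cin, cout, norm=True):
            layers = [nn.Conv2d(cin, cout, kernel_size=4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(cout))   # instance norm instead of batch norm
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *block(in_ch, 64, norm=False),
            *block(64, 128),
            *block(128, 256),
            *block(256, 512),
            nn.Conv2d(512, 1, kernel_size=4, stride=2, padding=1),  # linear patch scores
        )

    def forward(self, fisheye, candidate):
        return self.net(torch.cat([fisheye, candidate], dim=1))
```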
**Calibration Module.** In order to validate our assumption about using a GAN as the semantic generation part, we needed to control variables. Thus, we tried to follow the architecture of the previous work's design as closely as possible; however, most architectural details were not clearly indicated. This module is designed to estimate the distortion parameters from the concatenated features. As mentioned above, the input feature for this module is the concatenation of the rectification map \(H\) and the raw fisheye image \(I\), with size \(H\times W\times 6\). As shown in FIGURE 1, we applied a 4-level ResNet-34 [15] as the backbone for this module. A high-level dense feature map
output from the L1–L4 ResNet blocks is then fed to 2 additional convolutional layers with LeakyReLU activation to introduce more nonlinearity. Three fully connected (FC) layers are then attached after the last convolutional layer. In order to restrict the model's learning behavior, we did not introduce any dropout within the FC layers, and as this is a regression problem, we used only linear activations within the FC layers. The last FC layer predicts a 9-D vector representing the distortion parameters, denoted \(K_{d}\).
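As a rough PyTorch sketch of this module (the ResNet-34 backbone, two extra LeakyReLU convolutions, three linear FC layers, and the 9-D output follow the description above, while the exact channel widths, pooling, and FC sizes are our assumptions):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class CalibrationModule(nn.Module):
    """Regress the 9 distortion parameters K_d from the raw fisheye image
    concatenated with the GAN-rectified guidance (6 channels in total)."""
    def __init__(self):
        super().__init__()
        backbone = resnet34(weights=None)            # recent torchvision API
        # Accept 6 input channels instead of 3.
        backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Keep everything up to layer4 (the "L1-L4" blocks); drop avgpool/fc.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.extra = nn.Sequential(                  # two extra convs for more nonlinearity
            nn.Conv2d(512, 512, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 512, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),                 # pooling is our addition to fix the FC input size
        )
        self.fc = nn.Sequential(                     # 3 FC layers, linear activations, no dropout
            nn.Linear(512, 256), nn.Linear(256, 64), nn.Linear(64, 9),
        )

    def forward(self, fisheye, guidance):
        x = torch.cat([fisheye, guidance], dim=1)
        x = self.extra(self.features(x)).flatten(1)
        return self.fc(x)                            # predicted 9-D vector K_d
```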
**Rectification Layer.** In this module, we followed the geometric model in Eq. (10) to iteratively remove the distortion using the predicted parameters and bilinear interpolation.
\[P_{d}=\tau(p,K_{d})=\begin{bmatrix}u_{0}\\ v_{0}\end{bmatrix}+\frac{r(\theta)p}{\|p\|_{2}} \tag{1}\]
where the pixel coordinate in the rectified image is \(\mathcal{P}=(x,y)\), and the pixel coordinate in the fisheye image is \(\mathcal{P}_{\mathcal{D}}=(x_{d},y_{d})\).
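A minimal sketch of this layer is given below; it reuses the polynomial \(r(\theta)\) form from the synthesis sketch above and relies on OpenCV's bilinear remapping. The layout of \(K_{d}\) (principal point followed by polynomial coefficients) is an assumption for illustration only.

```python
import cv2
import numpy as np

def rectify(fisheye_img, K_d, f=300.0):
    """Undistort a fisheye image with predicted parameters via bilinear remapping.

    Assumed layout of K_d: (u0, v0, k1, k2, ...) -- illustrative only; the exact
    9-parameter ordering is defined by the camera model in Appendix A.
    """
    h, w = fisheye_img.shape[:2]
    u0, v0, k = K_d[0], K_d[1], K_d[2:]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x, y = xs - u0, ys - v0
    r_u = np.sqrt(x ** 2 + y ** 2) + 1e-8            # radius in the rectified output image
    theta = np.arctan(r_u / f)                       # viewing angle under the pinhole model
    r_d = f * sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))
    map_x = (u0 + x / r_u * r_d).astype(np.float32)  # where to sample in the fisheye input
    map_y = (v0 + y / r_u * r_d).astype(np.float32)
    return cv2.remap(fisheye_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```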
### Loss Function and Training Scheme
In our network, which performs both quick rectification by W-Pix2PixGAN and distortion parameter calibration by a ResNet-based CNN, we performed supervised training for both modules. To pre-train the GAN, we provided a pair of images: a fisheye image denoted as \(Real_{A}\) and a ground truth picture of the rectified perspective image denoted as \(Real_{B}\). The learning objective of the GAN was to learn a direct mapping between the fisheye image and the generated rectified image, denoted as \(Fake_{B}\). Following the scheme in FIGURE 1, we next used the generated rectified image as guidance and concatenated it with the raw fisheye image to create a new feature map, which was then fed into the Calibration Module. This learning was supervised by the ground truth of the 9 distortion parameters \(K_{d}\), and in turn, this network was trained to perform a universal regression to predict the corresponding parameters using the concatenated feature map.
We integrated the original Pix2PixGAN model [6] with the Wasserstein GAN's idea of EM distance [7] to achieve a continuous GAN loss for training. This enabled our model to successfully learn the mapping function between two significantly different pixel distributions. We modified the original MSE loss between the probability output distribution from the discriminator and the truth distribution (either all ones or all zeros, representing being real and being fake, respectively) to the EM distance by removing the last sigmoid layer of the discriminator, \(f_{w}\). The input fed into the discriminator was identical to the original Pix2PixGAN's design, where we concatenated \(Real_{A}\) to \(Real_{B}\) as a new distribution, \(P_{r}\), to train the discriminator to recognize the real distribution.
| Layer Number | Kernel Information | Receptive Field |
| --- | --- | --- |
| conv_layer1 | \([4\times 4, 64]\), s = 2, p = 1 | 4 |
| conv_layer2 | \([4\times 4, 128]\), s = 2, p = 1 | 10 |
| conv_layer3 | \([4\times 4, 256]\), s = 2, p = 1 | 22 |
| conv_layer4 | \([4\times 4, 512]\), s = 2, p = 1 | 46 |
| conv_layer5 | \([4\times 4, 1]\), s = 2, p = 1 | 70 |

Table 1: Summary of the receptive field of each convolutional layer. As shown in the table, this design achieves a receptive field of up to 70 at the last convolutional layer
Similarly, we concatenated \(Real_{A}\) to \(Fake_{B}\) as a new distribution in \(P_{g}\) to train the discriminator to recognize the fake distribution. The expectation of the output distribution from the discriminator was directly treated as the GAN loss. The discriminator loss and the generator loss are shown in Eq. 2 and Eq. 3, respectively.
\[\mathcal{L}_{D}=\mathbb{E}_{x\in P_{r}}\left[f_{w}(x)\right]-\mathbb{E}_{x\in P _{g}}\left[f_{w}(x)\right] \tag{2}\]
\[\mathcal{L}_{G}=-\mathbb{E}_{x\in P_{g}}\left[f_{w}(x)\right] \tag{3}\]
The loss function above, inspired by Wasserstein GAN, yields a continuous learning curve when the Pix2PixGAN is faced with two significantly different distributions; meanwhile, the generator's role is not only to fool the discriminator but also to generate an output as close to the ground truth as possible. Thus, we also utilized the original pixel loss, i.e., the \(L_{1}\) loss between the generator's output and the ground truth, shown in Eq. 4.
\[\mathcal{L}_{L1(G)}=\mathbb{E}\|y-G(x)\|_{1} \tag{4}\]
Overall, our final objective is shown in Eq.5.
\[G^{\star}=argmax_{D}\mathcal{L}_{D}+argmin_{G}\mathcal{L}_{G}+\lambda\mathcal{ L}_{L1(G)} \tag{5}\]
Lastly, the pseudo code of our training algorithm can be found below:
```
Require: α, the learning rate; c, the weight clipping parameter; m, the batch size;
         n, the number of critic iterations per generator iteration.
while within training epochs do
    for t = 0, ..., n do
        Sample {x_A^(i)}_{i=1}^m from the fisheye data; sample {x_B^(i)}_{i=1}^m from the perspective data
        g_B^(i) = f_G(x_A^(i))
        d_A^(i) = cat(x_A^(i), x_B^(i));  d_B^(i) = cat(x_A^(i), g_B^(i))
        G_w = ∇_w [ (1/m) Σ_i f_w(d_A^(i)) − (1/m) Σ_i f_w(d_B^(i)) ]
        w ← w + α · RMSProp(w, G_w)
        w ← clip(w, −c, c)
    end for
    Sample {x_A^(i)}_{i=1}^m from the fisheye data
    g_B^(i) = f_G(x_A^(i));  d_B^(i) = cat(x_A^(i), g_B^(i))
    G_θ = ∇_θ [ −(1/m) Σ_i f_w(d_B^(i)) + λ · L_{L1(G)} ]
    θ ← θ − α · RMSProp(θ, G_θ)
end while
```
**Algorithm 1** Training Algorithm for W-Pix2PixGAN
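The PyTorch sketch below mirrors one iteration of Algorithm 1 and Eqs. 2–5. The `generator` and `critic` correspond to the U-Net generator and the patch critic sketched earlier; optimizer choices, n, c, and λ are placeholders rather than the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def train_iteration(generator, critic, opt_G, opt_D, batches, n_critic=5, clip=0.01, lam=100.0):
    """One W-Pix2PixGAN iteration: n_critic critic updates, then one generator update.

    `batches` is an iterator yielding (fisheye, perspective) tensor pairs and is
    assumed to provide at least n_critic + 1 batches.
    """
    for _ in range(n_critic):
        real_A, real_B = next(batches)
        with torch.no_grad():
            fake_B = generator(real_A)
        # Minimizing E[f_w(fake)] - E[f_w(real)] maximizes L_D of Eq. 2.
        loss_D = critic(real_A, fake_B).mean() - critic(real_A, real_B).mean()
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()       # RMSProp per Algorithm 1
        for p in critic.parameters():                             # weight clipping to [-c, c]
            p.data.clamp_(-clip, clip)

    real_A, real_B = next(batches)
    fake_B = generator(real_A)
    # Eq. 3 plus the lambda-weighted L1 pixel loss of Eq. 4.
    loss_G = -critic(real_A, fake_B).mean() + lam * F.l1_loss(fake_B, real_B)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```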
The training was done on an Nvidia 1080Ti GPU for 500 epochs using PyTorch, and the learning rate was set to decay dynamically with respect to the validation performance.
The Training of the Calibration Module. In this module, the learning goal is to build a universal regressor to predict the 9 distortion parameters \(K_{d}\). Thus, ideally, we would apply an L2 loss on the prediction against the ground truth \(K_{gt}\). However, as shown in FIGURE 3, we found that
among all 9 parameters, \(K_{1}\) makes the most significant impact on the distortion effect, both at the center and at the edge of the image. Thus, we used a weighted L2 loss that places more emphasis on \(K_{1}\) via a parameter \(\beta\).
\[\mathcal{L}_{L2}=\frac{1}{9}[\beta\cdot(K_{g}(1)-K_{gt}(1))^{2}+\sum_{i=2}^{9}( K_{g}(i)-K_{gt}(i))^{2}] \tag{6}\]
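In code this weighted loss is essentially a one-liner; a sketch (assuming the first entry of the parameter vector corresponds to \(K_{1}\)) is:

```python
import torch

def weighted_l2(pred_K, gt_K, beta=32.0):
    """Eq. 6: mean squared error over the 9 parameters with K_1 up-weighted by beta."""
    w = torch.ones(9, device=pred_K.device)
    w[0] = beta                                   # K_1 dominates the distortion effect
    return (w * (pred_K - gt_K) ** 2).sum(dim=-1).mean() / 9.0
```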
Similarly, this training was implemented with PyTorch using Nvidia TITAN GPU device for 500 epochs, and the learning rate was set to decay dynamically with respect to the validation performance.
## 4 Experiment and Evaluation
### Implementation Details
We randomly selected 4 out of the total 12 distortion parameters and applied them to the WireFrame dataset, creating 20,000 training samples and 1,848 test samples. We trained the Rectification Module (W-Pix2PixGAN) for 500 epochs using the training scheme outlined in Section 3.4. We used an initial discriminator learning rate of \(Lr_{D}:=0.0009\), an initial generator learning rate of \(Lr_{G}:=0.0001\), and a batch size of 32. We also allowed for dynamic learning rate decay with respect to the validation performance to refine the GAN model output resolution during training.
We concatenated the output from the W-Pix2PixGAN with the raw fisheye image to create a new feature map and trained the calibration network for 500 epochs with an initial learning rate of 0.001 and a batch size of 16. We also allowed for dynamic learning rate decay with respect to the validation loss. We set \(\beta\), the weight penalizing the error on \(K_{1}\), to 32. Finally, during inference, we loaded the best-performing weights for both models and sequentially performed quick rectification, concatenation, calibration, and fine rectification.
### Evaluation Details
As the authors of the previous model have not yet published their code, we were unable to access their line detection module. To assess the impact of our approach, which replaces the line detection module with W-Pix2PixGAN, we assumed that our Calibration Module operates similarly to that of the previous work. As a measure of the quality of the guidance feature map, we concatenated the ground truth of distorted fisheye lines used in the previous work to the raw fisheye image. We then compared the rectified fisheye image using our approach to that of the previous work, using the predicted distortion parameters \(K_{Dpred}\). To evaluate the quality of the rectified image, we used the peak signal to noise ratio (PSNR) and the structure similarity index (SSIM) [22], following the evaluation metrics used in the previous work [5]. To assess the fairness of this comparison, we also compared the PSNR and SSIM scores of the baseline output to the ground truth of the perspective image. We then used these metrics to evaluate the performance of our W-Pix2PixGAN model for quick rectification. Finally, we used the distributions of the differences in PSNR and SSIM scores
between the baseline output and our model's output to construct 95% confidence intervals, and checked whether 0 was within each interval.
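A sketch of this evaluation protocol is shown below, using scikit-image's PSNR/SSIM implementations (recent versions with `channel_axis`) and a normal-approximation interval; the z-value of 1.96 and uint8 image format are assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def paired_ci(baseline_imgs, ours_imgs, gt_imgs, z=1.96):
    """95% CI on per-image (baseline - ours) differences in PSNR and SSIM (uint8 images)."""
    d_psnr, d_ssim = [], []
    for base, ours, gt in zip(baseline_imgs, ours_imgs, gt_imgs):
        d_psnr.append(peak_signal_noise_ratio(gt, base) - peak_signal_noise_ratio(gt, ours))
        d_ssim.append(structural_similarity(gt, base, channel_axis=-1)
                      - structural_similarity(gt, ours, channel_axis=-1))

    def ci(diffs):
        diffs = np.asarray(diffs)
        mean, se = diffs.mean(), diffs.std(ddof=1) / np.sqrt(len(diffs))
        return mean - z * se, mean + z * se       # zero inside the interval => no significant gap

    return ci(d_psnr), ci(d_ssim)
```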
## 5 Experiment Results
### The Performance by Quick Rectification Module
As one of our objectives is to provide direct and quick rectification for any fisheye image without going through calibration, either by human labor or through a computationally heavy calibration network with ResNet as the backbone, we put considerable effort into refining our W-Pix2PixGAN model, and accordingly we report the PSNR and SSIM computed on the 1,848 test samples with 4 randomly selected distortion parameter sets applied. We then separately sampled the quick rectification performed by the GAN for different distortions, such as _minor distortion_, _drum-fisheye image_, _severe-drum-fisheye image_, _full-frame fisheye image_, and _severe-full-frame-fisheye image_. From minor distortion to severe full-frame fisheye distortion, as shown in FIGURE 5, our W-Pix2PixGAN model performs quick and high-resolution rectification directly from a given fisheye image by learning a universal pixel-to-pixel mapping relationship.
In TABLE 2, we summarize the PSNR and SSIM results. Compared to the previous work's overall average PSNR of 27.61 and SSIM of 0.8746 obtained via full-pipeline rectification using the calibrated parameters, and given that ours is solely a quick rectification by image translation performed by W-Pix2PixGAN, the quantitative performance of our GAN model alone is acceptable.
Figure 5: Our Quick Rectification Module demonstrates effective performance, rectifying curved structures back to straightened forms with high resolution.
### Full-pipeline Comparison to Previous Work
Following the evaluation protocol outlined in Section 4.2, we conducted an end-to-end rectification process, beginning with the input of raw fisheye images and proceeding to rectify them using predicted distortion parameters. The calibration network remained fixed throughout, with the only variation being the guidance semantic concatenated with the fisheye image, using both ground truth fisheye lines and the output from our GAN. However, our attempts to replicate the quantitative results of the previous work in terms of PSNR and SSIM, as presented in TABLE 3, were unsuccessful. This may be attributed to the fact that duplicating a specific neural network necessitates a more comprehensive understanding of its details, despite our efforts to recreate the calibration network based on the information provided in the previous work's publication.
However, both yielded enhanced performance compared to solely using the Quick Rectification Module, with a pair of very close averaged results. Following the evaluation pipeline in Section 4.2, we constructed a 95% confidence interval on the distribution of differences between the baseline model and our model for both PSNR and SSIM, respectively, as shown in FIGURE 7.
The results presented in TABLE 4 demonstrate that we have obtained narrow confidence intervals for both PSNR and SSIM, encompassing a range where zero difference is observed. This indicates that our approach, which replaces the line detection model with W-Pix2PixGAN while utilizing the
|  | AVG. PSNR | AVG. SSIM |
| --- | --- | --- |
| Minor Distortion | 27.7673 | 0.8733 |
| Full-frame Fisheye Distortion | 23.4372 | 0.7431 |
| Drum-fisheye Distortion | 24.357 | 0.7823 |
| Full Dataset with 4 Random Distortions | 22.343 | 0.7185 |

Table 2: The summary of the W-Pix2PixGAN's performance on each separated distortion set and on the full dataset with 4 randomly selected distortions
|  | Average PSNR | Average SSIM |
| --- | --- | --- |
| Our Approach | 23.4717 | 0.7344 |
| Baseline via Ground Truth | 23.4263 | 0.7326 |

Table 3: The summarized results comparing the averaged PSNR and SSIM of the ground-truth-based baseline and our approach
ground truth for distorted fisheye lines as an upper bound, does not exhibit a significant disparity. Thus, we consider our approach to be comparable to the previous work. Additionally, we provide a compilation of rectification performance achieved by utilizing the predicted distortion parameters below.
|  | PSNR Difference | SSIM Difference |
| --- | --- | --- |
| Confidence Interval at 95% | [-0.0369, 0.0538] | [-0.0016, 0.0018] |
| Marginal Error | ±0.008479 | ±0.00012 |

Table 4: Confidence interval at the 95% significance level for PSNR and SSIM
Figure 6: Experimental results demonstrate the rectification performance achieved through the predicted distortion parameters using both our proposed approach and the baseline method.
Figure 7: Shows the distribution of the SSIM difference between the baseline performance and our modelβs performance
The results on the structured dataset are promising. As mentioned in Section 4.2, this experiment is unfavorable to our approach because we compare our method against the ground truth of the distorted lines, yet our approach still shows a statistically comparable result with a slight improvement based on TABLE 3. Meanwhile, the baseline approach using a line detection model benefits from the obvious presence of lines within distorted images and is therefore expected to perform well on structured datasets, such as the WireFrame dataset used in this experiment. However, the previous work's approach might show limitations when faced with unstructured datasets, such as human faces, where line edges are not clearly detectable. In contrast, our GAN-based approach is not limited by the nature of the dataset, whether structured or unstructured. This motivates a further experiment on datasets such as CelebA [23], where the images might not contain rich line information for the baseline approach to exploit.
## 6 Conclusions
In this paper, we presented an enhanced approach for improving fisheye image calibration and rectification using a multi-contextual neural network. Our method incorporates a GAN-based semantic-guidance generator, which provides a ResNet-based calibration network with a ground-truth-like semantic feature, enabling end-to-end automatic fisheye image rectification from a single input image. Due to the unavailability of detailed implementation information regarding the secondary calibration network from the previous work, we were unable to replicate their exact experimental results. However, we anticipate performing a thorough evaluation once the authors of the previous work release their implementation. Statistically, we have demonstrated that our approach does not exhibit a significant difference compared to using the ground truth of distorted fisheye lines as an upper bound for the output of the previous work's line detection module. This validates our assumption that improved guidance leads to better calibration, as the direct utilization of a ground-truth-like feature proves advantageous over detecting distorted lines as the guidance. Consequently, refining the GAN model becomes crucial. Furthermore, the baseline approach may encounter challenges when confronted with unstructured images such as human faces. Hence, for future work, we plan to apply our approach's pipeline to unstructured datasets, such as Celeb-A [23]. Recent advancements in transformer-based structures have demonstrated significant improvements in visual reconstruction and regression tasks [24, 25, 26]. In our future research, we aim to explore the potential of utilizing these advanced transformer models for our Calibration Module, comparing their performance against our current _ResNet34_ architecture. In summary, our future research will concentrate on two main objectives: (1) improving our model architecture through the incorporation of contemporary advancements in transformers, and (2) reassessing the complete pipeline when the authors of the previous work make their implementations publicly available.
|
2307.13239 | RoSAS: Deep Semi-Supervised Anomaly Detection with
Contamination-Resilient Continuous Supervision | Semi-supervised anomaly detection methods leverage a few anomaly examples to
yield drastically improved performance compared to unsupervised models.
However, they still suffer from two limitations: 1) unlabeled anomalies (i.e.,
anomaly contamination) may mislead the learning process when all the unlabeled
data are employed as inliers for model training; 2) only discrete supervision
information (such as binary or ordinal data labels) is exploited, which leads
to suboptimal learning of anomaly scores that essentially take on a continuous
distribution. Therefore, this paper proposes a novel semi-supervised anomaly
detection method, which devises \textit{contamination-resilient continuous
supervisory signals}. Specifically, we propose a mass interpolation method to
diffuse the abnormality of labeled anomalies, thereby creating new data samples
labeled with continuous abnormal degrees. Meanwhile, the contaminated area can
be covered by new data samples generated via combinations of data with correct
labels. A feature learning-based objective is added to serve as an optimization
constraint to regularize the network and further enhance the robustness w.r.t.
anomaly contamination. Extensive experiments on 11 real-world datasets show
that our approach significantly outperforms state-of-the-art competitors by
20%-30% in AUC-PR and obtains more robust and superior performance in settings
with different anomaly contamination levels and varying numbers of labeled
anomalies. The source code is available at https://github.com/xuhongzuo/rosas/. | Hongzuo Xu, Yijie Wang, Guansong Pang, Songlei Jian, Ning Liu, Yongjun Wang | 2023-07-25T04:04:49Z | http://arxiv.org/abs/2307.13239v1 | # RoSAS: Deep Semi-supervised Anomaly Detection with Contamination-resilient Continuous Supervision
###### Abstract
Semi-supervised anomaly detection methods leverage a few anomaly examples to yield drastically improved performance compared to unsupervised models. However, they still suffer from two limitations: 1) unlabeled anomalies (i.e., anomaly contamination) may mislead the learning process when all the unlabeled data are employed as inliers for model training; 2) only discrete supervision information (such as binary or ordinal data labels) is exploited, which leads to suboptimal learning of anomaly scores that essentially take on a continuous distribution. Therefore, this paper proposes a novel semi-supervised anomaly detection method, which devises _contamination-resilient continuous supervisory signals_. Specifically, we propose a mass interpolation method to diffuse the abnormality of labeled anomalies, thereby creating new data samples labeled with continuous abnormal degrees. Meanwhile, the contaminated area can be covered by new data samples generated via combinations of data with correct labels. A feature learning-based objective is added to serve as an optimization constraint to regularize the network and further enhance the robustness w.r.t. anomaly contamination. Extensive experiments on 11 real-world datasets show that our approach significantly outperforms state-of-the-art competitors by 20%-30% in AUC-PR and obtains more robust and superior performance in settings with different anomaly contamination levels and varying numbers of labeled anomalies. The source code is available at [https://github.com/xuhongzuo/rosas/](https://github.com/xuhongzuo/rosas/).
keywords: Anomaly detection, Anomaly contamination, Continuous supervision, Semi-supervised learning, Deep learning
Footnote β : journal: Information Processing and Management
## 1 Introduction
Anomaly detection is to identify exceptional data objects that are deviated significantly from the majority of data, which has wide applications in many vital domains, e.g., network security, financial surveillance, risk management, and AI medical diagnostics (Pang et al., 2021). Anomaly detection is often posited as an unsupervised problem due to the difficulty of accessing adequate labeled data (Han et al., 2022; Jiang et al., 2023). The past decade has witnessed a plethora of unsupervised anomaly detection methods that estimate/learn data normality via various data characteristics (e.g., proximity, probability, or clustering membership) or deep models (e.g., different kinds of Autoencoders or generative adversarial networks). However, these unsupervised methods often have many false alarms which can overwhelm human analysts, leading to the failure of investigating real threats. It is challenging, if not impossible, to accurately detect true anomalies of real interest without any prior information indicating what kind of data are anomalies.
In fact, in many real-world applications, there are often a few readily accessible anomaly examples. For example, some abnormal events such as credit card frauds or insiders' unauthorized access are reported (by users) or logged
(in the system). Small genuine anomaly data can be directly retrieved from these records, without requiring extra annotations. This naturally inspires us to harness these true anomalies in combination with unlabeled data when training detection models. This learning paradigm falls into the category of semi-supervised learning (Chen et al., 2019; Kang et al., 2021; Van Engelen and Hoos, 2020; Yu et al., 2018) that permits using small labeled data as well as a large amount of unlabeled data. Recently, with the help of dozens of anomaly examples, semi-supervised methods have shown drastically improved detection performance compared to unsupervised methods that work on unlabeled data only (Ding et al., 2021, 2022; Jiang et al., 2023; Pang et al., 2018, 2019, 2023; Zhou et al., 2021, 2022).
By summarizing prior arts, this paper first proposes a general deep semi-supervised anomaly detection framework by introducing a two-stage network structure and a general learning objective. This framework presents a unifying view of this research line. More importantly, this framework reveals the following two key limitations of existing deep semi-supervised anomaly detection models that we aim to address in this study:
_Robustness w.r.t. anomaly contamination._ Many studies (Pang et al., 2018; Ruff et al., 2020; Wu et al., 2021; Zhou et al., 2021) assume all the unlabeled data as normal since anomalies are rare events. However, some anomalies are still hidden in the unlabeled set (i.e., _anomaly contamination_). This contamination might disturb anomaly detection models and blur the boundaries of normal patterns, leading to the potential overfitting problem. Some attempts (Pang et al., 2019, 2021, 2023) have been made to address this problem by using a Gaussian prior when defining optimization targets or using concatenated data pairs as augmented training data.
_Continuous supervision of anomaly score optimization._ Anomaly detection models are typically required to output anomaly scores to indicate the degree of being abnormal for human investigation of the top-ranked anomalies. However, current models only use discrete supervision information, e.g., binary optimization targets (Pang et al., 2018, 2019, 2021; Ruff et al., 2020; Wu et al., 2021; Zhou et al., 2021) or ordinal class labels (Pang et al., 2020, 2023), to optimize anomaly scores that essentially take on a continuous distribution. The lack of continuous supervision may result in suboptimal learning of anomaly scores. To the best of our knowledge, we are the first to raise this problem in anomaly detection.
To exemplify the issues described above, we use a toy dataset1 in Figure 1. Figure 1 (a) visualizes the data with ground-truth annotations, in which the left panel uses the two most relevant dimensions as coordinate axes and the right panel is the T-SNE (Van der Maaten and Hinton, 2008) result. Most existing models use the contaminated discrete supervisory signals directly supplied by raw labels of the semi-supervised setting, as shown in Figure 1 (b). Data samples in this supervision are labeled by discrete values, and more importantly, this supervision is biased by unlabeled anomalies, i.e., anomaly contamination (e.g., two gray triangles highlighted in the blue rectangle). This supervision is not indicative enough to support the detection of the hard anomalies that are mixed up with inliers, or similar to the unlabeled anomalies. As shown in Figure 1 (d), five current state-of-the-art semi-supervised detectors suffer from these issues and fail to yield satisfactory detection results.
Footnote 1: This toy dataset is generated via the make_classification function of the Scikit-learn library (Pedregosa et al., 2011). The dataset is described by ten features, including three informative features, five redundant features (i.e., random linear combinations of the informative features), and two noisy features. The anomaly class contains three clusters.
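For concreteness, a dataset of this kind can be generated roughly as follows; the sample count, class weights, and random seed are illustrative choices not stated in the footnote, and scikit-learn applies `n_clusters_per_class` to both classes.

```python
from sklearn.datasets import make_classification

# Ten features: 3 informative, 5 redundant (linear combinations of the informative
# ones), and the remaining 2 pure noise; each class is spread over three clusters.
X, y = make_classification(
    n_samples=1000,
    n_features=10,
    n_informative=3,
    n_redundant=5,
    n_repeated=0,
    n_classes=2,
    n_clusters_per_class=3,
    weights=[0.95, 0.05],   # the anomaly class (label 1) is rare
    flip_y=0.0,
    random_state=42,
)
anomalies, inliers = X[y == 1], X[y == 0]
```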
To fill these gaps, this paper further proposes a novel Robust deep Semi-supervised Anomaly Scoring method (termed RoSAS), in which the produced anomaly scores are optimized via _contamination-resilient continuous supervisory signals_. RoSAS follows our general network structure consisting of a feature representation module and an anomaly scoring module to directly yield anomaly scores, where the whole process is optimized in an end-to-end manner. Specifically, we first propose a mass interpolation method to diffuse the abnormality of labeled anomaly examples to the unlabeled area, which yields augmented data samples. As the interpolation process is measurable according to the diffusion intensity, these newly created data can be labeled with continuous values that faithfully indicate their abnormal degrees, thereby offering continuous supervision to straightly optimize the anomaly scoring mechanism. The located area of anomaly contamination can be covered by the new data generated by the interpolation of data combinations with correct labels. Even if the anomalies hidden in unlabeled data are used in interpolation, their negative effects can be diluted when they are grouped with genuine normal data or real anomalies in a mass. Consequently, new supervisory signals can better tolerate the anomaly contamination problem. Besides, our optimization process encourages the consistency between the anomaly score of each augmented sample and the score interpolation of their corresponding original data samples. This consistency learning can produce smoother anomaly scores to describe continuous abnormal degrees better. Additionally, we pose a feature learning-based objective that
ensures effective isolation of labeled anomalies in the intermediate representation, which serves as an optimization constraint to further regularize the network and enhance the robustness w.r.t. anomaly contamination.
Figure 1 (c) illustrates the devised contamination-resilient continuous supervision, which is not only noise-tolerant but very faithful to the ground truth, demonstrating significantly higher supervision quality. Therefore, as depicted in Figure 1 (d), RoSAS produces more reliable anomaly scoring results than competing methods that rely on raw supervision information.
Our main contributions are summarized as follows.
* Motivated by the two limitations manifested by the general framework of this research line, we propose a novel semi-supervised anomaly detection method RoSAS, in which we devise a new kind of contamination-resilient continuous supervisory signals to optimize anomaly scores in an end-to-end manner.
* We propose a mass interpolation method in RoSAS to generate augmented data samples together with continuous values as data labels. In addition to offering continuous supervision, the created supervisory signals can tolerate anomaly contamination.
* We introduce consistency learning which encourages RoSAS to produce smoother anomaly scores, thus better describing abnormal degrees. We also set a feature learning-based objective to regularize RoSAS. The intermediate representation is constrained to further enhance its robustness w.r.t. anomaly contamination.
Extensive experiments show that: 1) RoSAS achieves significant AUC-PR and AUC-ROC improvement over state-of-the-art semi-supervised anomaly detection methods; 2) RoSAS obtains more robust and superior performance in
Figure 1: (**a**) Ground-truth labels of a toy case (the left panel uses two raw features that are informative to show the data distribution, and the right panel shows the 2-D data transformed by T-SNE). In the following sub-figures, we rely on the two raw informative features to visualize. (**b**) Raw supervision information (i.e., _contaminated discrete supervision_) directly offered by the semi-supervised setting. (**c**) _Contamination-resilient continuous supervision_ generated by our model. (**d**) Anomaly scoring results of our method RoSAS vs. existing approaches including PReNet (Pang et al., 2023), DevNet (Pang et al., 2019, 2021), FeaWAD (Zhou et al., 2021), DSAD (Ruff et al., 2020), and TSS (Zhang et al., 2017). The blue rectangle in (a)(b)(c) is used to highlight two real anomalies that are hidden in the unlabeled set (i.e., anomaly contamination). These two noisy points may mislead the learning model, but they are effectively covered in our supervision. The generated augmented samples in this area are labeled with higher values clearly indicating the anomalism of this field. Benefiting from the proposed contamination-resilient continuous supervision in (c), our method RoSAS produces more accurate anomaly scores than prior arts as shown in (d).
settings with different anomaly contamination levels and varying numbers of labeled anomalies. We also empirically show the advantage of the proposed contamination-resilient continuous supervisory signals over discretized, conventional ones and validate contributions of the consistency constraint in anomaly scoring and the regularizer based on feature learning.
## 2 Related Work
This section first reviews unsupervised anomaly detection and summaries semi-supervised models that exploit labeled anomaly examples.
### Unsupervised Anomaly Detection
Traditional unsupervised anomaly detection identifies anomalies according to different data characteristics like proximity and probability (Bandaragoda et al., 2018; Li et al., 2020; Liu et al., 2008). The burgeoning of deep learning has fueled a plethora of deep anomaly detectors. In this research line, many studies (Ding et al., 2019; Gong et al., 2019; Lv et al., 2023; Xu et al., 2019; Zhang et al., 2019) train Autoencoders or generative adversarial networks to reconstruct/generate the original inputs. Self-supervised methods (Golan & El-Yaniv, 2018; Shenkar & Wolf, 2022; Xu et al., 2023) define data-driven supervision and proxy tasks. These methods essentially learn intrinsic patterns of training data that are dominated by normal data, and loss values are directly used to estimate abnormal degrees during inference. In addition, some studies enhance traditional models by harnessing the strong representation capability of deep learning. Deep SVDD (Ruff et al., 2018) is based on support vector data description (Tax & Duin, 2004), and DIF (Xu et al., 2023) enhances the isolation process of (Liu et al., 2008) by proposing deep representation ensemble. Basic insights in mainstream deep anomaly detectors can be also achieved via non-deep models. The literature (Xu et al., 2021) uses tree models to realize the reconstruction pipeline. Although these unsupervised methods are intuitive and practical, without knowing real anomalies, they often lead to many false alarms which may overwhelm anomalies of real interest.
### Semi-supervised Anomaly Detection
In contrast, relatively few studies consider semi-supervised anomaly detection utilizing limited anomaly examples. In this category, we also review the related literature that uses both labeled normal data and labeled anomalies since they can also work under this scenario by treating unlabeled data as normal.
The study (Zhang et al., 2018) employs canonical clustering to divide labeled anomalies into \(k\) clusters and detect anomalies by a (\(k\)+1)-class classifier. Non-deep unsupervised anomaly detection methods can be also enhanced to leverage weak incomplete supervision. Barbariol & Susto (2022) extend ensemble-based isolation forest (Liu et al., 2008) by leveraging supervision information to filter ensemble members, which improves detection performance and simultaneously reduces computational costs.
This incomplete supervision can be also leveraged in deep models to learn a good representation. Some methods map input data to a representation space and explicitly impose specific criteria such as triplet loss (Pang et al., 2018) and anomaly-informed one-class loss (Ruff et al., 2020) upon the representation. They further employ distance-based anomaly scoring protocols upon this learned representation space. Besides, data representations can be also implicitly learned via Autoencoders or generative adversarial networks. Huang et al. (2020) propose a novel encoder-decoder-encoder structure. It modifies the reconstruction loss to force the network to reconstruct labeled anomalies to pre-defined noises. Bidirectional GAN is used in (Tian et al., 2022), in which labeled anomalies are used to learn a probability distribution, and the distribution can assign low-density values to labeled anomalies. These methods are indirectly optimized to yield abnormal degrees of data samples, and anomaly scores can be only obtained in an isolated manner.
Some advanced deep approaches work in an end-to-end fashion to directly optimize the produced anomaly scores. The pioneering work in this research line (Pang et al., 2019, 2021) assumes anomaly scores of normal data follow a Gaussian distribution and yield the reference score. It further employs the z-score function to define the deviation loss to ensure anomaly scores of labeled anomalies significantly deviate from the reference. An Autoencoder is added to the above framework in (Zhou et al., 2021). In addition to a deviation loss imposed on the derived anomaly scores, the reconstruction error of labeled anomalies is optimized to be as large as a pre-defined margin. By defining the
ordinal target of paired data samples, Pang et al. (2019) use mean absolute error to optimize anomaly scores. The cross-entropy loss is used in (Ding et al., 2022) to classify labeled anomalies, transferred pseudo anomalies, and latent residual anomalies from unlabeled data.
It is also noteworthy that, except for tabular data or images, related studies also consider this semi-supervised learning paradigm of anomaly detection in graph data (Ding et al., 2021; Dou et al., 2020; Zhou et al., 2022) and time series (Carmona et al., 2022; Huang et al., 2022).
## 3 A General Framework of Deep Semi-supervised Anomaly Detection
**Problem Statement.** We assume a few labeled anomaly examples \(\mathcal{X}_{A}\) are accessible in addition to large-scale unlabeled training data \(\mathcal{X}_{U}\), where \(|\mathcal{X}_{A}|\ll|\mathcal{X}_{U}|\), i.e., the quantity of labeled anomalies is very small compared to the number of true anomalies and the whole dataset. Given the training data \(\mathcal{X}=\mathcal{X}_{U}\cup\mathcal{X}_{A}\), an anomaly detection model is trained to assign higher scores to data samples with higher likelihoods of being anomalies.
### General Framework
We below introduce a general framework of deep semi-supervised anomaly detection, and this framework can well cover representative existing models (Carmona et al., 2022; Pang et al., 2018, 2019, 2021, 2023; Ruff et al., 2020; Wu et al., 2021; Zhou et al., 2021) and summarize their limitations.
We first define the network structure of the framework. Let \(f:\mathcal{X}\mapsto\mathbb{R}\) represent the network that outputs anomaly scores given the input data \(\mathcal{X}\). The whole procedure can be divided into a feature representation module \(\phi:\mathcal{X}\mapsto\mathbb{R}^{H}\) and an anomaly scoring module \(\psi:\mathbb{R}^{H}\mapsto\mathbb{R}\). Feature representation module \(\phi\) aims to map \(\mathcal{X}\) into a feature space with dimensionality \(H\). Anomaly scoring module \(\psi\) outputs final anomaly scores based on the intermediate representation. Anomaly detection network \(f\) is denoted as:
\[f(\mathbf{x})=\psi(\phi(\mathbf{x};\Theta_{\phi});\Theta_{\psi}), \tag{1}\]
where \(\Theta_{\phi}\) and \(\Theta_{\psi}\) are network parameters in \(\phi\) and \(\psi\).
We then define a general learning objective. Under the semi-supervised setting, each data sample in the training set can be assigned a target. Let \(\mathcal{D}=\{(\mathbf{x},y)\in\mathcal{X}\times\mathcal{Y}\}\) with \(\mathcal{Y}=\{y^{+},y^{-}\}\) be a set of training samples, where \(y^{+}\) denotes labeled anomalies and \(y^{-}\) denotes unlabeled data. Although most of \((\mathbf{x},y^{-})\) are genuine normal samples, there are still some unlabeled anomalies that are wrongly assigned \(y^{-}\). Data augmentation techniques can also be used to obtain a new training set \(\tilde{\mathcal{D}}=\{(\tilde{\mathbf{x}},\tilde{y})\}\). A general objective function is defined as follows:
\[\begin{split}\min_{\Theta_{\phi},\Theta_{\psi}}\;&\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\Big[\mathcal{L}_{D}\big(\psi(\phi(\mathbf{x})),y\big)\Big]+\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\Big[\mathcal{L}_{D}^{\prime}\big(\phi(\mathbf{x}),y\big)\Big]\\ &+\mathbb{E}_{(\tilde{\mathbf{x}},\tilde{y})\sim\tilde{\mathcal{D}}}\Big[\mathcal{L}_{\tilde{D}}\big(\psi(\phi(\tilde{\mathbf{x}})),\tilde{y}\big)\Big]+\mathbb{E}_{(\tilde{\mathbf{x}},\tilde{y})\sim\tilde{\mathcal{D}}}\Big[\mathcal{L}_{\tilde{D}}^{\prime}\big(\phi(\tilde{\mathbf{x}})),\tilde{y}\big)\Big].\end{split} \tag{2}\]
The above equation can be interpreted as the optimization of the representation \(\phi(\cdot)\) and/or the final anomaly scores \(\psi(\phi(\cdot))\) by using supervision signals provided by the original data \(\mathcal{D}\) and/or the augmented data \(\tilde{\mathcal{D}}\).
### Generalization of Current Studies
As for the network structure in Eqn (1), different network structures are used according to data types and data characteristics, e.g., multi-layer perceptron net is used for multi-dimensional tabular data (Pang et al., 2018, 2019, 2021, 2023; Wu et al., 2021; Zhou et al., 2021), convolutional net is used for image data (Ruff et al., 2020), and temporal net is used for time series (Carmona et al., 2022).
The proposed objective function Eqn. (2) can well cover existing deep semi-supervised anomaly detectors by specifying each of its terms, as shown in Table 1. We below explain their instantiation method in detail.
* Deep SAD (Ruff et al., 2020) defines \(\mathcal{L}_{D}^{\prime}\). Upon the representation space, labeled anomalies are repulsed to be distant to a pre-defined center \(\mathbf{c}\) as far as possible, and unlabeled data are expected to be included in a compact hypersphere with the minimum volume taking \(\mathbf{c}\) as the center.
* FeaWAD (Zhou et al., 2021) first instantiates \(\mathcal{L}_{D}\). It is optimized to enlarge anomaly scores of labeled anomalies to a pre-defined margin \(e\) and maps scores of unlabeled data to zero. \(\mathcal{L}^{\prime}_{D}\) is further instantiated by a reconstruction loss with the help of an Autoencoder structure.
* DevNet (Pang et al., 2019, 2021) specifies \(\mathcal{L}_{D}\). It proposes a z-score-based deviation function by assuming a pre-defined Gaussian prior of anomaly scores and sampling reference scores \(\mu\) and standard deviation values \(\sigma\) from this distribution.
* PReNet (Pang et al., 2023) specifies \(\mathcal{L}_{\tilde{D}}\) as Mean Absolute Error (MAE) between the scores of concatenated pairs (anomaly-unlabeled, anomaly-anomaly, and unlabeled-unlabeled) and pre-defined ordinal regression targets (\(e_{1}\), \(e_{2}\), and \(e_{3}\)).
REPEN (Pang et al., 2018) and NCAD (Carmona et al., 2022) also repulse labeled anomalies in the representation space, as is done in Deep SAD. PLSD (Wu et al., 2021) is similar to PReNet but replaces the MAE loss with a cross-entropy loss. Therefore, these methods are omitted in Table 1.
### Limitations of Current Studies
By looking into Table 1, we perceive two key gaps in these existing approaches, i.e., _robustness w.r.t. contamination_ and _continuous supervision of optimization_.
#### 3.3.1 Robustness w.r.t. contamination
Deep SAD and FeaWAD use unlabeled data as an opposite data class against labeled anomalies. They define a specific loss term (starting with \(\mathbbm{1}_{y^{-}}\)) to _indistinguishably_ map all of these unlabeled data to a unified target. This operation seems to be reasonable due to the unsupervised nature of anomaly detection (i.e., anomalies are rare data). To further enhance detection performance, we need to consider the negative effect brought by anomaly contamination in unlabeled data and improve the model robustness. DevNet (Pang et al., 2019, 2021) assumes a Gaussian distribution prior of optimization targets. Due to the flexibility of Gaussian distribution, it can partially eliminate interference. PReNet (Pang et al., 2023) uses vector concatenation of data pairs to redefine three surrogate classes. This kind of data combination can resist small anomaly contamination since the interference of noisy samples can be mitigated when they are combined with genuine normal data or labeled anomalies.
#### 3.3.2 Continuous supervision
Anomaly scores produced by anomaly detection models are expected to indicate abnormal degrees, and human investigators can examine the reported suspicious data in descending order of anomaly scores. Deep SAD, DevNet and FeaWAD utilize discrete binary supervision to respectively map labeled anomalies and unlabeled data to _two extremes_ during training, but their models are required to output continuous anomaly scores during inference. Specifically, Deep SAD uses one fixed center **c** (unlabeled data are gathered at this center, and anomalies are repelled), FeaWAD directly maps anomaly scores to zero and a pre-defined margin \(e\), and DevNet first calculates z-scores of anomaly
| **Anomaly Detectors** | \(\mathcal{L}_{D}\) / \(\mathcal{L}_{\tilde{D}}\): **anomaly score optimization** | \(\mathcal{L}_{D}^{\prime}\) / \(\mathcal{L}_{\tilde{D}}^{\prime}\): **representation optimization** | **Robustness** | **Continuous supervision** |
|---|---|---|---|---|
| Deep SAD (Ruff et al., 2020) | – | one-class compactness of unlabeled data around a fixed center \(\mathbf{c}\) vs. repulsion of labeled anomalies from \(\mathbf{c}\) | ✗ | ✗ |
| FeaWAD (Zhou et al., 2021) | scores of unlabeled data mapped to zero, scores of labeled anomalies enlarged to a pre-defined margin \(e\) | Autoencoder reconstruction loss | ✗ | ✗ |
| DevNet (Pang et al., 2019, 2021) | z-score-based deviation loss w.r.t. a Gaussian prior of anomaly scores | – | partial | ✗ |
| PReNet (Pang et al., 2023) | MAE between scores of concatenated data pairs and pre-defined ordinal targets \(e_{1}\), \(e_{2}\), \(e_{3}\) | – | partial | ✗ |
| **RoSAS (ours)** | \(\ell\big(\psi(\phi(\tilde{\mathbf{x}})),\tilde{y}\big)+\ell\big(\psi(\phi(\tilde{\mathbf{x}})),\sum_{i=1}^{k}\lambda_{i}\psi(\phi(\mathbf{x}_{i}))\big)\) | \(\max\big(0,\,d(\phi(\mathbf{x}^{-}),\phi(\mathbf{q}))-d(\phi(\mathbf{x}^{+}),\phi(\mathbf{q}))+e\big)\) | ✓ | ✓ |

Table 1: Instantiation method and gaps of existing deep semi-supervised anomaly detection studies.
scores and employs zero and a margin \(e\) as two extreme targets. Instead of using two extremes, PReNet employs three pre-defined ordinal targets, but this is also a kind of discrete supervision. Prior arts utilize the above discrete supervision information to optimize the continuously distributed anomaly scores. Due to the lack of continuous supervision, these models may fail to learn how to subtly describe abnormal degrees, resulting in a suboptimal anomaly scoring mechanism.
## 4 The proposed RoSAS
This paper proposes a concrete deep semi-supervised anomaly detection method termed RoSAS. The overall procedure is shown in Figure 2. As described in Eqn. (1), RoSAS also follows the basic network structure \(f\) consisting of a feature representation module \(\phi(\cdot|\Theta_{\phi})\) and a scoring module \(\psi(\cdot|\Theta_{\psi})\). RoSAS is optimized by the loss function \(L\) and the regularizer \(L^{\prime}\).
The network architecture of \(\phi\) and \(\psi\) is determined according to the input data types and/or data characteristics. In terms of the design of loss function \(L\), the simplest way is to directly treat the whole unlabeled set as normal data, and discrete targets can be assigned to labeled anomaly examples and unlabeled data, as has been done in many prior studies (Pang et al., 2019; Zhou et al., 2021). However, these labels are inaccurate due to anomaly contamination and fail to sufficiently reflect anomaly scores that by definition take on a continuous distribution. It is very challenging to obtain reliable abnormal degrees of original training data since we do not exactly know whether one anomaly is more abnormal than another. Hence, we finally resort to synthesizing new data samples by diffusing the abnormality of these accessible labeled anomalies to the unlabeled area, and thus the abnormal degree is controllable. That is, the design of \(L\) is essentially to specify the term \(\mathcal{L}_{\tilde{D}}\) in Eqn. (2). Specifically, based on the original training mini-batch
Figure 2: The overall procedure of RoSAS. Each training mini-batch \(\mathcal{B}\) is composed of unlabeled data \(\mathcal{B}_{U}\) and anomaly examples \(\mathcal{B}_{A}\). The derived anomaly scores are end-to-end optimized by the loss function \(L\). \(L\) is defined based on contamination-resilient continuous supervision signals that are offered by the augmented samples \(\tilde{\mathcal{B}}\). A feature learning-based objective \(L^{\prime}\) on the intermediate representation \(\phi\) is added to further regularize the network. \(L\) and \(L^{\prime}\) are assembled via dynamic weight averaging \(\oplus\).
data \(\mathcal{B}\), we propose a mass interpolation method to create a set of augmented data samples attached with continuous supervision targets \(\mathcal{\tilde{B}}=\{(\mathbf{\tilde{x}},\tilde{y})\}\). Note that this new supervision can also resist the anomaly contamination problem. The contaminated area can be covered by new data samples generated via combinations of data with correct labels. Anomaly contamination also becomes less harmful when these notorious unlabeled anomalies are combined with genuine normal samples or labeled anomalies during the interpolation. Consequently, RoSAS successfully devises contamination-resilient, continuous supervision of anomaly score optimization.
Further, motivated by the potential generalization and regularization effect of multi-task learning (Vandenhende et al., 2021), we define an additional objective \(L^{\prime}\) upon the feature representation module \(\phi\) to encourage significant deviations of labeled anomalies in the intermediate representation space. The network can be regularized by this new optimization constraint, which further improves the robustness to anomaly contamination.
Two loss terms \(L\) and \(L^{\prime}\) are finally assembled via dynamic weight averaging \(\oplus\) to avoid manually setting a fixed weight. Dynamic weight averaging \(\oplus\) can balance the optimization pace at two loss terms.
We below present the design of the loss function \(L\) (Section 4.1), the regularization term \(L^{\prime}\) (Section 4.2), and the dynamic averaging \(\oplus\) (Section 4.3) in detail. We finally illustrate the procedure of RoSAS by giving its pseudo code (Section 4.4).
### Anomaly Score Optimization by Contamination-resilient Continuous Supervision
RoSAS first produces new augmented data samples with controllable and reliable abnormal degrees via the mass interpolation method. Compared to directly using contaminated discrete targets, RoSAS can optimize anomaly scores as a regression problem with faithful continuous targets.
Specifically, based on the original mini-batch data \(\mathcal{B}=\{(\mathbf{x},y)\}\) with \(y=1\) for labeled anomalies and \(y=-1\) for unlabeled data, RoSAS creates a novel mini-batch \(\mathcal{\tilde{B}}\) of augmented data samples by the mass interpolation. These augmented data samples are synthesized as a weighted summation of \(k\) original data samples. Different weights of candidates \(\{\lambda_{1},\cdots,\lambda_{k}\}\) produce continuous targets in new supervision. \(\mathcal{\tilde{B}}\) is defined as follows.
\[\mathcal{\tilde{B}}=\Big{\{}(\mathbf{\tilde{x}},\tilde{y})|\mathbf{\tilde{x}}= \sum_{i=1}^{k}\lambda_{i}\mathbf{x}_{i},\tilde{y}=\sum_{i=1}^{k}\lambda_{i}y_ {i}\Big{\}}, \tag{3}\]
where \(\sum_{i=1}^{k}\lambda_{i}=1\), \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{k}\subset\mathcal{B}\), and \(\lambda_{i}\) is sampled from a continuous distribution.
As for the distribution of \(\lambda\), inspired by (Zhang et al., 2018), RoSAS uses the Beta distribution, i.e., \(\lambda\sim\text{Beta}(\alpha,\alpha)\). This is because adjusting the distribution parameter \(\alpha\) can produce different types of weights, e.g., a uniform distribution when \(\alpha=1\) or an approximately truncated normal distribution when \(\alpha\) is a larger value. If \(\alpha>1\), interpolation weights will concentrate around 0.5, and some noisy labels might be produced (e.g., mixing two anomalies may yield a new sample in the normal manifold, but an anomalous label is given to this new sample). This is also known as "manifold intrusion" (Guo et al., 2019). To tackle this problem, we use \(\alpha=0.5\) by default. In doing so, interpolation weights are more likely to be close to 0 or 1, which keeps the interpolation located in the local regions of the original samples. Thus, these possible noisy labels can be reduced or eliminated.
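To make Eqn. (3) concrete, the following is a minimal PyTorch sketch of the mass interpolation (not the authors' released code). The function name, the random pairing of samples within the mini-batch, and the use of a symmetric Dirichlet, which reduces to Beta(\(\alpha,\alpha\)) for \(k=2\), are our assumptions.

```python
import torch

def mass_interpolation(x, y, k=2, alpha=0.5):
    """Sketch of Eqn. (3): augmented samples as convex combinations of k originals;
    y holds +1 (labeled anomaly) / -1 (unlabeled) targets."""
    b = x.shape[0]
    # Symmetric Dirichlet weights sum to one; for k=2 this matches Beta(alpha, alpha).
    lam = torch.distributions.Dirichlet(torch.full((k,), alpha)).sample((b,))  # (b, k)
    idx = torch.randint(0, b, (b, k))                   # which originals to combine (assumed)
    x_tilde = (lam.unsqueeze(-1) * x[idx]).sum(dim=1)   # weighted sum of inputs
    y_tilde = (lam * y[idx]).sum(dim=1)                 # continuous targets in [-1, 1]
    return x_tilde, y_tilde, idx, lam
```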
The loss function \(L\) measures the empirical risks of derived anomaly scores of augmented samples compared to the continuous targets. Additionally, we add a consistency term to measure the difference between each augmented sample's anomaly score and the weighted summation of their original data instances' anomaly scores using the same interpolation weights. This consistency learning is to encourage the network to produce smoother anomaly scores, thus better describing abnormal degrees. Therefore, \(L\) is finally defined as:
\[L=\mathbb{E}_{(\mathbf{\tilde{x}},\tilde{y})\sim\mathcal{\tilde{B}}}\bigg{[} \ell\Big{(}\psi(\phi(\mathbf{\tilde{x}})),\tilde{y}\Big{)}+\ell\Big{(}\psi( \phi(\mathbf{\tilde{x}})),\sum_{i=1}^{k}\lambda_{i}\psi(\phi(\mathbf{x}_{i})) \Big{)}\bigg{]}, \tag{4}\]
where \(\{\mathbf{x}_{i}\}_{i=1}^{k}\) is the original data samples when creating \(\mathbf{\tilde{x}}\) as defined in Eqn. (3), and \(\ell(\cdot,\cdot)\) is a base regression loss.
The loss function \(L\) not only fulfills continuous optimization but can tolerate anomaly contamination. The unlabeled set is still dominated by genuine normal data because of the rarity of anomalies. The contaminated area can be calibrated via new data samples that are augmented from a group of data with correct labels. Even if these noisy unlabeled anomalies are sampled in Eqn. (3), they are likely to be combined with labeled anomalies or real normal data. That is, the generation process of augmented data samples also dilutes the anomaly contamination in a simple yet effective manner. Therefore, RoSAS is more robust w.r.t. anomaly contamination.
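A corresponding sketch of the loss in Eqn. (4) is given below, assuming a hypothetical `model` that composes \(\phi\) and \(\psi\) and returns one score per sample; the smooth-\(\ell_{1}\) base loss follows the experimental settings reported later, and the function and variable names are ours.

```python
import torch.nn.functional as F

def scoring_loss(model, x, x_tilde, y_tilde, idx, lam):
    """Eqn. (4): regression to continuous targets plus a consistency term."""
    s_tilde = model(x_tilde).squeeze(-1)        # scores of augmented samples
    s_orig = model(x).squeeze(-1)               # scores of the original mini-batch
    s_mix = (lam * s_orig[idx]).sum(dim=1)      # interpolated original scores
    return F.smooth_l1_loss(s_tilde, y_tilde) + F.smooth_l1_loss(s_tilde, s_mix)
```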
### Regularization by Feature Learning
The feature representation module \(\phi:\mathcal{X}\mapsto\mathbb{R}^{H}\) maps input data into a new feature space. We further define a new loss term \(L^{\prime}\) upon this intermediate representation space, which serves as a new optimization constraint to regularize the network and further enhance the robustness.
To fully leverage these labeled anomalies, \(L^{\prime}\) is designed to learn a feature representation that can effectively repulse these labeled anomaly examples from unlabeled data (the majority of unlabeled data is normal). Let \(\mathbf{q}\) be an anchor data object, and we utilize the difference between the deviation of unlabeled-anchor and anomaly-anchor pairs to measure the separability of labeled anomalies, which is defined as follows.
\[L^{\prime}=\mathbb{E}_{\begin{subarray}{c}\mathbf{x}^{+}\sim\mathcal{B}_{A}\\ \mathbf{x}^{-}\sim\mathcal{B}_{U}\end{subarray}}\Big[\max\Big(d(\phi(\mathbf{x}^{-}),\phi(\mathbf{q}))-d(\phi(\mathbf{x}^{+}),\phi(\mathbf{q}))+e,\,0\Big)\Big], \tag{5}\]
where \(d(\cdot,\phi(\mathbf{q}))\) indicates the deviation given the anchor data, and \(e\) is a margin. Different distance functions or similarity measures can be used; for simplicity, we employ the Euclidean distance here. \(\mathcal{B}_{U}\) and \(\mathcal{B}_{A}\) are the unlabeled data and labeled anomalies in mini-batch \(\mathcal{B}\). In practical implementation, a mini-batch of anchor data is sampled from the unlabeled set along with mini-batch \(\mathcal{B}\), i.e., \(\mathbf{q}\in\mathcal{B}_{q},\mathcal{B}_{q}\subset\mathcal{X}_{U}\). Anchor data can also be determined as representative normal prototypes if labeled normal data are available.
It is noteworthy that \(L^{\prime}\) uses a relative and soft manner to judge whether these labeled anomalies are effectively separated by introducing a reference divergence degree \(d(\phi(\mathbf{x}^{-}),\phi(\mathbf{q}))\) between unlabeled data and anchor data. It avoids blindly enlarging \(d(\phi(\mathbf{x}^{+}),\phi(\mathbf{q}))\), i.e., the anomalies that have been successfully deviated are no longer required to be optimized; thus, the optimizer can focus on true errors. On the other hand, even if unlabeled anomalies are wrongly identified as anchor data \(\mathbf{q}\) or \(\mathbf{x}^{-}\) in Eqn. 5, this function can still work to isolate labeled anomalies.
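The regularizer of Eqn. (5) can be sketched as a triplet-style margin loss. We assume the anomaly, unlabeled, and anchor mini-batches are aligned element-wise and that the Euclidean distance is used; the function name is ours.

```python
import torch

def representation_regularizer(phi, x_anom, x_unlab, x_anchor, margin=1.0):
    """Eqn. (5): push labeled anomalies farther from anchors than unlabeled data."""
    z_a, z_u, z_q = phi(x_anom), phi(x_unlab), phi(x_anchor)
    d_uq = torch.norm(z_u - z_q, dim=-1)   # unlabeled-anchor deviation (reference)
    d_aq = torch.norm(z_a - z_q, dim=-1)   # anomaly-anchor deviation
    return torch.clamp(d_uq - d_aq + margin, min=0).mean()
```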
### Dynamic Averaging
Instead of setting a fixed weight, the loss term \(L\) and the regularizer \(L^{\prime}\) are assembled via dynamic weight averaging (Liu et al., 2019), i.e.,
\[wL+(1-w)L^{\prime}, \tag{6}\]
where \(w\) is defined according to the optimization pace (loss descending rate) of \(L\) and \(L^{\prime}\). \(w\) is defined as follows.
\[w=\frac{\exp\big(L/(T\bar{L})\big)}{\exp\big(L/(T\bar{L})\big)+\exp\big(L^{\prime}/(T\bar{L}^{\prime})\big)}, \tag{7}\]
where \(\bar{L}\) and \(\bar{L}^{\prime}\) are the average losses of \(L\) and \(L^{\prime}\) over the last training epoch, and \(T\) is the temperature as used in the softmax function.
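A minimal sketch of Eqns. (6)-(7) follows; treating the weight as a constant during back-propagation is an implementation assumption on our part, as is the function name.

```python
import torch

def combine_losses(L, L_reg, avg_L, avg_L_reg, T=2.0):
    """Eqns. (6)-(7): dynamic weight averaging of the two loss terms."""
    r = torch.exp(L.detach() / (T * avg_L))
    r_reg = torch.exp(L_reg.detach() / (T * avg_L_reg))
    w = r / (r + r_reg)                     # weight derived from loss descending rates
    return w * L + (1.0 - w) * L_reg
```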
### Algorithm of RoSAS
Algorithm 1 presents the training procedure of RoSAS. Step 1 initializes the loss terms for the subsequent dynamic weight averaging. For each training batch, a mini-batch of known anomalies \(\mathcal{B}_{A}\) of size \(b\) is sampled from \(\mathcal{X}_{A}\), and \(2b\) data objects are sampled from \(\mathcal{X}_{U}\) to serve as the unlabeled mini-batch \(\mathcal{B}_{U}\) and the anchor mini-batch \(\mathcal{B}_{q}\) in Steps 4-5. Step 6 creates a mini-batch of augmented data. The scoring loss and the regularization term are computed in Steps 7-8. Dynamic weights are adjusted in Step 9. Step 10 performs back-propagation to optimize the network parameters w.r.t. the loss \(wL+(1-w)L^{\prime}\). Step 12 updates the average losses.
The computation of the loss terms \(L\) and \(L^{\prime}\) has an overall time complexity of \(O(n\_epoch \cdot n\_batch \cdot b \cdot H)\), where \(H\) is the representation dimension and \(b\) is the mini-batch size. The time complexity of RoSAS also depends on the network structure. Taking a multi-layer perceptron network with \(u\) hidden layers as an example, the feed-forward propagation incurs \(O(n\_epoch \cdot n\_batch \cdot b \cdot (Dh_{1}+h_{1}h_{2}+\cdots+h_{u}\cdot 1))\), where \(h_{i}\) is the number of hidden units in the \(i\)-th hidden layer.
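Putting the pieces together, a condensed sketch of Algorithm 1 is shown below. The batch sizes mirror the settings reported in Section 5.1; the weight-decay value, the function name, and the sampling details are illustrative assumptions, and the helper functions are the sketches given above.

```python
import torch

def train_rosas(phi, psi, X_unlab, X_anom, n_epochs=10, n_batches=20, b=32, lr=5e-3):
    opt = torch.optim.Adam(list(phi.parameters()) + list(psi.parameters()),
                           lr=lr, weight_decay=1e-5)       # weight-decay value assumed
    model = lambda x: psi(phi(x))
    avg_L, avg_Lr = 1.0, 1.0                                # Step 1: initialise averages
    for _ in range(n_epochs):
        ep_L, ep_Lr = [], []
        for _ in range(n_batches):
            Ba = X_anom[torch.randint(0, len(X_anom), (b,))]            # Step 4
            Bu_all = X_unlab[torch.randint(0, len(X_unlab), (2 * b,))]  # Step 5
            Bu, Bq = Bu_all[:b], Bu_all[b:]
            x = torch.cat([Bu, Ba])
            y = torch.cat([-torch.ones(b), torch.ones(b)])
            x_t, y_t, idx, lam = mass_interpolation(x, y)               # Step 6
            L = scoring_loss(model, x, x_t, y_t, idx, lam)              # Step 7
            Lr = representation_regularizer(phi, Ba, Bu, Bq)            # Step 8
            loss = combine_losses(L, Lr, avg_L, avg_Lr)                 # Steps 9-10
            opt.zero_grad(); loss.backward(); opt.step()
            ep_L.append(L.item()); ep_Lr.append(Lr.item())
        avg_L, avg_Lr = sum(ep_L) / len(ep_L), sum(ep_Lr) / len(ep_Lr)  # Step 12
    return phi, psi
```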
## 5 Experiments
In this section, we first describe experimental setup (Section 5.1) and conduct experiments to answer the following questions:
* **Effectiveness**: Is RoSAS more effective than state-of-the-art anomaly detectors on real-world datasets? Can RoSAS handle different types of anomalies? (Section 5.2)
* **Robustness**: How robust are RoSAS and its competitors when the unlabeled set is contaminated by different levels of anomalies? (Section 5.3)
* **Data Efficacy**: Can RoSAS fully leverage different numbers of labeled anomalies? (Section 5.4)
* **Scalability Test**: How does the time efficiency of RoSAS compare to that of its competitors? (Section 5.5)
* **Ablation Study**: Do key designs contribute to better anomaly detection performance? (Section 5.6)
* **Sensitivity**: How do the hyper-parameters influence the detection performance of RoSAS? (Section 5.7)
### Experimental Setup
#### 5.1.1 Datasets
Eleven publicly available real-world datasets are used2. The dataset information is reported in Table 2, including abbreviation (Abbr.), domain/task, data dimensionality (\(D\)), the number of data samples (\(N\)), and the anomaly ratio (\(\delta\)). The first eight datasets are with real anomalies, which cover three important real-world applications of anomaly detection in cybersecurity, medicine, and finance. The last three datasets are from ODDS, a popular repository of anomaly detection datasets, and they contain semantic anomalies. All of these datasets are broadly used as benchmarks in many anomaly detection studies, e.g., (Bandaragoda et al., 2018; Pang et al., 2019; Xu et al., 2023a). We scale each feature to \([0,1]\) via min-max normalization. All the datasets are separated by a random 60:20:20 train-valid-test split while maintaining the original anomaly proportion.
Footnote 2: These datasets are available at [https://github.com/GuansongPang/ADRepository-Anomaly-detection-datasets](https://github.com/GuansongPang/ADRepository-Anomaly-detection-datasets), [https://www.unb.ca/cic/datasets/](https://www.unb.ca/cic/datasets/), and [http://odds.cs.stonybrook.edu/](http://odds.cs.stonybrook.edu/)
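A possible preprocessing sketch (not the authors' script) for the min-max scaling and the stratified 60:20:20 split described above; whether scaling is fitted before or after splitting is not specified in the text, so the sketch simply scales the full set.

```python
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

def preprocess_and_split(X, y, seed=0):
    """Scale features to [0, 1] and split 60:20:20 while preserving the anomaly ratio."""
    X = MinMaxScaler().fit_transform(X)
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(
        X, y, test_size=0.4, stratify=y, random_state=seed)
    X_va, X_te, y_va, y_te = train_test_split(
        X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=seed)
    return (X_tr, y_tr), (X_va, y_va), (X_te, y_te)
```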
#### 5.1.2 Competitors
We employ ten anomaly detection models from three categories as competing methods of RoSAS:
* _Semi-supervised Anomaly Detector_: Five deep semi-supervised anomaly detection methods including PReNet (Pang et al., 2023), FeaWAD (Zhou et al., 2021), DevNet (Pang et al., 2019, 2021a), Deep SAD (DSAD for short) (Ruff et al., 2020), and BiGAN (Tian et al., 2022) are used. TiWS-iForest (WSIF for short) (Barbariol and Susto, 2022) is an enhanced version of (Liu et al., 2008), which leverages weak supervision to improve detection performance. These competitors fall into different categories of existing techniques, representing the state-of-the-art performance of this semi-supervised setting.
* _PU learning-based Method_: Learning from positive and unlabeled data (PU learning) is also a related field if we treat anomalies as positive data. We choose a representative PU learning-based anomaly detector (Zhang et al., 2017) as our competitor, which combines the two-stage strategy and the cost-sensitive strategy (TSS for short).
* _Unsupervised Anomaly Detector_: DIF (Xu et al., 2023a), IF (Liu et al., 2008), and COP (Li et al., 2020) are employed. DIF is an isolation-based method that is empowered by deep representation ensemble. IF is a popular anomaly detection algorithm that is broadly used in many industrial applications, and COP is the latest probability-based approach. Note that they are only used as baselines to examine whether our method and other semi-supervised approaches obtain significantly improved performance.
#### 5.1.3 Parameter Settings and Implementations
In RoSAS, the learning rate is set as 0.005, intermediate representation dimension \(H\) is 128. As for the parameters in the loss function, we use \(k=2\), \(\alpha=0.5\), and \(e=1\). Smooth-\(\ell_{1}\) loss function is adopted as the base regression loss in \(L\). The batch size \(b\) is 32. RoSAS uses the Adam optimizer with an \(\ell_{2}\)-norm weight decay regularizer. The temperature \(T\) in dynamic weight averaging is 2. RoSAS uses a multi-layer perceptron network structure since the used experimental datasets are multi-dimensional data. The representation module and the scoring module both adopt a one-hidden-layer structure. The number of hidden units in the representation network is set as \(h_{1}=D+\lfloor\frac{1}{2}(H-D)\rfloor\), and the scoring network uses \(h_{2}=\lfloor\frac{1}{2}H\rfloor\). We use LeakyReLU activation in the hidden layers and the tanh function to normalize final anomaly scores.
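With these settings, the network can be sketched as follows. The exact layer layout (e.g., the activation placed after the representation layer) is an assumption; \(D=29\) corresponds to, e.g., the DoH dataset.

```python
import torch.nn as nn

D, H = 29, 128                  # input dimensionality and representation size
h1 = D + (H - D) // 2           # hidden units of the representation module
h2 = H // 2                     # hidden units of the scoring module

phi = nn.Sequential(nn.Linear(D, h1), nn.LeakyReLU(), nn.Linear(h1, H), nn.LeakyReLU())
psi = nn.Sequential(nn.Linear(H, h2), nn.LeakyReLU(), nn.Linear(h2, 1), nn.Tanh())
```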
All the detectors are implemented in Python. The implementations of PReNet, DevNet, FeaWAD, DSAD, WSIF, and BiGAN are released by their original authors. The source code of TSS is publicly available. RoSAS, DSAD, and BiGAN employ the PyTorch framework, while PReNet, DevNet, and FeaWAD are based on Keras. We use the implementations of COP and IF from the pyod (Zhao et al., 2019) package.
#### 5.1.4 Performance Evaluation Metrics and Computing Infrastructure
Following the popular experimental protocol of anomaly detection studies (Pang et al., 2019, 2023; Ruff et al., 2020; Xu et al., 2021, 2023a), two performance evaluation metrics, i.e., the Area under the Precision-Recall Curve (AUC-PR) and the Area under the Receiver Operating Characteristic Curve (AUC-ROC), are used. The ROC curve plots true positives against false positives, while points on the PR curve are pairs of precision and recall of the anomaly class under different thresholds. These two metrics range from 0 to 1, and higher values indicate better performance. AUC-PR is more practical in real-world applications because it directly relates to the benefits and costs of detection results, and achieving a high AUC-PR is more challenging. Therefore, we take AUC-PR as the main detection performance metric in the following experiments. We report the average AUC-PR and AUC-ROC scores on each dataset over ten independent runs. Additionally, we employ the paired _Wilcoxon_ signed-rank test to determine whether the AUC-ROC/AUC-PR of RoSAS and each of its contenders are significantly different. It can examine the statistical significance of the improvement of RoSAS against existing state-of-the-art performance.
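The two metrics and the significance test can be computed with standard libraries; here average precision is used as the usual surrogate for AUC-PR, which is an implementation choice on our part rather than necessarily the authors'.

```python
from sklearn.metrics import average_precision_score, roc_auc_score
from scipy.stats import wilcoxon

def evaluate(y_true, scores):
    """Return (AUC-PR, AUC-ROC) given binary labels and continuous anomaly scores."""
    return average_precision_score(y_true, scores), roc_auc_score(y_true, scores)

# Paired Wilcoxon signed-rank test over per-dataset results of two detectors:
# stat, p = wilcoxon(aucpr_rosas, aucpr_competitor)
```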
| **Data** | **Abbr.** | **Domain/Task** | \(D\) | \(N\) | \(\delta\) |
|---|---|---|---|---|---|
| CIC-DoHBrW2020 | DoH | Intrusion Detection | 29 | 1,167,136 | 21.4% |
| CIC-IDS2017 WebAttack | WebAttack | Intrusion Detection | 78 | 700,284 | 0.3% |
| CIC-IDS2017 PortScan | PortScan | Intrusion Detection | 78 | 816,385 | 19.5% |
| UNSW-NB15 Exploit | Exploit | Intrusion Detection | 196 | 96,000 | 3.1% |
| UNSW-NB15 Backdoor | Backdoor | Intrusion Detection | 196 | 95,329 | 2.4% |
| Thyroid disease | Thyroid | Disease Diagnosis | 21 | 7,200 | 7.4% |
| KDD Cup 2014 Donors | Donors | Funding Prediction | 10 | 619,326 | 5.9% |
| Credit card fraud detection | Fraud | Fraud Detection | 29 | 284,807 | 0.2% |
| Covertype | Cover | Ecosystem | 10 | 286,048 | 1.0% |
| Letter recognition | Letter | Recognition | 32 | 1,600 | 6.3% |
| Pen-based recognition | Pendigits | Recognition | 16 | 6,870 | 2.3% |

Table 2: Dataset information. Abbr. is the dataset abbreviation used in the following experiments. \(D\) and \(N\) denote data dimensionality and data size per dataset, respectively. \(\delta\) indicates the anomaly ratio.
All the experiments are executed at a workstation with Intel Xeon Silver 4210R CPU, a single NVIDIA TITAN RTX GPU, and 64 GB RAM.
### Effectiveness
#### 5.2.1 Anomaly Detection Performance on Real-world Datasets
Following (Pang et al., 2019, 2023; Wu et al., 2021; Zhou et al., 2021), we randomly select 30 true anomalies from the training data per dataset as anomaly examples and the remaining training data as the unlabeled set. RoSAS and its five contenders are trained on training sets and used to measure abnormal degrees of data samples in testing sets. Labels of testing sets are strictly unknown to anomaly detectors and are only employed in the evaluation phase. As has been done in (Pang et al., 2019, 2023; Zhou et al., 2021), we also execute controlled experiments w.r.t. anomaly contamination rate. Each dataset is pre-processed by removing/injecting anomalies such that anomalies account for 2% of the unlabeled set. Specifically, the injected anomaly examples are obtained by replacing the values of 5% random features of a randomly selected real anomaly with the corresponding feature values of another real anomaly. This presents a simple and effective way to guarantee the presence of diverse and genuine (or weakly augmented) anomalies in the unlabeled data. This pre-processing step can cancel out the influence of different contamination ratios such that the performance of these anomaly detectors is comparable across datasets from various domains. Please note that we also examine the performance w.r.t. a wide range of contamination ratios in the following experiment.
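The injection step can be sketched as follows; the helper name is hypothetical, and the feature indices and the 5% swap fraction follow the description above.

```python
import numpy as np

def inject_anomalies(anomaly_pool, n_new, swap_frac=0.05, seed=0):
    """Copy a real anomaly and overwrite a random 5% of its features with the
    corresponding values of another real anomaly."""
    rng = np.random.default_rng(seed)
    n_feat = max(1, int(swap_frac * anomaly_pool.shape[1]))
    new_rows = []
    for _ in range(n_new):
        a, b = rng.choice(len(anomaly_pool), size=2, replace=False)
        row = anomaly_pool[a].copy()
        feats = rng.choice(anomaly_pool.shape[1], size=n_feat, replace=False)
        row[feats] = anomaly_pool[b, feats]
        new_rows.append(row)
    return np.stack(new_rows)
```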
Table 3 shows the AUC-PR and the AUC-ROC performance of RoSAS and its competing methods. RoSAS achieves the best AUC-PR or AUC-ROC performance on all the datasets. According to the p-values in the _Wilcoxon_ signed-rank test, RoSAS significantly outperforms its ten competitors w.r.t. both AUC-PR and AUC-ROC at the 98% confidence level. On average, RoSAS obtains a substantial performance leap (approximately 20%-30% AUC-PR improvement) over existing state-of-the-art competing methods PReNet, DevNet, FeaWAD, DSAD, and TSS. WSIF is a non-deep method, which is inferior to these deep state-of-the-art semi-supervised methods on complicated real-world datasets. BiGAN is originally designed for images, and its performance on tabular data might be downgraded. Benefiting from a few labeled anomalies, the average AUC-ROC performance of many semi-supervised methods exceeds 0.9, and RoSAS still gains 4%-7% improvement over state-of-the-art competitors. The performance of the unsupervised anomaly detectors DIF, IF, and COP is distinctly inferior to all the semi-supervised approaches, which validates the importance of fully exploiting these readily accessible anomaly examples in real-world applications.
Table 3: AUC-PR and AUC-ROC results (mean±std over ten independent runs) of RoSAS and the ten competing methods (PReNet, DevNet, FeaWAD, DSAD, TSS, WSIF, BiGAN, DIF, IF, and COP) on the eleven real-world datasets; the best performance per dataset is boldfaced.
RoSAS achieves substantially superior detection performance with the help of the proposed contamination-resilient continuous supervision and feature learning-based regularization. The robustness of RoSAS is enhanced to better exploit the contaminated unlabeled set, and RoSAS also receives direct, fine-grained guidance to optimize anomaly scores more accurately. Therefore, RoSAS can better leverage dozens of anomaly examples and large-scale unlabeled data, resulting in effective semi-supervised anomaly detection. Note that PReNet obtains relatively better performance because it can resist small anomaly contamination thanks to its data combination operation. Other competitors do not consider the interference from noisy hidden anomalies and treat the whole unlabeled set as normal data. Also, all of these competing methods are only optimized by discrete supervision information that fails to indicate continuous abnormal degrees, resulting in suboptimal learning of anomaly scores.
#### 5.2.2 Capability of Handling Different Types of Anomalies
We further investigate whether RoSAS can identify different types of anomalies. Anomalies can be classified into _clustered anomalies_ and _scattered anomalies_ according to the intra-class proximity (Xu et al., 2019; Zhou et al., 2022). Clustered anomalies (e.g., diseases and fraudulent activities) share similar behaviors, while scattered anomalies (e.g., exceptions in industrial systems) randomly appear out of the inlier distribution and have weak or even no connections with other individual samples. Besides, in the semi-supervised setting, there might be some _novel anomalies_ at test time that differ from the labeled anomalies seen during training. Novel anomalies are critical in real-world applications; for example, some advanced new attacks may pose severe threats to network security, but they are very different from the known intrusions. Due to the difficulty of knowing specific anomaly types in real-world datasets, we create three synthetic cases to validate the capability of handling these anomaly types. Training and testing data distributions of these three cases are demonstrated in Figure 3. Case 1 and Case 2 respectively contain clustered anomalies and scattered anomalies, and there is a cluster of novel anomalies in the testing set of Case 3.
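Purely for illustration, 2D versions of the first two cases could be generated as below; the actual toy data used by the paper are not specified beyond Figure 3, so all parameters and the helper name are assumptions.

```python
import numpy as np

def toy_case(kind="clustered", n_normal=1000, n_anom=50, seed=0):
    """Generate a 2D toy set with clustered or scattered anomalies."""
    rng = np.random.default_rng(seed)
    X_norm = rng.normal(loc=0.0, scale=1.0, size=(n_normal, 2))          # inlier blob
    if kind == "clustered":
        X_anom = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(n_anom, 2))
    else:  # scattered: uniform samples kept only outside the inlier region
        cand = rng.uniform(-8.0, 8.0, size=(10 * n_anom, 2))
        X_anom = cand[np.linalg.norm(cand, axis=1) > 4.0][:n_anom]
    X = np.vstack([X_norm, X_anom])
    y = np.hstack([np.zeros(len(X_norm)), np.ones(len(X_anom))])
    return X, y
```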
Figure 3 further illustrates the detection results of RoSAS. By setting the threshold according to the size of true anomalies, we report both predicted anomaly scores and corresponding binary labels. We respectively analyze the detection results of the three cases below.
_Case 1._ In terms of clustered anomalies in Case 1, although the abnormal region is contaminated by unlabeled anomalies that are used as normal data, this region can be covered by new data samples that are augmented by our mass interpolation method. Labeled anomalies are over-sampled during training, and the unlabeled anomalies are still rare compared to genuine normal data because anomalies themselves are rare. The contamination can be corrected by new data generated via the interpolation of data combinations with correct labels, and thus RoSAS can effectively identify these clustered anomalies during inference.
_Case 2._ As for scattered anomalies in Case 2, unlabeled anomalies may not largely influence the training process. However, one key issue of this case is the "manifold intrusion" problem. For instance, the interpolation into labeled anomalies may create augmented data samples in the normal distribution, but they are labeled by high abnormal degrees. To alleviate this problem, RoSAS uses Beta distribution with \(\alpha\) = 0.5 in the mass interpolation process, thereby making most interpolation located in the local regions of original samples. This may still raise an inevitable limitation. Namely, RoSAS gives slightly higher anomaly scores to some margin points of the normal manifold, and there are two false positives as shown in the binary prediction results.
_Case 3._ RoSAS is also applicable to identifying novel anomalies that do not appear during training, as validated in Case 3. This advantage owes to the feature learning module of RoSAS. The learning objective posed upon the representation space judges whether labeled anomalies are effectively separated by introducing a reference divergence degree between unlabeled data and anchor data. That is, this learning objective not only repels anomalies from the normal manifold but also pulls unlabeled samples together. Therefore, during inference, novel anomalies can be far away from the normal manifold in the representation space.
### Robustness w.r.t. Anomaly Contamination Levels
This experiment evaluates the robustness of RoSAS w.r.t. different anomaly contamination ratios (i.e., the proportion of anomalies in the unlabeled set \(\mathcal{X}_{U}\)). As anomalies are rare events in practical scenarios, we vary the contamination level from 0% up to 8%, and all the contamination levels use 30 random true anomalies as labeled data.
Figure 4 (a) shows the AUC-PR results on all the eleven real-world datasets with varying anomaly contamination levels. Anomaly detection performance generally decreases when the contamination level increases. Nevertheless, in the vast majority of cases, RoSAS is more robust than the competitors. It is noteworthy that, in some datasets (e.g., _DoH_, _PortScan_, and _Pendigits_), most anomaly detectors are stable when increasing the contamination rate. It might be because the increased anomalies are isolated data samples, and anomaly detection models can easily filter the interference. The competitors can also obtain very competitive performance on these datasets. However, in complicated datasets like _Exploit_, _Backdoor_, _Thyroid_, and _Fraud_, these anomalies that are hidden in the unlabeled set greatly blur the boundary between normal data and anomalies. RoSAS can consistently obtain better performance than the competitors in challenging noisy environments with high contamination levels.
### Data Efficacy of Labeled Anomalies
This experiment estimates the data efficacy of different numbers of labeled anomalies in terms of the value they bring to semi-supervised anomaly detection. In other words, this experiment examines whether RoSAS achieves more significant performance improvement than its competing methods when more labeled anomalies are available. The number of labeled anomalies is increased from 10 to 90, and the contamination level is maintained at 2%.
Figure 4 (b) shows the AUC-PR results of RoSAS and its contenders w.r.t. varying numbers of labeled anomalies. Semi-supervised anomaly detectors generally perform better when more labeled anomalies are accessible. However, this law does not always hold in practice. Some anomaly detectors also present fluctuation trends on some datasets. These increased labeled anomalies may have heterogeneous behaviors and carry conflicting information which imposes negative effects on anomaly detectors, as has been explained in (Pang et al., 2019). By contrast, our method obtains
Figure 3: Three toy cases with different anomaly types. Each row indicates a case. The top and medium cases respectively contain _clustered anomalies_ and _scattered anomalies_. The testing data of the bottom case has _novel anomalies_ (the anomaly cluster on the far right) that do not appear in the training set. The panels in the left two columns show the data distribution of training/testing data, and the anomaly detection results of RoSAS including predicted anomaly scores and binary labels are visualized in the right two columns.
more stable and superior performance by fully utilizing limited labeled anomalies. It is noteworthy that some detectors also do not perform better when more labeled anomalies are available. It might be because these increased labeled anomalies have very similar behaviors and fail to bring useful information related to the anomaly distribution.
### Scalability Test
This experiment evaluates the scalability of RoSAS. Nine datasets are created with the same data size (i.e., 5,000) and dimensionality increasing in multiples of 2 from 16 to 4,096. Another nine datasets are generated with varying data sizes increasing from 4,000 to 1,024,000 with a fixed dimensionality (i.e., 128). For the sake of comparison fairness, we employ the deep semi-supervised anomaly detection methods (i.e., PReNet, DevNet, FeaWAD, DSAD, TSS, and BiGAN) as counterparts in this experiment. We use the same training configuration for these methods, including the size of mini-batches (32) and the number of mini-batches per training epoch (20). We report the execution time including the training time of 10 epochs and the inference time. Scalability test results are reported in Figure 5. RoSAS and its counterparts can efficiently handle high-dimensional data thanks to GPU-parallelized mini-batch computation. RoSAS only takes less than 10 seconds when handling 4,096-dimensional data. In terms of the scale-up test w.r.t. data size, RoSAS, DSAD, and TSS have comparably good efficiency. In comparison,
Figure 4: AUC-PR results of RoSAS and its semi-supervised competing methods on datasets with **(a)** different anomaly contamination levels (i.e., ratios of anomalies in the unlabeled set \(\mathcal{X}_{U}\)) and **(b)** varying numbers of labeled anomalies (i.e., the size of labeled anomaly examples \(\mathcal{X}_{A}\)).
FeaWAD and PReNet have complicated network structures that lead to significantly increased execution time. RoSAS uses about 20 seconds to handle the dataset containing 1,024,000 data samples.
### Ablation Study
This experiment is to validate the contribution of key designs in RoSAS. We set five ablated versions. The changes in these variants are introduced as follows, and other parts are the same as RoSAS.
* \(\mathbf{L\rightarrow}\mathbf{L_{dis}}\) discretizes the generated continuous supervision targets used in the anomaly scoring loss function \(L\).
* \(\mathbf{L\rightarrow}\mathbf{L_{dev}}\) uses state-of-the-art anomaly scoring loss function used in DevNet (Pang et al., 2019, 2021) to replace \(L\).
* \(\mathbf{L\rightarrow}\mathbf{L_{reg}}\) uses a bare regression loss function used in RoSAS (i.e., smooth-\(\ell_{1}\) loss) to replace \(L\).
* **w/o \(\mathbf{L^{\prime}}\)** removes the feature learning-based regularizer \(L^{\prime}\).
* **w/o \(\mathbf{L_{c}}\)** removes the consistency learning part in \(L\).
The first three variants (\(\mathbf{L\rightarrow}\mathbf{L_{dis}}\), \(\mathbf{L\rightarrow}\mathbf{L_{dev}}\), and \(\mathbf{L\rightarrow}\mathbf{L_{reg}}\)) only take _discrete supervision information_ to optimize anomaly scores, which are used to verify the significance of continuous supervision-guided anomaly score optimization. The ablated variants **w/o \(\mathbf{L^{\prime}}\)** and **w/o \(\mathbf{L_{c}}\)** measure the contributions of the feature learning-based regularization \(L^{\prime}\) and the consistency constraint in \(L\).
The AUC-PR performance of RoSAS and its five ablated variants is shown in Table 4. RoSAS significantly outperforms its three ablated versions \(\mathbf{L\rightarrow}\mathbf{L_{dis}}\), \(\mathbf{L\rightarrow}\mathbf{L_{dev}}\), and \(\mathbf{L\rightarrow}\mathbf{L_{reg}}\) at 98% confidence level. More than 7% average improvement rate is achieved. These three variants use various objectives with only discrete supervision information.
| **Data** | **RoSAS** | \(\mathbf{L\rightarrow L_{dis}}\) | \(\mathbf{L\rightarrow L_{dev}}\) | \(\mathbf{L\rightarrow L_{reg}}\) | **w/o** \(\mathbf{L^{\prime}}\) | **w/o** \(\mathbf{L_{c}}\) |
|---|---|---|---|---|---|---|
| DoH | 0.893±0.05 | 0.894±0.006 (-0.1%) | 0.888±0.011 (**0.6%**) | 0.867±0.008 (**3.0%**) | 0.892±0.008 (**0.1%**) | 0.891±0.005 (**0.2%**) |
| WebAttack | 0.781±0.051 | 0.623±0.146 (**25.4%**) | 0.508±0.136 (**53.7%**) | 0.434±0.000 (**80.0%**) | 0.542±0.001 (**44.1%**) | 0.638±0.139 (**22.4%**) |
| PortScan | 0.999±0.000 | 0.999±0.000 (0.0%) | 0.999±0.000 (0.0%) | 0.997±0.001 (**0.2%**) | 0.997±0.001 (**0.2%**) | 0.999±0.000 (0.0%) |
| Exploit | 0.740±0.02 | 0.727±0.011 (**1.8%**) | 0.674±0.003 (**0.9%**) | 0.632±0.009 (**17.1%**) | 0.694±0.004 (**16.6%**) | 0.721±0.004 (**2.6%**) |
| Backdoor | 0.877±0.021 | 0.873±0.002 (**0.5%**) | 0.858±0.004 (**2.2%**) | 0.888±0.001 (-1.2%) | 0.892±0.000 (-1.7%) | 0.877±0.020 (0.0%) |
| Thyroid | 0.839±0.006 | 0.623±0.106 (**34.7%**) | 0.543±0.190 (**54.5%**) | 0.777±0.012 (**8.0%**) | 0.812±0.000 (**33.3%**) | 0.833±0.004 (**0.7%**) |
| Donors | 1.000±0.000 | 1.000±0.000 (0.0%) | 1.000±0.000 (0.0%) | 1.000±0.000 (0.0%) | 1.000±0.000 (0.0%) | 1.000±0.000 (0.0%) |
| Fraud | 0.831±0.003 | 0.830±0.006 (**0.1%**) | 0.824±0.008 (**0.8%**) | 0.820±0.000 (**1.3%**) | 0.819±0.15 (**1.5%**) | 0.830±0.004 (**0.1%**) |
| Cover | 0.983±0.003 | 0.980±0.005 (**0.3%**) | 0.982±0.004 (**0.1%**) | 0.973±0.005 (**1.0%**) | 0.985±0.002 (-0.2%) | 0.983±0.004 (0.0%) |
| Letter | 0.501±0.025 | 0.433±0.005 (**15.7%**) | 0.417±0.009 (**20.1%**) | 0.333±0.004 (**50.5%**) | 0.372±0.005 (**34.7%**) | 0.494±0.004 (**1.4%**) |
| Pendigits | 0.995±0.013 | 0.990±0.001 (**0.5%**) | 0.987±0.015 (**0.8%**) | 0.978±0.0011 (**1.7%**) | 0.999±0.001 (0.4%) | 0.994±0.012 (**0.1%**) |
| _Average_ | 0.858±0.018 | 0.816±0.041 (**7.1%**) | 0.789±0.022 (**13.0%**) | 0.791±0.025 (**14.7%**) | 0.918±0.021 (**8.0%**) | 0.842±0.030 (**2.5%**) |
| _p-value_ | - | 0.013 | 0.008 | 0.014 | 0.126 | 0.018 |

Table 4: AUC-PR results of RoSAS and its five ablated versions, with the improvement rates of RoSAS compared to its variants per dataset. Positive rates are boldfaced.
Figure 5: Scalability test results.
This comparison result validates the significance of using continuous supervision signals in anomaly score optimization. Anomalies have various abnormal degrees, and anomaly scores naturally take on a continuous distribution. It is hard for discrete supervision information to accurately describe such consecutive trends in a continuous distribution, resulting in suboptimal optimization of anomaly scores on these real-world datasets. Our work reveals this significant limitation in current anomaly detection studies and devises a simple but effective solution. On the other hand, RoSAS outperforms **w/o \(L^{\prime}\)** by 8% on average at the 85% confidence level, which verifies the complementary robustness enhancement effect brought by the feature learning-based regularization. Compared to **w/o \(L_{\mathbf{c}}\)**, the average improvement is above 2% at the 98% confidence level, which quantitatively measures the contribution of the consistency learning in producing smoother anomaly scores.
### Sensitivity Test
We investigate the influence of different settings of key hyper-parameters in RoSAS, i.e., \(\alpha\) in the Beta distribution, \(k\) in the mass interpolation, \(e\) in the regularizer, and the intermediate representation dimension \(H\). These hyper-parameters are tuned in turn and other parameters are kept the same as previously reported. RoSAS is performed 10 times on each hyper-parameter setting. The box plot of 10 AUC-PR values per dataset is illustrated in Figure 6. We show four representative datasets, and the other seven datasets are with similar or stable trends. As analyzed before, \(\alpha\) may influence the detection performance to some extent, and we use \(\alpha=0.5\) considering the "manifold intrusion" problem. Besides, we can safely use a margin \(e=1\) in the feature learning-based regularizer. The choice of \(k\) might considerably influence the detection performance, and \(k=2\) is more stable. Lower representation dimension \(H\) fails to convey sufficient information to the downstream anomaly scoring process, and thus 128 is recommended.
Figure 6: AUC-PR results of RoSAS with different settings of four key hyper-parameters (\(\alpha\), \(e\), \(k\), and \(H\)).
## 6 Discussion and Implications
### Key contributions
This study first summarizes the prior arts of this research field by giving a general semi-supervised anomaly detection framework. This framework contributes a unifying view of this research line, and we theoretically show how representative existing methods are instantiated from this framework. It may also offer valuable insights into the design of new semi-supervised anomaly detection models. More importantly, we uncover the key limitations of supervisory signals directly supplied by the semi-supervised setting and broadly used in existing methods. Motivated by these problems, we further propose a concrete anomaly detection method, and specifically, we make the following technical contributions.
This study contributes to the semi-supervised anomaly detection literature by taking into account the anomaly contamination problem. Arguably, many prior works directly use the whole unlabeled set as normal data for training their models (see Table 1 and Section 3.3), and their performance is considerably downgraded by these noisy points (as illustrated in the toy case in Figure 1 and real-world datasets in Table 3). Our method RoSAS is shown to be a simple yet effective solution to address this limitation. Instead of directly feeding original flawed supervision into the learning model, we propose new supervision containing augmented data with more reliable label information, resulting in stronger robustness than existing state-of-the-art methods when the training set is with high contamination level (see empirical results in Figure 4).
We consider our work as a starting point for leveraging continuous supervision information to optimize continuously distributed anomaly scores. To the best of our knowledge, we are the first to raise this issue in anomaly detection. We empirically show the advantage of using continuous supervision over discretized ones (see Table 4). Continuous supervision can lead to significant performance gain at 98% confidence level. Also, we pose a consistency constraint to further enhance the capability of producing smoother anomaly scores, which brings about 5% performance improvement. These findings may foster future theoretical research or inspire new optimization mechanisms of anomaly scores.
To sum up, different from current studies that rely on _contaminated discrete supervision_, our core novelty is a new kind of _contamination-resilient continuous supervision_. This supervision better conforms to the real abnormal-normal distribution and offers significantly better guidance to the optimization of the end-to-end anomaly scoring neural network.
### Practical Implications
Despite the plethora of unsupervised anomaly detection models, many real-world systems are looking for anomaly detectors that can exploit their historical anomalies, and this study adds a new competitive option to a list that currently contains only limited choices. We show that only 30 labeled anomalies can bring drastically better performance than unsupervised models that work on unlabeled data only (e.g., our approach RoSAS achieves 0.999 AUC-PR on the intrusion detection dataset _PortScan_, while unsupervised performance is as low as about 0.1). Given such huge benefits, instead of digging into the design of unsupervised anomaly detection models, one quick way to boost detection performance might be to transfer the unsupervised setting to the semi-supervised paradigm by feeding a few anomaly examples.
There are also many research and development fronts that we are pursuing in the future to further enhance the practical impact of this research. On one hand, this study can be extended to applications in different fields. We employ eleven datasets mainly from three domains including cybersecurity, medicine, and finance, and our approach also has the potential to identify system faults in AIOps or attacks in AI safety. On the other hand, by plugging in advanced network structures, our approach can also be applied to handle different data types (e.g., Transformers for sequential data, graph neural networks for graph data, and convolutional networks for images).
## 7 Conclusions
This paper first presents a general framework of deep semi-supervised anomaly detection to summarize this research line and reveal two key limitations of current studies. We then propose RoSAS, a concrete deep semi-supervised anomaly detection method. By optimizing the detection model using the mass-interpolation-based continuous supervision that explicitly indicates faithful abnormal degrees, RoSAS learns accurate and noise-tolerant anomaly scores.
Through extensive empirical results, we show two key advantages of using our continuous supervisory signals compared to the current discrete one: 1) our approach is substantially more robust w.r.t. anomaly contamination, especially on challenging cases with high contamination levels; 2) it is more data-efficient, that is, different numbers of labeled anomalies can be fully leveraged. These advantages are the main drivers of the overall superior performance of RoSAS that achieves about 20%-30% AUC-PR improvement over state-of-the-art semi-supervised anomaly detection approaches on 11 real-world datasets.
## Acknowledgments
Hongzuo Xu, Yijie Wang, Songlei Jian, Ning Liu, and Yongjun Wang are supported in part by the National Key R&D Program of China under Grant 2022ZD0115302, in part by the National Natural Science Foundation of China under Grants 62002371 and 61379052, in part by the Science Foundation of Ministry of Education of China under Grant 2018A02002, in part by the Postgraduate Scientific Research Innovation Project of Hunan Province under Grants CX20210049 and CX20210028, in part by the Natural Science Foundation for Distinguished Young Scholars of Hunan Province under Grant 14JJ1026, and the Foundation of National University of Defense Technology under Grant ZK21-17. Guansong Pang is supported in part by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 under Grant 21SISSMU031.
We also thank the referees for their comments, which helped improve this paper considerably.
|
2307.04745 | Uncovering Exceptional Contours in non-Hermitian Hyperbolic Matter | Hyperbolic lattices are starting to be explored in search of novel phases of
matter. At the same time, non-Hermitian physics has come to the forefront in
photonic, optical, phononic, and condensed matter systems. In this work, we
introduce non-Hermitian hyperbolic matter and elucidate its exceptional
properties in depth. We use hyperbolic Bloch theory to investigate band
structures of hyperbolic lattices in the presence of non-Hermitian on-site gain
and loss as well as non-reciprocal hopping. Using various analytical and
numerical approaches we demonstrate widely accessible and tunable exceptional
points and contours in {10,5} tessellations, which we characterize using phase
rigidity, energy scaling, and vorticity. We further demonstrate the occurrence
of higher-order exceptional points and contours in the {8,4} tessellations
using the method of Newton polygons, supported by vorticity and phase rigidity
computations. Finally, we investigate the open boundary spectra and densities
of states to compare with results from band theory, along with a demonstration
of boundary localisation. Our results unveil an abundance of exceptional
degeneracies in hyperbolic non-Hermitian matter. | Nisarg Chadha, Awadhesh Narayan | 2023-06-28T05:31:24Z | http://arxiv.org/abs/2307.04745v2 | # Uncovering Exceptional Contours in non-Hermitian Hyperbolic Matter
###### Abstract
Hyperbolic lattices are starting to be explored in search of novel phases of matter. At the same time, non-Hermitian physics has come to the forefront in photonic, optical, phononic, and condensed matter systems. In this work, we introduce non-Hermitian hyperbolic matter and elucidate its exceptional properties in depth. We use hyperbolic Bloch theory to investigate band structures of hyperbolic lattices in the presence of non-Hermitian on-site gain and loss as well as non-reciprocal hopping. Using various analytical and numerical approaches we demonstrate widely accessible and tunable exceptional points and contours in {10,5} tessellations, which we characterize using phase rigidity, energy scaling, and vorticity. We further demonstrate the occurrence of higher-order exceptional points and contours in the {8,4} tessellations using the method of Newton polygons, supported by vorticity and phase rigidity computations. Finally, we investigate the open boundary spectra and densities of states to compare with results from band theory, along with a demonstration of boundary localisation. Our results unveil an abundance of exceptional degeneracies in hyperbolic non-Hermitian matter.
## I Introduction
Spaces with negative curvature emerge naturally in general relativity [1], and also find applications in graph theory [2], random walks [3], complexity theory [4], and quantum information theory [5]. The enhanced bulk connectivity of the system [6] makes hyperbolic surfaces an efficient candidate for data storage and communication, making these geometries ubiquitous in data science and electrical engineering. Recent works on networks of coupled microwave resonators and superconducting qubits [7; 8] provide more tangible insight into these ideas from the perspective of band theory.
The theoretical extension of ideas from band theory and topology to hyperbolic geometries [9] and their experimental realisations [7; 8] on tabletop experimental platforms have propelled hyperbolic lattices to the forefront in the search for novel phases of matter. Progress in physical realisations through circuit quantum electrodynamics has made hyperbolic lattices a readily available platform for exploring band theories on these models, therefore being a direct test of the influence of the geometry and the metric itself on the properties of the system. The deviation from the Euclidean metric due to negative curvature of hyperbolic surfaces endows interesting properties to the behaviour and interactions of the entities constrained to reside on them, such as the finite ratio of boundary sites to the total sites, the non-Abelian nature of the translation group, and the subsequent higher dimensional quotient group of the manifold in \(k\)-space.
The implications of the curvature of space on the dimensionality and the topology of the Brillouin zone have been recently studied in order to come up with a comprehensive picture of a hyperbolic band theory [9]. Very recent work on investigating the counterparts to well-known Euclidean models such as the Haldane, Kane-Mele [10] and the Qi-Wu-Zhang models [11] has catalogued and contrasted the behaviour of curved and flat spaces. The use of real-space invariants on flat projections of hyperbolic tessellations has shown topological phase transitions characteristic of Chern insulator phases accompanied by edge modes and a quantized conductance [11]. The topological phase has been shown to be robust to disorder, and topological Anderson insulator phase transitions are also shown through numerical computations of topological invariants in the presence of disorder. Furthermore, higher-order topological insulator phases have been discovered in hyperbolic tessellations in two dimensions [12; 13]. These phases show zero-dimensional corner edge states whose degeneracy
depends on the symmetry of the crystal. The realisation of lattices with arbitrary rotational symmetries allows these corner edge states to have higher degeneracies than possible for Euclidean systems.
For real-space computation, the boundary sites play a much more prominent role in hyperbolic space than in flat space. Open boundary spectra show considerable deviations from the bulk spectra due to the macroscopic fraction of boundary sites, even in the thermodynamic limit. To circumvent this, Refs. [14] and [15] implemented a compactification of the hyperbolic manifold using regular maps to identify boundary sites and create a finite graph whilst removing the influence of dangling boundary sites. Remarkably, a universality in the shape of the Hofstadter butterfly is obtained when the unit cells in the compactified manifold are threaded by a flux [15]. These Hofstadter butterfly patterns are shown to be independent of the coordination number \(q\) for tessellations of the same \(p\)-gons.
In a parallel development, non-Hermitian systems have been in the limelight in recent years due to their intriguing fundamental properties, with a potential for interesting applications [16; 17; 18; 19; 20]. Beginning with the pioneering ideas of Bender and co-workers [21], the interplay with topology has reinvigorated the field and has led to several seminal discoveries. The role of topology in non-Hermitian systems is now at the forefront of research, with
Figure 1: **Visualising the hyperbolic plane in two dimensions through the PoincarΓ© disk model.** (a) The {8,3} tessellation with different coloured sites denoting different epochs generated recursively. The area in green shows the 16-site unit cell, also called the Bolza lattice, which itself is an {8,8} tessellation. (b) The {10,5} tessellation up to the first four epochs, along with a zoomed-in image showing the sites near the boundary.
implications for photonics, optics, metamaterials, topoelectric circuits and many others. Exceptional points are an important feature of non-Hermitian systems. These are singularities where eigenvalues and eigenvectors become identical [22; 23; 24], and have no counterpart in Hermitian systems. Not only are they interesting from a fundamental point of view - with remarkable associated properties such as Berry phases, Riemann sheet structures, and bulk Fermi arcs - they are also beginning to be exploited in applications. For instance, EPs have been used to design sensors with enhanced sensitivity [25; 26; 27]. Another intriguing phenomenon which has been recently identified in non-Hermitian systems is the non-Hermitian skin effect (NHSE), where a macroscopic fraction of states localize at the boundary [28; 29; 30; 31; 32]. NHSE is another topic of very active research in the past couple of years, whose implications are only beginning to be understood. It features intriguing connections to spectral topology and a plethora of experimental platforms have been used to implement its phenomenology [33; 32].
In this work, we explore the interplay of non-Hermiticity and hyperbolic geometry by using analytic band theory as well as numerical tight-binding calculations. We find that the higher order irreducible representations for the translational group lead to a greater degree of freedom in parameter space, which can allow tuning of parameters to readily obtain EPs as well as their higher dimensional analogues, exceptional contours. We investigate the behaviour of EPs in the presence of different kinds of non-Hermitian terms and characterise their properties. For ease of analytic computation, we use the {10,5} tessellation due to its crystalline symmetry resulting in a two-site unit cell [34]. The {10,5} tessellation lies in the same infinite crystalline family of {\(2(2g+1),2g+1\)} tessellations (with \(g\) being the genus) as the honeycomb lattice in Euclidean space, and has been termed "hyperbolic graphene" [34]. We introduce on-site gain and loss as well as non-reciprocal hopping to add non-Hermiticity and unveil the behaviour of the resulting exceptional contours using phase rigidity and vorticity to visualise the effects on the spectrum upon approaching such exceptional regions. The winding of eigenvalues near exceptional contours is calculated to display the inter-band vorticity intrinsic to the topology around an EP. Furthermore, we consider a model for an {8,4} tessellation to show the occurrence of higher-order EPs upon introducing gain and loss in the system. We also utilize the recently proposed method of Newton polygons to obtain an analytic understanding of such higher-order EPs. Finally, we construct the real-space lattice using recursive circular inversions and use exact diagonalisation to obtain the energy
spectra and the densities of states. We compare the results obtained using the hyperbolic band theory and show deviations from exact diagonalisation. We also present boundary localisation effects in real space under non-reciprocal hopping. With greater freedom to tune parameters, along with current progress in circuit quantum electrodynamics and photonics, we hope our results can motivate creation of experimental platforms for realising sensors and other potential applications based on the synergy between hyperbolic connectivity and non-Hermiticity.
## II Understanding hyperbolic geometry
We briefly summarize here essential concepts in hyperbolic geometry as a foundation for the rest of our results. We will be interested in space-filling tilings of regular \(p\)-gons, with each vertex having a coordination number of \(q\). This is denoted in the Schlafli notation as a {\(p\),\(q\)} tessellation. In this notation, the square lattice is a {4,4} tessellation, and similarly, the honeycomb and triangular lattices are the {6,3} and {3,6} tessellations, respectively. Due to the restriction imposed by the angle sum property, these are the only permissible tessellations for Euclidean space. The absence of this restriction allows hyperbolic systems to have an infinite number of realisations of tessellations with higher \(p\) and \(q\) indices. These tessellations are projected onto flat space using the Poincare disk model on a unit circle with distances measured in the Poincare metric to obtain the lattices shown in Fig. 1 for the {8,3} and {10,5} tessellations.
For hyperbolic lattices, the lattice translation operators on the tessellation are elements of the discrete Fuchsian symmetry group [35], whose exact forms depend on the representation used for the lattice geometry. The discrete spatial symmetries of the tessellation allow the introduction of \(k\)-space momenta and automorphic Bloch wavefunctions for a hyperbolic band theory [9] in the same way as for Euclidean lattices. However, the elements of the Fuchsian group do not commute, and the non-Abelian nature of the Fuchsian group marks a striking deviation from Euclidean geometry. An automorphic band theory is expected to be incomplete for a hyperbolic lattice since the \(U(1)\) phase attachment for automorphic wave functions forms a lower dimensional irreducible representation for the non-Abelian Fuchsian group. We shall subsequently visit the deviation between results from open boundary numerical diagonalisation and automorphic Bloch theory for different non-Hermitian parameters.
### Generators of Translations in Hyperbolic Tessellations
Understanding the algebra of the translational operators is a crucial first step for a band theoretic description of hyperbolic lattices. For Euclidean geometries, the group of lattice translations forms a normal subgroup for the manifold whose quotient group gives the unit cell. Due to the commutative nature of the lattice translations, the d-dimensional Brillouin zone obtained is a \(\mathcal{S}^{d}\) surface with one-dimensional irreducible representations. As a result, using a \(U(1)\) phase to describe systems with discrete translational symmetry gives the Bloch theorem, which describes the band structure for periodic systems in flat space.
On the other hand, for a hyperbolic surface, the quotient group of the hyperbolic manifold under the non-Abelian Fuchsian translational symmetry group gives a higher genus Riemann surface. This leads to a higher genus representation for the hyperbolic Brillouin zone, which is shown to be the Jacobian of the resulting Riemann surface [9]. This hyperbolic Brillouin zone has higher dimensional irreducible representations and cannot be described exactly by a \(U(1)\) Bloch theorem.
The explicit form for the translation group operators \((\hat{\gamma}_{1},\hat{\gamma}_{2},...\hat{\gamma}_{n})\) is given in Ref. [8]. We will use the same convention to attach the Bloch phase for inter-unit cell hoppings. According to the Bloch ansatz, a translation from a unit cell site \(i\) to a unit cell \(f\) carried out by subsequent applications of \(\hat{\gamma}_{n_{1}},\hat{\gamma}_{n_{2}},...\hat{\gamma}_{n_{m}}\) results in an addition of a \(U(1)\) phase \(\phi=\sum_{i=1}^{m}k_{n_{i}}\), where \(k_{n_{i}}\in[0,2\pi]\) is the phase associated with the application of \(\hat{\gamma}_{n_{i}}\), with \(n=1,2,...m\). The number of independent group generators is determined by the exact structure (\(p\) and \(q\)) of the lattice. For the case of the {10,5} tessellation, we have four independent generators \((\hat{\gamma}_{1},\hat{\gamma}_{2},\hat{\gamma}_{3},\hat{\gamma}_{4})\), resulting in a four-dimensional Brillouin zone with a genus of 2.
## III The Hyperbolic Tight-Binding Model
### Description of the model in \(k\)-space
As elucidated in Ref. [34], the {10,5} tessellation has a two-site unit cell and is part of the \(\{2(2g+1),2g+1\}\) infinite family of tessellations. We obtain two sublattices A and B,
similar to that for graphene, with each sublattice forming a \(\{2g+1,2(2g+1)\}\) tessellation of its own. To proceed with the hyperbolic Bloch ansatz in a tight binding model, we choose a unit cell and identify the nearest neighbours for the A sublattice in terms of the translation operators for the B sites. For a particular unit cell, the nearest neighbours of site A can be expressed in terms of the coordinates of site B in the same unit cell,
Figure 2: **Visualising the energy spectra for Hermitian hyperbolic graphene.** (a) The energy spectrum with \(M=0\) produces the characteristic Dirac cones when \(k_{3}=0\), \(k_{4}=\pi\) in which case \(h(\mathbf{k})=1+e^{ik_{1}}+e^{ik_{2}}\) becomes the phase factor for Euclidean graphene. (b) Nodal lines are obtained with \(M=0\) for \(k_{3}=2\pi/3\) and \(k_{4}=4\pi/3\), and the condition for band touching becomes \(k_{1}=k_{2}\pm\pi\). Modulating \(k_{3}\), \(k_{4}\) gives different shapes of nodal surfaces. (c) The surface represents the allowed values of \(k_{1},k_{2},k_{3}\) at which there is a \(k_{4}\) that produces a node. The lines shown in red along the surface are the values of \(k_{1},k_{2},k_{3}\) where \(k_{4}=0\) gives a node. (d) Adding an on-site potential (\(M\neq 0\)) opens a gap in the system. For \(k_{3}=0\), \(k_{4}=\pi\) a gapped spectrum (\(\Delta E_{g}=2M\)) is obtained. The linear dispersion near the extrema is replaced by a quadratic dispersion (\(|E|=\frac{(\Delta k)^{2}}{2M}\)). (e) For \(k_{3}=2\pi/3\), \(k_{4}=4\pi/3\), a gapped system is obtained with the linear scaling being replaced by a quadratic scaling as in (d). (f) Energy spectrum through the Brillouin zone. The points defined here are \(\Gamma\)(0,0,0,0), A(\(\pi,0,0,0\)), B(\(\pi,0,\pi,-\pi\)), C(\(2\pi/3,-2\pi/3,0,\pi\)), and D(\(\pi,\pi,\pi,\pi\)).
acted on by the translation operators \(\gamma_{1,2,3,4}\) to get the relative translation operations \(\mathbb{I}\), \(\gamma_{1}\gamma_{2}^{-1}\),\(\gamma_{2}\gamma_{3}^{-1}\),\(\gamma_{1}\gamma_{4}\gamma_{3}^{-1},\gamma_{2}\gamma_{3}^{-1}\gamma_{4}\gamma_{3}^{-1}\). Since we are concerned only with the \(U(1)\) phases for a hyperbolic Bloch theory, we can use a linear change of basis for the \(\mathbf{k}\)-vector corresponding to the four-dimensional wave vector parametrising the Brillouin zone to get a simpler form for the Bloch phases. Using this transformation, the Bloch phases acquired due to the translation become \(1,e^{ik_{1}},e^{ik_{2}},e^{ik_{3}},e^{ik_{4}}\), respectively [8].
In terms of the Bloch phases, we can write a tight binding Hamiltonian in the sublattice basis as
\[\hat{H}_{0}=\begin{pmatrix}V_{AA}&-t_{0a}-t_{1a}e^{ik_{1}}-t_{2a}e^{ik_{2}}-t_ {3a}e^{ik_{3}}-t_{4a}e^{ik_{4}}\\ -t_{0b}-t_{1b}e^{-ik_{1}}-t_{2b}e^{-ik_{2}}-t_{3b}e^{-ik_{3}}-t_{4b}e^{-ik_{4}} &V_{BB}\end{pmatrix}, \tag{1}\]
where \(V_{AA}\) and \(V_{BB}\) are the on-site potentials at sublattices A and B, \(t_{ia}\) is the hopping from different B sites to the A site, and \(t_{ib}\) represents hopping from different A sites to the B site. We shall take the Hermitian hoppings to be real and equal to unity. The usual flat-space graphene can be obtained from this model by setting \(k_{3}=0\) and \(k_{4}=\pi\). We note that tuning \(k_{3}\) and \(k_{4}\) provides additional degrees of freedom to the off-diagonal term \(h(\mathbf{k}):=1+e^{ik_{1}}+e^{ik_{2}}+e^{ik_{3}}+e^{ik_{4}}\). We will be treating the \(\mathbf{k}\)-vectors as parameters in order to study the positions of EPs and nodal points in cross-sections of the Brillouin zone for ease of representation. The simplicity of the Hamiltonian allows us to obtain analytical expressions for various quantities, which will be useful for analysing exceptional contours and other features upon introducing non-Hermiticity.
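As a concrete illustration of Eq. (1), the following minimal NumPy sketch (not the code used for the figures) builds the \(2\times 2\) Bloch Hamiltonian with isotropic unit hoppings and evaluates it at the graphene-limit point C\((2\pi/3,-2\pi/3,0,\pi)\) listed in the caption of Fig. 2(f), where \(h(\mathbf{k})\) vanishes and the two bands touch at \(E=0\):

```python
import numpy as np

def h_k(k):
    """Structure factor h(k) = 1 + sum_i exp(i k_i) entering the off-diagonal of Eq. (1)."""
    return 1 + np.exp(1j * np.asarray(k)).sum()

def bloch_hamiltonian(k, M=0.0, t=1.0):
    """2x2 Bloch Hamiltonian with isotropic unit hoppings and sublattice potential +/- M (V_0 = 0)."""
    h = t * h_k(k)
    return np.array([[M, -h],
                     [-np.conj(h), -M]])

# Graphene limit (k3 = 0, k4 = pi) at the point C(2*pi/3, -2*pi/3, 0, pi) of Fig. 2(f).
k_C = (2 * np.pi / 3, -2 * np.pi / 3, 0.0, np.pi)
print(np.linalg.eigvalsh(bloch_hamiltonian(k_C)))   # ~ [0, 0]: the bands touch at E = 0
```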
In the second quantized form, the Hamiltonian \(H\) can be written in terms of the creation and annihilation operators for the unit cell with the sublattice orbitals included through the Pauli matrices.
\[H=\sum_{i}V_{0}c_{i}^{\dagger}\sigma_{0}c_{i}+\sum_{i}Mc_{i}^{\dagger}\sigma_ {z}c_{i}+\sum_{\langle i,j\rangle}tc_{i}^{\dagger}\sigma_{x}c_{j}, \tag{2}\]
In terms of the \(k\)-space Hamiltonian \(\hat{H}_{0}\) defined earlier, \(V_{0}=(V_{AA}+V_{BB})/2\), and \(M=(V_{AA}-V_{BB})/2\) define the chemical potential and the sublattice potential, respectively. Since \(V_{0}\) is simply a constant shift in the spectrum, we will set \(V_{0}=0\), such that the sublattice potential (\(\pm M\)) is the only diagonal term. The hopping is restricted to be between nearest
neighbour unit cells.
In the sublattice basis, the time-reversal symmetry operator is just the complex conjugation operator \(\mathcal{T}=\mathcal{K}\). The tight-binding Hamiltonian obeys time-reversal symmetry since \(\mathcal{T}H\mathcal{T}^{-1}=H\). On the other hand, the particle-hole symmetry interchanges the sublattice basis, and the operator has the form \(\mathcal{P}=\sigma_{x}\mathcal{K}\). \(\mathcal{P}H\mathcal{P}^{-1}\neq-H\), and hence the Hermitian system only has time-reversal symmetry.
### Introducing non-hermiticity
We will introduce non-Hermiticity of various kinds through on-site gain and loss (terms proportional to \(\sigma_{z}\)), and non-reciprocal hopping (terms proportional to \(\sigma_{x,y}\)), using terms of the form \(i\delta\sigma_{z}\), \(i\Omega\sigma_{x}\), and \(i\Gamma\sigma_{y}\). For simplicity, we will consider the non-Hermitian terms to be independent of \(k\). All of these terms will break the time-reversal symmetry of the Hermitian model, with \(\mathcal{T}H(\mathbf{k})\mathcal{T}^{-1}\neq H(-\mathbf{k})\). We will see how this anisotropy in the non-Hermiticity gives a richer complex energy spectrum with interesting windings of energy bands around EPs. Further, the high dimensionality of the \(k\)-space endows greater freedom in tuning the parameters of the system, allowing us to realise rich non-Hermitian phenomena. This freedom will have an even greater significance in obtaining higher-order EPs for tessellations with larger unit cells, such as the four-band {8,4} model, which will be discussed later.
## IV Model Diagnostics
### Hermitian regime
We first briefly summarize the properties of the Hermitian model. In the Hermitian regime, the parameters at our disposal are the on-site potential \(\pm M\) and the hopping strength \(t\) (assumed to be isotropic). The energy spectrum is similar to Euclidean graphene, with the additional freedom furnished by \(k_{3}\) and \(k_{4}\) providing additional tuning parameters on the \(k_{1}\)-\(k_{2}\) cross-section. Plotting cross-sections for the energy spectrum for different choices of \(k_{3},k_{4}\) can give nodal points as well as nodal lines as shown in Fig. 2(a)-(b). The nodal surface is visualised in Fig. 2(c), where those values of \(k_{1},k_{2},k_{3}\) are plotted for which there exists a node for some value of \(k_{4}\). Fig. 2(f) shows the variation of the energy spectrum through different points in the four-dimensional Brillouin zone.
Figure 3: **Energy spectra for the non-Hermitian cases with gain and loss (\(\delta\)), and non-reciprocity (\(\Gamma\)).** (a)-(c) The shape of the nodal contour on the projected \(k_{1}-k_{2}\) plane with varying \(k_{3}\) and \(k_{4}\) for the case of gain and loss. (a) For \(\delta=1\), \(k_{3}=k_{4}=0\) gives the limiting case of a single band intersection point \(\vec{k_{0}}=(\pi,\pi,0,0)\). The dispersion near this point is quadratic in \(\mathbf{k}-\mathbf{k_{0}}\). (b) A single nodal loop is obtained for \(\delta=1,k_{3}=k_{4}=\pi\). The spectrum scales as \(E\thicksim(|\mathbf{k}-\mathbf{k_{0}}|)^{\frac{1}{2}}\) near the nodal contour. (c) Nodal contours obtained for \(\delta=1,k_{3}=7\pi/6,k_{4}=\pi/4\) (d) The energy spectrum on the same path (but with \(\delta=1\)) as in Fig. 2 (f). The points where both the energy levels become degenerate are marked with hollow circles, and occur when \(|h(\mathbf{k})|=\delta\). (e)-(g) The energy bands and corresponding nodal contours for different choices of parameters \((\Gamma,k_{3},k_{4})\), i.e., non-reciprocal hopping case. (e) Similar to (a), one obtains a single nodal point where the bands touch. The choice of parameters is \(\Gamma=0.5,k_{3}=\arccos(3/4),k_{4}=2\pi-k_{3}\). The dispersion is quadratic in \(k\) near the nodal point. (f) Absence of nodal points, with a finite complex gap in the spectrum for parameter values \(\Gamma=0.7,k_{3}=k_{4}=\pi/2.\) (g) For \(\Gamma=0.5,k_{3}=4\pi/3,k_{4}=4\pi/5\), open nodal contours are obtained, where the spectrum scales as \(E\thicksim(|\mathbf{k}-\mathbf{k_{0}}|)^{\frac{1}{2}}\), where \(k_{0}\) is a point on the nodal contour. (h) The energy spectrum is shown on the above-mentioned path, with \(\Gamma=1\). There are two nodal points where both the real and imaginary parts of the eigenvalues become equal.
A sublattice potential \(M\) introduces a gap (equal to \(2M\)) in the system, and the linear dispersions at the Dirac points are replaced by quadratic dispersions at the band extrema, as shown in Fig. 2(d)-(e).
### Non-Hermiticity and Phase Rigidity
We will introduce non-Hermiticity in two ways - through an imaginary on-site gain and loss and through non-reciprocal hopping within the unit cell. For the non-Hermitian Hamiltonian, the usual orthogonality of eigenvectors is replaced by the weaker condition of _biorthogonality_[36]. This gives us the eigenvectors for \(H\) (right eigenkets \(|R_{i}\rangle\)) and \(H^{\dagger}\) (left eigenkets \(|L_{i}\rangle\)) obeying the biorthogonality condition
\[\langle L_{i}|R_{j}\rangle=\delta_{ij}, \tag{3}\]
where \(\delta_{ij}\) is the Kronecker delta function. Recall that at EPs, two or more eigenvectors coalesce and hence the above relation falls through since if \(|R_{i}\rangle=|R_{j}\rangle\) then their inner products with \(\langle L_{i}|\)(expected to be 1 and 0 respectively) must also be equal, contrary to the
Figure 4: **Exceptional nodal Surfaces for non-Hermitian hyperbolic graphene.** (a) The exceptional nodal surface values of \(k_{1},k_{2},k_{3}\) for which there is a \(k_{4}\) that results in a node in the four-dimensional Brillouin zone for \(\delta=0.5\) is shown in pink. The underlying blue region is the nodal contour for when \(k_{4}=\pi/2\). (b) The nodal contour for \(\Gamma=0.5\), where the contour becomes a two-dimensional surface compared to the three-dimensional volume in (a). This is due to the additional constraint placed by the \(2i\Gamma\text{Im}(h)\) term in Eq. 9, thus reducing the degrees of freedom of the spectrum.
statement of the condition. We will use this as a measure of the proximity to an EP using the phase rigidity, \(r\), which measures the extent of mixing of the wave functions around an EP through the deviation of the normalisation from unity [37]. The phase rigidity for a band is defined as
\[r_{j}=\frac{\langle L_{j}|R_{j}\rangle}{\langle R_{j}|R_{j}\rangle}. \tag{4}\]
This phase rigidity provides a quantitative measure of the biorthogonality. Furthermore, the vanishing of the phase rigidity near an EP allows defining an exponent around the
Figure 5: **Diagnosing exceptional points using phase rigidity.** (a) The energy spectrum as a function of \(\delta\) for \(\mathbf{k}=(\pi/2,\pi/3,3\pi/4,\pi)\). The bands touch each other when \(\delta=|h(\mathbf{k})|\approx 2.58145\). (b) The scaling of the energy very close to the EP (\(\delta=|h(\mathbf{k})|+\epsilon\), where \(\epsilon\) is a perturbation from the EP). The inset shows the logarithmic scale plot, which has a slope of \(0.5\), revealing a square root dependence on \(\epsilon\) characteristic of second-order EPs. (c) The magnitude of the phase rigidity, \(r\), as a function of \(\delta\) for the same \(\mathbf{k}\). (d) For a system with both non-reciprocity (\(\Gamma\)) and on-site gain and loss (\(\delta\)) parametrised by \(\delta=r_{c}\cos\theta,\Gamma=r_{c}\sin\theta\), the spectrum is independent of \(\theta\). For \(k_{1}=4\pi/3,k_{2}=0,k_{3}=2\pi/3,k_{4}=0\), the condition for obtaining a gapless point is \(r_{c}=2\). The phase rigidity is shown to go to zero as a function of \(r_{c}\), with \(\theta=\pi/4\). (e) The circular exceptional contour in the \(\delta-\Gamma\) parameter space for \(\mathbf{k}=(0,0,2\pi/3,4\pi/3)\).
EPs. For instance, the scaling exponents around an \(N\)-th order EP can be \((N-1)/N\) or \((N-1)/2\)[38]. Remarkably, phase rigidity is not just a theoretically defined quantity, but has also been experimentally measured [39]. We will use phase rigidity to diagnose the EPs in our hyperbolic lattice models.
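As a quick numerical illustration of Eq. (4), the sketch below (assuming NumPy and SciPy are available) evaluates the magnitude of the phase rigidity for the gain-and-loss Hamiltonian introduced in the next subsection, at the \(\mathbf{k}\)-point used in Fig. 5, and shows it collapsing as \(\delta\) approaches \(|h(\mathbf{k})|\); it is a schematic check rather than the code behind the figures:

```python
import numpy as np
from scipy.linalg import eig

def phase_rigidity(H):
    """|r_j| = |<L_j|R_j>| / <R_j|R_j> for each band of a (possibly non-Hermitian) matrix H."""
    w, vl, vr = eig(H, left=True, right=True)
    overlap = np.einsum('ij,ij->j', vl.conj(), vr)        # <L_j|R_j>, columns ordered as in w
    norm = np.einsum('ij,ij->j', vr.conj(), vr).real      # <R_j|R_j>
    return w, np.abs(overlap) / norm

def H_gain_loss(k, delta):
    h = 1 + np.exp(1j * np.asarray(k)).sum()
    return np.array([[1j * delta, -h], [-np.conj(h), -1j * delta]])

k = (np.pi / 2, np.pi / 3, 3 * np.pi / 4, np.pi)           # k-point of Fig. 5
delta_EP = np.abs(1 + np.exp(1j * np.asarray(k)).sum())    # EP at delta = |h(k)| ~ 2.58145
for frac in (0.5, 0.9, 0.99, 0.999):
    _, r = phase_rigidity(H_gain_loss(k, frac * delta_EP))
    print(f"delta = {frac:.3f}|h|  ->  |r| = {np.round(r, 4)}")
```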
#### iii.1.1 On-Site Gain and Loss
First, we add an on-site gain and loss of strength \(\delta\), which amounts to adding \(i\delta\sigma_{z}\) to the Bloch Hamiltonian. As a result, the spectrum is no longer entirely real. In fact, the eigenvalues \(\lambda=\pm\sqrt{|h|^{2}-\delta^{2}}\) will be purely real or purely imaginary.
For a non-zero \(M\), the square root has a constant non-zero imaginary part which prohibits any band touchings, even in the individual real and imaginary spectra. The locus of the real and imaginary energies (\(E_{r}\) and \(E_{i}\)) is the intersection of three hyperbolic surfaces, given by,
\[E_{r}E_{i}=M\delta, \tag{5}\] \[E_{r}^{2}-E_{i}^{2}\leq M^{2}+25-\delta^{2}, \tag{6}\] \[E_{r}^{2}-E_{i}^{2}\geq M^{2}-\delta^{2}. \tag{7}\]
Evidently, increasing the non-Hermiticity strength \(\delta\) will lead to a more prominent imaginary spectrum. For \(M=0\), we find that either the real or the imaginary part of the eigenvalues will vanish, and we can recover nodal contours where the eigenvalues are zero, with the cross-sections shown in Fig. 3(a)-(d) for some choices of \(k_{3},k_{4}\). The nodal contours mark the transition from purely real to purely imaginary eigenvalues. In Fig. 3(d), a one-dimensional contour is parametrised in the Brillouin zone, which shows the appearance of EPs marked in black. Contours connecting these EPs are characterised by purely real or purely imaginary energy eigenvalues, and are called non-Hermitian Fermi arcs [18]. While these arcs occur trivially due to the behaviour of the band structure for this case, we will observe their existence even for non-reciprocal hopping in the next section. In the limit \(\delta>5t\), the spectrum becomes purely imaginary, and the bands do not touch in the four-dimensional Brillouin zone.
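Because the eigenvalues are available in closed form, the nodal (exceptional) contours \(|h(\mathbf{k})|=\delta\) can be located by a simple grid scan. A minimal sketch for the cross-section of Fig. 3(b) (\(\delta=1\), \(k_{3}=k_{4}=\pi\), \(M=0\)) could read:

```python
import numpy as np

def h_k(k1, k2, k3, k4):
    return 1 + np.exp(1j * k1) + np.exp(1j * k2) + np.exp(1j * k3) + np.exp(1j * k4)

# Setting of Fig. 3(b): delta = 1, k3 = k4 = pi, M = 0; nodes sit on |h(k)| = delta.
delta, k3, k4 = 1.0, np.pi, np.pi
ks = np.linspace(0, 2 * np.pi, 401)
k1, k2 = np.meshgrid(ks, ks)
habs = np.abs(h_k(k1, k2, k3, k4))

near_contour = np.isclose(habs, delta, atol=5e-3)   # grid points close to the nodal loop
print("points near the exceptional contour:", int(near_contour.sum()))
print("fraction of the cross-section with purely real spectrum (|h| > delta):",
      float((habs > delta).mean()))
```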
#### iii.1.2 Non-Reciprocal Hopping
Another way to introduce non-Hermiticity is through non-reciprocal hopping within the unit cell. This can be done by adding terms of the form \(i\Omega\sigma_{x}\) or \(i\Gamma\sigma_{y}\). These terms produce an imaginary \(k\)-dependent term in the expression for the eigenvalues, given by
\[i\Omega\sigma_{x}:E_{\pm}=\pm\sqrt{|h|^{2}+M^{2}-\Omega^{2}+2i \Omega\text{Re}(h)}, \tag{8}\] \[i\Gamma\sigma_{y}:E_{\pm}=\pm\sqrt{|h|^{2}+M^{2}-\Gamma^{2}-2i \Gamma\text{Im}(h)}. \tag{9}\]
To obtain a node, both the real and imaginary parts in the square root must go to zero. This adds additional structure to the nodes which is sensitive to the presence of the non-Hermitian term. The shape of the nodal surface drastically changes upon the addition of a
Figure 6: **Inter-Band winding for the {10,5} tessellation.** (a) Illustration of the winding of energy levels for the {10,5} tessellation with the on-site gain and loss. For the parameter choice \(k_{1}=2\pi/3,k_{2}=\pi/3,k_{3}=0,k_{4}=0\) and \(\delta=2\), the Hamiltonian is not defective, and we get trivial winding of the energy levels with respect to a parametric perturbation \(e^{i\theta}\sigma_{x}\). (b) \(\delta=2\sqrt{3}\) makes the Hamiltonian rank-deficit and gives an EP at the chosen \(\vec{k}\)-point. In this case, we can see the inter-band winding visualised through the exchange of the energy levels in one complete cycle of the perturbation.
very small non-reciprocity and cannot be treated as a perturbation on the original surface, as shown in Fig. 4. Typical two-dimensional cross-sections of the eigenspectra are shown in Fig. 3(e)-(h), with different characters for the nodal regions, which could be a nodal point (e), a nodeless cross-section (f), or a nodal contour (g). Fig. 3(h) shows the variation of the spectrum on a one-dimensional contour in the Brillouin zone. The appearance of the nodal EPs is shown in black, where both the real and imaginary parts of the eigenvalues are equal. These nodes are connected through non-Hermitian Fermi arcs where the real part of the energy goes to zero, as shown by our choice of contour. For the case of the \(i\Gamma\sigma_{y}\), the condition for the appearance of nodes is obtained to be
\[|h|^{2}=\Gamma^{2}-M^{2},\] \[\Gamma\text{Im}(h)=0.\]
Therefore, for non-zero \(\Gamma\), we require \(h=\pm\sqrt{\Gamma^{2}-M^{2}}\). The requirement for the imaginary part to be zero results in lower dimensional nodal surfaces on the \(k_{1},k_{2}\) cross-section as shown in Fig. 4(b). Similarly, the nodal spectrum for non-reciprocity \(i\Omega\sigma_{x}\) is obtained by setting \(h=\pm i\sqrt{\Omega^{2}+M^{2}}\).
### Identifying Exceptional Points
At the EPs, the coalescing of the eigenvalues and eigenvectors causes the Hamiltonian to become defective due to the absence of a complete basis to span the Hilbert space [18]. EPs lead to degeneracies in the spectrum, which in our case can only occur if the eigenvalues are zero (since the spectrum is symmetric about \(E=0\)). Thus, the nodal surfaces that we obtained earlier correspond to the intersections of the Fermi arcs for the real and imaginary parts of the energy leading to second-order EPs. We will subsequently investigate the occurrence of higher-order EPs in a four-band model for the {8,4} tessellation.
The phase rigidity, \(r\), for an eigenvector measures the proximity to an EP. In the Hermitian limit, the left and right eigenkets are equal, and hence \(r=1\). As the non-Hermiticity is increased, the left and right eigenkets become unequal and their inner product starts to decrease as the eigenkets start to mix. At the EPs, the left and right eigenkets are orthogonal and \(r=0\). Therefore, analyzing the phase rigidity on momentum-space cross
sections indicates the proximity from an EP. For second-order EPs, the eigenspectrum shows a square-root scaling, which is indicated in Fig. 5(a)-(b). The complex eigenspectrum of the non-Hermitian system allows the definition of the winding of energy eigenvalues as a topological invariant [18]. One can define an analogous winding of the relative phase for the eigenvalues on a contour, \(\mathcal{C}\), encircling an EP, which will be quantised and indicates a coalescence of the eigenvectors. This vorticity \(\nu_{nm}\) for energy bands \(E_{n}\) and \(E_{m}\) can be defined as
\[\nu_{nm}=\frac{1}{2\pi}\oint_{\mathcal{C}}\nabla_{\mathbf{k}}\text{arg}(E_{n}( \mathbf{k})-E_{m}(\mathbf{k}))\cdot d\mathbf{k}, \tag{10}\]
For a contour \(\mathcal{C}\) encircling the EP, \(\nu_{nm}=\pm 1/2\) for the second-order EP in our model. This is shown in terms of a parametrisation of a circular path centred at the EP in Fig. 6, and leads to an exchange of the eigenvalues and eigenvectors across the EP.
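Numerically, the vorticity of Eq. (10) can be extracted by tracking the phase of the single-valued discriminant \((E_{+}-E_{-})^{2}\) along the loop and dividing its total winding by \(4\pi\). A minimal sketch for the parametric loop of Fig. 6(b) (perturbation \(\epsilon e^{i\theta}\sigma_{x}\) with a small amplitude \(\epsilon\); the \(\mathbf{k}\)-point and \(\delta\) are taken from the caption) is:

```python
import numpy as np

k = np.array([2 * np.pi / 3, np.pi / 3, 0.0, 0.0])
h = 1 + np.exp(1j * k).sum()
delta = np.abs(h)                               # = 2*sqrt(3): EP condition quoted for Fig. 6(b)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
H0 = np.array([[1j * delta, -h], [-np.conj(h), -1j * delta]])

thetas = np.linspace(0.0, 2 * np.pi, 2001)
disc = []
for th in thetas:                               # encircle the EP with the perturbation eps*exp(i*theta)*sigma_x
    H = H0 + 0.05 * np.exp(1j * th) * sigma_x
    disc.append(np.trace(H) ** 2 - 4 * np.linalg.det(H))   # (E_+ - E_-)^2, single-valued in theta
nu = np.diff(np.unwrap(np.angle(disc))).sum() / (4 * np.pi)
print(f"vorticity nu ~ {nu:+.2f}")              # ~ +/- 1/2 around a second-order EP
```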
### Higher order Exceptional Points in {8,4} Tesselations
In recent years, higher-order EPs - where more than two eigenvalues and eigenvectors merge - have been of significant interest owing to their remarkable properties beyond second-order EPs. Here, we next show how hyperbolic lattice models allow such higher-order EPs in the presence of non-Hermiticity. As an example, we consider the {8,4} tessellation, which is part of the {4\(g\), 4} infinite family of tessellations with genus 2 [34]. The unit cell has four sites and the Bolza lattice is an {8,8} tessellation, with the Fuchsian group generated by four elements. We introduce the tight-binding Hamiltonian with balanced complex gain and loss, \(\pm i\delta\), on two sites
\[H_{0}(\mathbf{k})=\begin{pmatrix}i\delta&1+e^{i(k_{1}-k_{2})}&0&e^{ik_{1}}+e^{ -ik_{4}}\\ 1+e^{i(k_{2}-k_{1})}&0&1+e^{i(k_{2}-k_{3})}&0\\ 0&1+e^{i(k_{3}-k_{2})}&0&1+e^{i(k_{3}-k_{4})}\\ e^{-ik_{1}}+e^{ik_{4}}&0&1+e^{i(k_{4}-k_{3})}&-i\delta\end{pmatrix}. \tag{11}\]
To study the EPs generated in this model, we use the Newton polygon method elaborated in Ref. [38]. We begin by introducing a perturbation, \(\epsilon\), to the hopping between sites A and B such that the perturbed Hamiltonian becomes
Figure 7: **Inter-band winding and higher-order EPs in {8,4} tessellation.** (a) The eigenvalue winding for the second order EP. The bands in red show an exchange whereas the bands in blue return to their original values under a cyclic perturbation with \(\epsilon=0.07e^{i\theta}\). This gives a second-order EP. The parameters taken here are \(k_{1}=\pi/4,k_{2}=2\pi/3,k_{3}=3\pi/5,k_{4}=61\pi/60\) and \(\delta\approx 2.07923\). (b) The fourth order EP, where \(k_{1}=k_{2}=0,k_{3}=k_{4}=\arccos{(2\sqrt{5}-5)}\) and \(\delta\approx 3.14461\). Panel (c) shows the Newton polygon diagram for the {8,4} model considered above. The hollow circles show the terms that are absent in the characteristic polynomial. The blue circles are the terms that need to vanish to obtain a second-order EP, whose bounding line is shown in blue. The red point is the additional term that needs to vanish to reach a fourth-order EP, whose bounding line is shown in red. Panels (d) and (e) show the energy spectrum near the EP for the second and fourth-order EPs, respectively. The scaling for the spectra is shown in (f) and (g), respectively. The lowest-order perturbation terms in the spectra have scaling coefficients 0.5 and 0.25, respectively.
\[H(\mathbf{k})=\begin{pmatrix}i\delta&1+e^{i(k_{1}-k_{2})}+\epsilon&0&e^{ik_{1}}+e^{ -ik_{4}}\\ 1+e^{i(k_{2}-k_{1})}+\epsilon&0&1+e^{i(k_{2}-k_{3})}&0\\ 0&1+e^{i(k_{3}-k_{2})}&0&1+e^{i(k_{3}-k_{4})}\\ e^{-ik_{1}}+e^{ik_{4}}&0&1+e^{i(k_{4}-k_{3})}&-i\delta\end{pmatrix}. \tag{12}\]
Briefly, we use a Puiseux-series-based approach to characterise the leading-order expansion of a Hamiltonian in the vicinity of an EP. Using the leading-order term in the expansion, we can find the order of an EP using the characteristic \(1/N\) scaling for an \(N\)-th order EP. For a non-trivial contour parametrised by some \(\epsilon\), the characteristic polynomial \(P(\omega)=\det(H-\omega\mathbb{I})=\sum_{m,n}h_{mn}\omega^{m}\epsilon^{n}\) for the Hamiltonian with a perturbation of strength \(\epsilon\) can be written as a polynomial in the two variables \(\omega\) and \(\epsilon\). Plotting the powers \((m,n)\) of the monomials \(h_{mn}\omega^{m}\epsilon^{n}\) in this polynomial gives us the order of the EP using the slope of the segment with all the points on or to the right of it.
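The exponent pairs \((m,n)\) entering the Newton polygon can be enumerated symbolically. The sketch below (assuming SymPy is available; the numerical \(\mathbf{k}\) and \(\delta\) values are the approximate ones quoted in the caption of Fig. 7(b)) expands \(\det(H-\omega\mathbb{I})\) in \(\omega\) and \(\epsilon\) and prints the magnitude of each coefficient, so that the numerically vanishing monomials, i.e. the hollow points of the diagram, can be read off:

```python
import numpy as np
import sympy as sp

omega, eps = sp.symbols('omega epsilon')

# Approximate parameters quoted for the fourth-order EP of Fig. 7(b).
k1 = k2 = 0.0
k3 = k4 = float(np.arccos(2 * np.sqrt(5) - 5))
delta = 3.14461
e = lambda x: complex(np.exp(1j * x))

H = sp.Matrix([
    [1j * delta,           1 + e(k1 - k2) + eps, 0,                 e(k1) + e(-k4)],
    [1 + e(k2 - k1) + eps, 0,                    1 + e(k2 - k3),    0],
    [0,                    1 + e(k3 - k2),       0,                 1 + e(k3 - k4)],
    [e(-k1) + e(k4),       0,                    1 + e(k4 - k3),    -1j * delta]])

P = sp.expand((H - omega * sp.eye(4)).det())
for (m, n), c in sorted(sp.Poly(P, omega, eps).terms()):
    print(f"omega^{m} * eps^{n}:  |coefficient| = {abs(complex(c)):.3e}")
# Monomials whose coefficients are numerically negligible correspond to the hollow points
# of the Newton polygon; the slope of the lower bounding segment gives the EP order.
```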
The Newton polygon diagram for our Hamiltonian is shown in Fig. 7 (c). Using the analytic form for the characteristic polynomial, we can make different parameter choices to eliminate the (0,0) and (1,0) points on the diagram to obtain a second-order EP in the system. Subsequently, we can also remove the (2,0) point to obtain a fourth-order EP. The energy spectra for the four bands are shown in Fig. 7(d)-(e) for the second and fourth-order EPs, respectively. The \(E_{3},E_{4}\) bands in Fig. 7(d) do not mix and remain purely real. The scaling for \(E_{1},E_{2}\) bands is shown in Fig. 7(f), revealing a square root dependence on the parameters.
Similarly, for the fourth-order EP, all four bands participate in the mixing process and scale more steeply as a function of \(\delta\). The logarithmic scaling shows a \(\delta^{1/4}\) dependence on the parameters, as expected. The expected scaling is shown for the spectrum near the EPs in both cases in Fig. 7(f) and (g), thereby confirming the higher order EPs in this model.
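A complementary, purely numerical check of these scalings is to perturb the Hamiltonian by a small \(\epsilon\) and fit the log-log slope of the eigenvalue shift; slopes close to \(1/2\) and \(1/4\) indicate second- and fourth-order EPs, respectively. The sketch below uses the approximate parameters from the Fig. 7 caption, so the fitted slopes are only indicative of the exact values:

```python
import numpy as np

def H_84(k, delta, eps=0.0):
    """Perturbed {8,4} Bloch Hamiltonian of Eq. (12)."""
    k1, k2, k3, k4 = k
    e = np.exp
    return np.array([
        [1j * delta,                  1 + e(1j * (k1 - k2)) + eps, 0,                     e(1j * k1) + e(-1j * k4)],
        [1 + e(1j * (k2 - k1)) + eps, 0,                           1 + e(1j * (k2 - k3)), 0],
        [0,                           1 + e(1j * (k3 - k2)),       0,                     1 + e(1j * (k3 - k4))],
        [e(-1j * k1) + e(1j * k4),    0,                           1 + e(1j * (k4 - k3)), -1j * delta]])

def fitted_exponent(k, delta, eps_values):
    """Log-log slope of the largest eigenvalue shift vs eps; ~1/N near an N-th order EP."""
    E0 = np.linalg.eigvals(H_84(k, delta))
    shifts = [max(np.min(np.abs(E - E0)) for E in np.linalg.eigvals(H_84(k, delta, eps)))
              for eps in eps_values]
    return np.polyfit(np.log(eps_values), np.log(shifts), 1)[0]

eps_values = np.logspace(-5, -2, 10)
k_2nd = (np.pi / 4, 2 * np.pi / 3, 3 * np.pi / 5, 61 * np.pi / 60)                    # Fig. 7(a)
k_4th = (0.0, 0.0, np.arccos(2 * np.sqrt(5) - 5), np.arccos(2 * np.sqrt(5) - 5))      # Fig. 7(b)
print("slope near the second-order EP:", fitted_exponent(k_2nd, 2.07923, eps_values)) # expected ~ 0.5
print("slope near the fourth-order EP:", fitted_exponent(k_4th, 3.14461, eps_values)) # expected ~ 0.25
```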
## V Implications in real space: Poincare disk
Till now, we have focused on the reciprocal space picture of the exceptional contours. Next, we construct a real space system for the {10,5} tessellation using the process of circular inversion as outlined in Ref. [11]. This procedure allows the construction of successive epochs of the hyperbolic lattice recursively. The number of sites in each successive epoch increases
exponentially, and we limit the system size to four epochs (which amounts to 7040 lattice sites in real space).
Defining the sublattice and unit cells is a recursive variation of the dimer covering problem on the hyperbolic lattice and results in a few dangling sites at the boundary which are not paired into any unit cell. This manifests itself as a perturbation that breaks the tenfold rotational symmetry of the lattice.
Figure 8: **Energy spectra and densities of states.** (a) The density of states in a cumulative probability distribution for the \(k\)-space model with periodic boundaries and the open boundary system. (b) The complex energy spectra with on-site gain and loss (\(\delta=0.2\)). The open boundary spectrum is a subset of the periodic boundary condition spectrum. (c) The cumulative density of states for the system with \(\delta=0.2\). The density is measured with respect to only the real part of the energy, leading to a spike in the probability distribution at \(E=0\) corresponding to the spectrum along the imaginary axis. (d) A kernel density estimate for the density of states with \(\delta=0.2\). The corresponding histogram for the finite distributions is smoothened to give the darker outlines. (e) The cumulative density of states for the system with non-reciprocity \(\Omega=0.2\), with the corresponding density of states profile shown in (f).
We numerically compute the density of states with respect to the real part of the energy. Fig. 8(a) shows the comparison of the density of states obtained for the Hermitian system under open boundaries and in the \(k\)-space. The deviation between the two is due to the macroscopic fraction of boundary sites for hyperbolic tessellations. We note that recently the process of using regular maps to obtain a boundary-removed periodic spectrum has been proposed [8; 15] to get better agreements with the \(k\)-space density of states. However, we will use open boundary spectra to probe localisation effects in the non-Hermitian hyperbolic system. Remarkably, boundary removed spectra for {\(p\),\(q\)} tessellations with higher '\(q\)', including the {10,5}, have been shown to display considerable deviation from the band theoretic predictions [8].
The spectra and density of states for the system with gain and loss are shown in Fig. 8(b)-(d). The open boundary spectrum is a subset of the periodic boundary condition spectrum as seen in Fig. 8(b) - this is consistent with the theorem outlined in Ref. [40]. Whereas the local density of states profile does not show any localisation features, the density of states
Figure 9: **Localisation effects with non-reciprocal hopping.** The density of states summed over all the eigenfunctions for the system with two different values of the non-reciprocity parameter (a) \(\Omega=0.2\) and (b) \(\Omega=0.4\). Although localisation can be seen at the outer epochs, the profile is not tenfold rotationally symmetric due to the asymmetry while defining unit cells for non-Hermitian hopping. The extent and magnitude of boundary localisation increase with \(\Omega\). The asymmetry in the localisation is more evident for larger values of \(\Omega\). (c) The inverse participation ratio (IPR) for eigenvectors with different values of non-reciprocity. The macroscopic fraction of boundary sites leads to a high IPR even for states of the Hermitian system. The absence of an extensive shift of boundary localised modes with non-reciprocity rules out the possibility of a skin effect in this class of models.
shows a jump at zero energy due to a large number of states with purely imaginary energy.
Upon adding non-reciprocal hopping, we notice a localisation at the outer epoch of the system, as shown in Fig. 9(a)-(b). The localisation increases with non-reciprocity and has an asymmetric profile due to the method of defining the unit cells, as discussed above. We use the inverse participation ratio (IPR) to study this localisation by measuring the fraction of wave-function probability at the boundary. The IPR for an eigenket \(\ket{\psi_{n}}\), is defined as
\[IPR=\frac{\sum_{\mathbf{r}\in\text{boundary}}|\langle\mathbf{r}|\psi_{n}\rangle|^{2}}{\sum_{\mathbf{r}}|\langle\mathbf{r}|\psi_{n}\rangle|^{2}}. \tag{13}\]
The numerator calculates the total occupation probability of boundary sites, normalised by the total probability density. Plotting the IPR for different values of non-reciprocity in Fig. 9(c) reveals that the non-reciprocity does not cause a skin effect in the Hamiltonian and is simply a perturbation to the density profile, enhanced by the microscopic probability density in the wave-function density profile. In the Hermitian limit under the Bloch ansatz itself, having a macroscopic number of sites at the boundary leads to states with a large IPR, as can be seen from the plot with \(\Omega=0.0\) in Fig. 9(c). So, we can rule out the possibility of a skin effect in this class of models, which is also consistent with the absence of a point gap in the periodic boundary condition energy spectrum.
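For reference, Eq. (13) amounts to a one-line computation once the boundary sites are identified. A minimal sketch follows; the 7040-site count matches the four-epoch construction above, but the boundary mask used here is an arbitrary stand-in for the true outer-epoch sites of the tessellation:

```python
import numpy as np

def boundary_ipr(psi, boundary_mask):
    """IPR of Eq. (13): probability weight of an eigenvector carried by the boundary sites."""
    prob = np.abs(psi) ** 2
    return prob[boundary_mask].sum() / prob.sum()

# Toy usage with the 7040-site lattice size used above; the last 3000 indices stand in
# (as an assumption) for the outer-epoch boundary sites of the real tessellation.
rng = np.random.default_rng(0)
n_sites = 7040
mask = np.zeros(n_sites, dtype=bool)
mask[-3000:] = True

psi = rng.normal(size=n_sites) + 1j * rng.normal(size=n_sites)   # a featureless test state
print("IPR of a delocalised state ~ boundary fraction:", round(boundary_ipr(psi, mask), 3))
```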
## VI Summary and outlook
In summary, we have proposed introducing non-Hermiticity in hyperbolic lattices to obtain a rich platform for exploring exceptional degeneracies. By means of a range of analytical and numerical tools, we demonstrated highly tunable exceptional points and contours in {10,5} tessellations in the presence of on-site gain and loss and non-reciprocal hopping. We used phase rigidity, energy scaling and vorticity to diagnose these exceptional structures. Further, using the {8,4} tessellation lattice as an example, we showed the appearance of higher-order exceptional contours by means of the recently proposed Newton polygon approach. We further examined the open boundary spectra in the Poincare disk and showed the localisation at the boundaries. Given the recent experimental realization of both hyperbolic lattices [7; 8] and non-Hermitian phases [25; 26; 27] in a plethora of platforms, we are hopeful that our predictions of exceptional degeneracies can be readily realized in state-of-the-art experimental setups. We hope our work motivates further theoretical and experimental
explorations of non-Hermitian hyperbolic matter.
_Note Added:_ During the final stages of this work, we came across a complementary preprint, which studies the skin effect in hyperbolic topological lattice models [41].
###### Acknowledgements.
We thank Adhip Agarwala and Vijay Shenoy for illuminating discussions. N.C. acknowledges a fellowship from the Kishore Vaigyanik Protsahan Yojana (KVPY). A.N. is supported by the Indian Institute of Science.
|
2303.07963 | RoCNet: 3D Robust Registration of Point-Clouds using Deep Learning | This paper introduces a new method for 3D point cloud registration based on
deep learning. The architecture is composed of three distinct blocs: (i) an
encoder composed of a convolutional graph-based descriptor that encodes the
immediate neighbourhood of each point and an attention mechanism that encodes
the variations of the surface normals. Such descriptors are refined by
highlighting attention between the points of the same set and then between the
points of the two sets. (ii) a matching process that estimates a matrix of
correspondences using the Sinkhorn algorithm. (iii) Finally, the rigid
transformation between the two point clouds is calculated by RANSAC using the
Kc best scores from the correspondence matrix. We conduct experiments on the
ModelNet40 dataset, and our proposed architecture shows very promising results,
outperforming state-of-the-art methods in most of the simulated configurations,
including partial overlap and data augmentation with Gaussian noise. | Karim Slimani, Brahim Tamadazte, Catherine Achard | 2023-03-14T15:07:51Z | http://arxiv.org/abs/2303.07963v2 | # RoCNet: 3D Robust Registration of Point-Clouds using Deep Learning
###### Abstract
This paper introduces a new method for 3D point cloud registration based on deep learning. The architecture is composed of three distinct blocks: (_i_) an encoder composed of a convolutional graph-based descriptor that encodes the immediate neighbourhood of each point and an attention mechanism that encodes the variations of the surface normals. Such descriptors are refined by highlighting attention between the points of the same set and then between the points of the two sets. (_ii_) a matching process that estimates a matrix of correspondences using the Sinkhorn algorithm. (_iii_) Finally, the rigid transformation between the two point clouds is calculated by RANSAC using the \(K^{c}\) best scores from the correspondence matrix. We conduct experiments on the ModelNet40 dataset, and our proposed architecture shows very promising results, outperforming state-of-the-art methods in most of the simulated configurations, including partial overlap and data augmentation with Gaussian noise.
Point clouds, Registration, Deep Learning, Attention Mechanisms, Pose Estimation
## I Introduction
Point cloud registration is a widespread problem and a key task in 3D pose estimation in robotics and computer vision, with applications in autonomous driving [1], simultaneous localization and mapping (SLAM) [2], etc. The registration process involves matching points between the input and target point clouds, eliminating outliers, and estimating the rigid transformation parameters that align one point cloud to the other. Traditional algorithms, such as Iterative Closest Point (ICP) [3], alternate between matching and aligning iterations. Recently, neural network-based techniques, such as [4; 5; 6], have been increasingly used to address these problems. These techniques typically encode each point and its neighbourhood through a learned descriptor, such as [7; 8], and use a transformer module [2; 5] to propagate local and global information between the two point sets. Point matching is often performed based on similarities in the descriptor space, and the transformation matrix can be estimated by integrating a differentiable Singular Value Decomposition (SVD) into the network. Alternatively, WsDesc [9] suggested tackling the matching problem by searching, for each point, its nearest neighbour in the target point cloud during the training phase. Furthermore, recent work [2] suggested optimizing the scoring matrix with the iterative Sinkhorn algorithm [10]. Therefore, the estimation of the rigid transformation can be achieved in two different ways: either end-to-end, by integrating a differentiable SVD [11] into the network as proposed in [5], or by applying a simple SVD or RANSAC based on feature matching.
Certainly, learning-based approaches have made significant progress and have helped overcome numerous limitations of iterative methods, such as convergence to a local minimum, dependence on a correct initialization, and difficulty in estimating large transformations. Additionally, most of these methods extract point-wise features from the neighbourhood of each point by applying learning-based descriptors only, which makes them more sensitive to noise and outliers. The methods that apply RANSAC to overcome this problem may need a large number of iterations (e.g., 50,000 iterations) to reach a correct estimate of the transformation.
In this paper, we propose a new architecture (Fig. 1), called RoCNet, which includes three main blocks: 1) a descriptor composed of a convolutional graph-based network that encodes the immediate neighbourhood of each point and an attention mechanism that encodes the variations of the surface normals, 2) a matching module that estimates a matrix of correspondences using the Sinkhorn algorithm, and 3) a RANSAC module which computes the rigid transformation using the \(K^{c}\) (e.g., 256) best matches with a limited number of iterations (e.g., 500 iterations). The proposed architecture is assessed on the ModelNet40 dataset [4] under both favourable and unfavourable conditions. We demonstrate that our method outperforms the related state-of-the-art algorithms, especially in unfavourable conditions, e.g., with noisy data and partial occlusions.
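For illustration, the Sinkhorn step used in the matching module amounts to alternately normalising the rows and columns of a score matrix. The snippet below is a generic, minimal dense variant (without the dustbin/outlier handling that practical implementations typically add) and is not the exact RoCNet implementation:

```python
import numpy as np

def sinkhorn(scores, n_iters=20, eps=0.1):
    """Alternating row/column normalisation of exp(scores/eps) into a soft assignment matrix."""
    log_P = scores / eps
    for _ in range(n_iters):
        log_P = log_P - np.log(np.exp(log_P).sum(axis=1, keepdims=True))   # normalise rows
        log_P = log_P - np.log(np.exp(log_P).sum(axis=0, keepdims=True))   # normalise columns
    return np.exp(log_P)

# Toy usage: raw similarity scores between M source and N target descriptors.
rng = np.random.default_rng(0)
S = rng.normal(size=(5, 6))
P = sinkhorn(S)
print(np.round(P.sum(axis=0), 3))   # column sums ~ 1 after the last normalisation
```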
## II Related Work
This section provides a review of the state-of-the-art regarding the main methods for 3D pose estimation and point cloud registration. We begin by introducing some interesting descriptors for 3D points cloud. Then, the main registration approaches are presented, which can be classified into three
Fig. 1: Overall concept of the proposed RoCNet method.
main categories: 1) iterative methods, 2) matching learning, and 3) transformation learning. For each category, descriptors can be handcrafted or learned.
### _3D Point Cloud Descriptors_
The iterative methods presented below use either the 3D point coordinates directly as input or handcrafted features such as Fast Point Feature Histograms (FPFH) [12]. Since the rise of deep learning (DL), several methods have been developed to learn 3D point cloud descriptors. For instance, Qi _et al._[13] proposed a neural architecture named PointNet describing unordered 3D points for tasks such as classification, segmentation, and registration. An extension named PointNet++ [8], which exploits metric space distances, was later proposed by the same authors. Wang _et al._[7] proposed the Dynamic Graph CNN (DGCNN) descriptor based on a module called EdgeConv that acts on graphs computed in each layer of the network. It captures semantic characteristics over potentially long distances, as shown in segmentation and classification tasks. This descriptor is also used for registration tasks, as reported in [4]. It is important to note that both previous descriptors are learnt on segmentation or classification tasks before being used, without new learning, on registration. In [2], the authors build a feature with two main components: a descriptor encoder that highlights handcrafted features obtained with FPFH and a positional encoder that highlights spatial properties of the point cloud. A multiplex dynamic graph attention network is added to reinforce the matching power of the descriptor. This descriptor is learnt jointly with the matching process, in an end-to-end way.
### _Iterative Methods_
The ICP [3] is probably the most popular method to address a point cloud registration problem. Given two sets of 3D points, the purpose of the algorithm is to minimise the Euclidean distance between the points. At each iteration, a mapping of the two sets of points and the computation of the 3D rigid transformation using an SVD are performed. This procedure is repeated until convergence. In RANSAC [14], the two point clouds are randomly split into subsets on which a transformation is estimated. The final transformation is chosen among them using a criterion such as the weighted error on the set of points [15] or by selecting the transformation generating the largest number of inliers. Both methods have been associated with neural-network-based learned descriptors, as in [5; 16] for ICP and in [9] for RANSAC. A more recent approach called Fast Global Registration (FGR) [17] operates on candidate matches that cover the surfaces. The surface alignment is defined with an objective optimized through an iterative procedure.
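For reference, the matching/alignment alternation described above can be sketched in a few lines of NumPy; the closed-form SVD (Kabsch) step is also the building block reused by RANSAC-style estimators. This is a didactic sketch with brute-force nearest-neighbour search, not an optimised implementation:

```python
import numpy as np

def best_rigid_transform(X, Y):
    """Closed-form (SVD / Kabsch) estimate of R, t minimising ||R X_i + t - Y_i|| over matched pairs."""
    cX, cY = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - cX).T @ (Y - cY))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cY - R @ cX

def icp(X, Y, n_iters=30):
    """Bare-bones ICP: alternate brute-force nearest-neighbour matching and SVD alignment."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        Xt = X @ R.T + t
        nn = np.argmin(((Xt[:, None, :] - Y[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(X, Y[nn])
    return R, t
```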
### _Methods based on Matching Learning_
Recent registration methods investigate DL architectures to match the points; standard methods like RANSAC or SVD can then be used to estimate the rigid transformation. For instance, Predator [18] is trained with three different weighted matching losses to be more robust to low partial overlap between the input point clouds. Alternatively, 3DFeatNet [19] trains its architecture to detect key points and predict discriminative descriptors in a weakly supervised manner using the triplet loss [20], while D3Feat [21] combines two losses, one for the descriptor and the other for the detector. All these methods use a RANSAC module on feature matching to estimate the transformation parameters. The MDGAT method [2] proposes to learn the matching using a new loss inspired by the triplet loss. Roufosse _et al._[22] proposed an unsupervised matching approach by optimizing the global structural properties of functional maps [23], such as their bijectivity or approximate isometry. Such properties allow the creation of a loss function that does not require the knowledge of the ground truth during the learning process.
### _Methods based on Transformation Learning_
In the Deep Closest Point (DCP) architecture [5], the descriptor is first performed using DGCNN and an attention-based module is introduced into a transformer. Then a soft SVD, where soft matching is used, allows computing the rigid transformation between both sets of points clouds. The training loss function is defined from this estimated rigid transformation that is compared to the ground truth one. In GeoTransformer [24], the input point clouds are down-sampled in several super-points (a subset of points) that are described using the Geometric Transformer which encodes intra-point-cloud and inter-point-cloud geometric structures. A matching module extracts super-point correspondences, each one being used to compute a soft SVD (from the subset of points corresponding to the super point) and estimate the transformation. The global transformation is the one that admits the most inlier matches over the transformation obtained for each super point. The loss function is composed of two terms: one measuring the alignment quality of super points and the other one measuring the alignment quality of the whole set of points. Wang _et al._ proposes PRNET [4] that, similarly to ICP, is designed to be applied iteratively to estimate the transformation between points clouds. The matching is performed using an approximately differentiable version of Gumbel-Softmax and the transformation is obtained using an SVD. Another approach was proposed in [9] which uses a differentiable nearest neighbour search algorithm in the descriptor space to match the points, and then proposes to relax the registration problem and seeks to estimate an affine transformation matrix computed by a least squares optimisation. An end-to-end architecture [6] includes a descriptor based on PointNet [13] and a neural network version of the Lucas \(\&\) Kanade algorithm that allows incrementally updating the transformation.
## III Method
### _Problem Statement_
Let us start by defining the common 3D point cloud registration problem. Consider two point clouds \(\mathbf{X}\) and \(\mathbf{Y}\) such that \(\mathbf{X}=\{\mathbf{x}_{1},...,\mathbf{x}_{i},...,\mathbf{x}_{M}\}\subset\mathbb{R}^{3\times M}\) and \(\mathbf{Y}=\{\mathbf{y}_{1},...,\mathbf{y}_{j},...,\mathbf{y}_{N}\}\subset\mathbb{R}^{3\times N}\). It is assumed that the two sets at least partially overlap, so that there are \(K\) pairs of
matches between \(\mathbf{X}\) and \(\mathbf{Y}\), with \(K\leq\min(M,N)\). The two subsets containing the matching points in the first and second point clouds are defined by \(\bar{\mathbf{X}}\subset\mathbb{R}^{3\times K}\) and \(\bar{\mathbf{Y}}\subset\mathbb{R}^{3\times K}\), respectively. Note that the set \(\bar{\mathbf{Y}}\) is obtained by applying a rotation \(\mathbf{R}\in SO(3)\) and a translation \(\mathbf{t}\in\mathbb{R}^{3}\) to the set \(\bar{\mathbf{X}}\). The rotation matrix \(\mathbf{R}\) and the translation vector \(\mathbf{t}\) define the \(4\times 4\) rigid transformation we are looking for, denoted \(\mathbf{T}\).
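As an illustration of this setup (not part of the original pipeline), the following minimal NumPy sketch builds a synthetic registration pair by applying a random rigid transformation and a permutation to a source cloud; the function name `random_rigid_transform` and the sampling ranges are ours, chosen to mirror the ModelNet40-style protocol described in Sec. IV.

```python
import numpy as np

def random_rigid_transform(max_angle_deg=45.0, max_trans=0.5, rng=np.random):
    """Sample a rotation (Euler angles up to 45 deg per axis) and a translation (up to 0.5 m)."""
    ax, ay, az = np.deg2rad(rng.uniform(0.0, max_angle_deg, size=3))
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    t = rng.uniform(0.0, max_trans, size=(3, 1))
    return R, t

# X: 3 x M source cloud; Y is a rotated, translated and permuted copy, as in the problem statement.
X = np.random.randn(3, 1024)
R, t = random_rigid_transform()
Y = (R @ X + t)[:, np.random.permutation(1024)]
```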
### _Descriptor_
One of the most fundamental components in a point cloud registration problem is the relevance and quality of the descriptor used to encode the points. Therefore, we propose a new descriptor that projects the initial sets of points \(\mathbf{X}\) and \(\mathbf{Y}\) into a new basis of higher dimension, de facto more discriminating than the initial spatial representation, and as invariant as possible to rotations and translations. It combines a geometrical-based descriptor and a normal-based one, followed by an attention mechanism.
#### III-B1 Geometrical-based descriptor
Different types of descriptors that learn local geometrical properties around each point have been reported in the literature, such as PointNet [13], PointNet++ [8] or the DGCNN descriptor [7]. We integrated DGCNN as part of our descriptor because it better captures local geometric features of point clouds while maintaining permutation invariance. It mainly consists of _EdgeConv_ convolution layers, where the points are nodes connected by arcs to their \(k\) nearest neighbours in the encoding space. The resulting graphs express the local geometric structure surrounding each point, and the information is then dynamically propagated to a higher level (global encoding). Let us denote \(\mathbf{f}_{i}^{X}\) the extracted feature vector of dimension \(d\) for point \(\mathbf{x}_{i}\).
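To make the mechanism concrete, a minimal NumPy sketch of an EdgeConv-like layer is given below; the helper names and the single linear layer are illustrative simplifications of the multi-layer DGCNN actually used.

```python
import numpy as np

def knn_indices(X, k):
    """Indices of the k nearest neighbours of each point (X: M x 3), excluding the point itself."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]

def edge_conv(F, idx, W):
    """EdgeConv-style layer: edge features [f_i, f_j - f_i] -> linear map + ReLU -> max over neighbours.
    F: M x d point features, idx: M x k neighbour indices, W: 2d x d_out weight matrix."""
    fi = np.repeat(F[:, None, :], idx.shape[1], axis=1)   # M x k x d
    fj = F[idx]                                            # M x k x d
    edge = np.concatenate([fi, fj - fi], axis=-1)          # M x k x 2d
    return np.maximum(edge @ W, 0.0).max(axis=1)           # M x d_out
```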
#### III-B2 Normal-based descriptor
The main idea of this descriptor is to better encode the surface around each point using the variation of the normals in the neighbourhood: on a flat surface, there is no variation of the normals; along a ridge, the normals vary in only one direction; whereas on a summit, the normals vary in all directions. Thus, the variation of the angle of the normals in a neighbourhood is informative about the type of surface.
The normals are estimated using Principal Component Analysis (PCA). For each point \(\mathbf{x}_{i}\in\mathbf{X}\), a local neighbourhood subset of points \(S_{i}=\left\{\mathbf{x}_{j}\,/\,\left\|\mathbf{x}_{j}-\mathbf{x}_{i}\right\|\leq r\right\}\) is defined while limiting the size of the set with \(|S_{i}|<K_{nn}\), where \(r\) is the radius of a sphere centred on \(\mathbf{x}_{i}\) and \(K_{nn}\) the maximum number of points included in the set \(S_{i}\). The eigenvalue decomposition of the covariance matrix \(Cov(S_{i})\) allows defining the normal \(\mathbf{n}_{i}\) as the eigenvector associated with the smallest eigenvalue. The covariance matrix \(Cov(S_{i})\) is expressed as follows:
\[Cov(S_{i})=\frac{1}{|S_{i}|}\sum_{\mathbf{x}_{j}\in S_{i}}(\mathbf{x}_{j}-\mathbf{x}_{i}) (\mathbf{x}_{j}-\mathbf{x}_{i})^{\top} \tag{1}\]
where \(|S_{i}|\) represents the number of points in \(S_{i}\).
Since the PCA does not determine the orientation of the normal vector, which can point in either direction, we address the sign ambiguity by introducing a new vector \(\mathbf{z}_{i}\). It is collinear to \(\mathbf{n}_{i}\) and is defined so that it points towards the side of the surface with higher point density, i.e., the normal vector should point away from sparse areas and towards denser surface areas. Similar to [9], we solve this ambiguity thanks to the following rule:
\[\mathbf{z}_{i}=\begin{cases}\mathbf{n}_{i},&\text{if}\quad\sum\limits_{\mathbf{x}_{j}\in S _{i}}\mathbf{n}_{i}^{T}\ (\mathbf{x}_{i}-\mathbf{x}_{j})\geq 0\\ -\mathbf{n}_{i},&\text{otherwise}\end{cases} \tag{2}\]
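The normal estimation and sign disambiguation of Eqs. (1)-(2) can be sketched in NumPy as follows; the parameter values `r` and `k_nn` are placeholders and not the values used in the experiments.

```python
import numpy as np

def estimate_oriented_normal(X, i, r=0.1, k_nn=30):
    """PCA normal of point x_i over the neighbourhood S_i, oriented with the rule of Eq. (2)."""
    xi = X[i]
    dist = np.linalg.norm(X - xi, axis=1)
    neigh = np.argsort(dist)[:k_nn]
    neigh = neigh[dist[neigh] <= r]
    S = X[neigh]
    cov = (S - xi).T @ (S - xi) / max(len(S), 1)    # Eq. (1)
    _, eigvec = np.linalg.eigh(cov)
    n = eigvec[:, 0]                                 # eigenvector of the smallest eigenvalue
    if np.sum((xi - S) @ n) < 0:                     # Eq. (2): flip towards the denser side
        n = -n
    return n
```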
Finally, we build the final encoding based on [24] and [25] using sinusoidal functions of different frequencies. Knowing the angle between the normals of two points \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) noted \(\angle(\mathbf{z}_{i},\mathbf{z}_{j})\), the vector \(\mathbf{g}_{\mathbf{x}_{i},\mathbf{x}_{j}}\) encoding the normals is given by:
\[\begin{cases}\mathbf{g}_{\mathbf{x}_{i},\mathbf{x}_{j}}^{2\,ind}=\sin\left(\frac{\angle(\mathbf{z}_{i},\mathbf{z}_{j})}{\tau\times 10000^{2\,ind/d}}\right)\\ \mathbf{g}_{\mathbf{x}_{i},\mathbf{x}_{j}}^{2\,ind+1}=\cos\left(\frac{\angle(\mathbf{z}_{i},\mathbf{z}_{j})}{\tau\times 10000^{2\,ind/d}}\right)\end{cases} \tag{3}\]
where \(ind\) is the current index of \(\mathbf{g}_{\mathbf{x}_{i},\mathbf{x}_{j}}\), \(\tau\) a normalisation coefficient and \(d\) the dimension of the descriptor \(\mathbf{g}_{\mathbf{x}_{i},\mathbf{x}_{j}}\), fixed to the same size as the geometrical-based DGCNN descriptor. A fully connected layer is then applied to \(\mathbf{g}_{\mathbf{x}_{i},\mathbf{x}_{j}}\) to obtain the final embedding
\[\mathbf{e}_{i,j}^{X}=\mathbf{g}_{\mathbf{x}_{i},\mathbf{x}_{j}}\mathbf{W}_{E}^{\text{s}} \tag{4}\]
where \(\mathbf{W}_{E}^{\text{s}}\in\mathbb{R}^{d\times d}\) is a learned projection matrix.
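A possible NumPy sketch of the normal-angle embedding of Eqs. (3)-(4) is shown below; the \(10000^{2\,ind/d}\) frequency scaling is our reading of the transformer-style encoding of [24; 25], and the learned matrix \(\mathbf{W}_{E}\) is drawn at random here purely for illustration.

```python
import numpy as np

def normal_angle_embedding(z_i, z_j, d=96, tau=1.0):
    """Sinusoidal embedding of the angle between two oriented normals (Eq. (3))."""
    ang = np.arccos(np.clip(float(z_i @ z_j), -1.0, 1.0))
    ind = np.arange(d // 2)
    freq = tau * 10000.0 ** (2 * ind / d)
    g = np.empty(d)
    g[0::2] = np.sin(ang / freq)
    g[1::2] = np.cos(ang / freq)
    return g

W_E = np.random.randn(96, 96) * 0.01   # stands in for the learned projection of Eq. (4)
e_ij = normal_angle_embedding(np.array([0., 0., 1.]), np.array([0., 1., 0.])) @ W_E
```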
#### III-B3 Attention mechanism
A key point of recent point cloud descriptors is the introduction of an attention mechanism that dynamically highlights some features. SuperGlue [26] uses a module based on attention graphs that alternately stacks 'self-attention' and 'cross-attention' layers. The former links all the nodes of a point cloud to each other, while the latter links each point of set \(\mathbf{X}\) to all points of set \(\mathbf{Y}\). Contrary to SuperGlue [26] or MDGAT [2], which compute the attention weights on the encoding vectors, some methods propose adding information on the local inter-point geometry at the input of the mechanism. For instance, [27] associates the 3D coordinates of each point with the descriptor, while GeoTransformer [24] proposes to use the distances and angles between each point and its \(k\) nearest neighbours. Alternatively, in our approach, we use four attention heads with geometric self-attention inside each set \(\mathbf{X}\) and \(\mathbf{Y}\), integrating the associated normal embeddings \(\mathbf{e}^{X}\) and \(\mathbf{e}^{Y}\) respectively, followed by cross-attention between the two sets of points, alternating between them \(L\) times.
#### III-B4 Self-attention
This type of layer predicts an attention-based feature \(\bar{\mathbf{f}}_{i}\) for each point of a point cloud (\(\mathbf{X}\) or \(\mathbf{Y}\)), paying attention to all the other points of the same cloud. In the following, the computation is detailed for a point \(\mathbf{x}_{i}\in\mathbf{X}\); the same applies to all the points in \(\mathbf{X}\) and \(\mathbf{Y}\). An attention weight is obtained for each query/key pair:
\[\alpha_{ij}^{X}=softmax\!\left(\frac{(\mathbf{f}_{i}^{X}\mathbf{W}_{Q}^{\text{s}})(\mathbf{f}_{j}^{X}\mathbf{W}_{K}^{\text{s}}+\mathbf{e}_{i,j}^{X}\mathbf{W}_{R}^{\text{s}})^{\top}}{\sqrt{d}}\right) \tag{5}\]
where \(\mathbf{W}_{Q}^{\text{s}}\), \(\mathbf{W}_{K}^{\text{s}}\) and \(\mathbf{W}_{R}^{\text{s}}\in\mathbb{R}^{d\times d}\) are the learnt projection matrices for queries, keys and normal-based embeddings, \(d\) is the dimension of the features \(\mathbf{f}_{i}^{X}\) and \(\mathbf{e}_{i,j}^{X}\). These weights are used to rate which elements we have to pay attention to, and to obtain the final self-attention-based feature \(\bar{\mathbf{f}}_{i}^{X}\):
\[\bar{\mathbf{f}}_{i}^{X}=\sum_{j=1}^{|\mathbf{X}|}\alpha_{ij}^{X}\mathbf{v}_{j} \tag{6}\]
with,
\[\mathbf{v}_{j}=\mathbf{f}_{j}^{X}\mathbf{W}_{V}^{s} \tag{7}\]
where \(\mathbf{W}_{V}^{s}\in\mathbb{R}^{d\times d}\) is the learnt projection matrix for values.
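The geometric self-attention of Eqs. (5)-(7) can be summarised, for a single head, by the following NumPy sketch (the multi-head and layer-stacking logic of the full architecture is omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def geometric_self_attention(F, E, Wq, Wk, Wv, Wr):
    """Single-head self-attention with the normal-embedding bias of Eq. (5).
    F: M x d point features, E: M x M x d pairwise normal embeddings, W*: d x d matrices."""
    d = F.shape[1]
    Q, K, V = F @ Wq, F @ Wk, F @ Wv
    scores = (Q[:, None, :] * (K[None, :, :] + E @ Wr)).sum(-1) / np.sqrt(d)   # Eq. (5)
    A = softmax(scores, axis=1)
    return A @ V                                                               # Eqs. (6)-(7)
```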
#### III-B5 Cross-attention
A _cross-attention_ layer is used to propagate the local information between the two previously obtained representations \(\bar{\mathbf{f}}_{i}^{X}\) and \(\bar{\mathbf{f}}_{j}^{Y}\) of \(\mathbf{x}_{i}\) and \(\mathbf{y}_{j}\), belonging respectively to point clouds \(\mathbf{X}\) and \(\mathbf{Y}\). Formally, it works similarly to the _self-attention_ layer, except for the estimation of the attention keys, which now use points of the second point cloud. The final encoding of any point \(\mathbf{x}_{i}\) (or \(\mathbf{y}_{j}\)) is given by:
\[\mathbf{h}_{i}^{X}=\sum_{j=1}^{|\mathbf{Y}|}softmax_{j}\!\left(\frac{(\bar{\mathbf{f}}_{i}^{X}\mathbf{W}_{Q}^{c})(\bar{\mathbf{f}}_{j}^{Y}\mathbf{W}_{K}^{c})^{T}}{\sqrt{d}}\right)\left(\bar{\mathbf{f}}_{j}^{Y}\mathbf{W}_{V}^{c}\right) \tag{8}\]
where \(\mathbf{W}_{Q}^{c}\), \(\mathbf{W}_{K}^{c}\) and \(\mathbf{W}_{V}^{c}\in\mathbb{R}^{d\times d}\) are the learnt projection matrices for queries, keys and values in the cross-attention layers.
### _Point Matching_
The second step of the proposed algorithm is the matching procedure. We first estimate a score matrix \(\mathbf{C}\in\mathbb{R}^{M\times N}\) between each point \(x_{i}\in\mathbf{X}\) and \(y_{j}\in\mathbf{Y}\):
\[\mathbf{C}_{i,j}=\mathbf{h}_{i}^{X}{}^{\top}\mathbf{h}_{j}^{Y} \tag{9}\]
where \(\mathbf{h}_{i}^{X}\) and \(\mathbf{h}_{j}^{Y}\) are the final encodings of the points \(\mathbf{x}_{i}\) and \(\mathbf{y}_{j}\) defined previously. To build a matrix of correspondence probabilities \(\bar{\mathbf{C}}\), we first augment the dimensions of \(\mathbf{C}\) to \(M+1\) and \(N+1\), such that non-matched points are explicitly assigned to the last row and column. We then use the differentiable _Sinkhorn Algorithm_ [10], which is widely used in optimal transport and graph-matching problems.
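For illustration, a simplified log-domain Sinkhorn normalisation with a dustbin row and column may look as follows; the exact marginal weights assigned to the dustbins in the full algorithm are omitted in this sketch.

```python
import numpy as np

def sinkhorn_with_dustbins(C, n_iters=50, dustbin_score=0.0):
    """Augment the M x N score matrix (Eq. (9)) with a dustbin row/column for unmatched points,
    then alternate row and column normalisations in log space (simplified Sinkhorn)."""
    M, N = C.shape
    logP = np.full((M + 1, N + 1), dustbin_score, dtype=float)
    logP[:M, :N] = C
    for _ in range(n_iters):
        logP -= np.log(np.exp(logP).sum(axis=1, keepdims=True))   # normalise rows
        logP -= np.log(np.exp(logP).sum(axis=0, keepdims=True))   # normalise columns
    return np.exp(logP)                                            # correspondence probabilities C_bar
```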
As all the previous steps are differentiable, the weights of the networks can be learnt by introducing a loss function. To do so, we follow [2] and adopt the gap loss function (10), which enlarges the difference in assignment scores between true matches and wrong matches. It is expressed as follows:
\[\begin{split}& L_{Gap}=\sum_{i=1}^{M}\log\Bigg{(}\sum_{n=1}^{N+1}[ \max((-\log\bar{\mathbf{C}}_{i,\bar{i}}+\log\bar{\mathbf{C}}_{i,n}+\alpha),0) ]+1\Bigg{)}\\ &+\sum_{j=1}^{N}\log\Bigg{(}\sum_{n=1}^{M+1}[\max((-\log\bar{ \mathbf{C}}_{j,\bar{j}}+\log\bar{\mathbf{C}}_{n,j}+\alpha),0)]+1\Bigg{)}\end{split} \tag{10}\]
where \(\alpha\) is a positive scalar having a value of 0.5, \(\bar{\mathbf{C}}_{i,\bar{i}}\) and \(\bar{\mathbf{C}}_{j,\bar{j}}\) are the scores for the ground truth true matches of the points \(\mathbf{x}_{i}\) and \(\mathbf{y}_{j}\), respectively.
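A direct NumPy transcription of the gap loss (Eq. (10)) over a list of ground-truth correspondences could read as follows; the small constant added inside the logarithm is only there for numerical safety, and the ground-truth list is assumed to map unmatched points to the dustbin index.

```python
import numpy as np

def gap_loss(C_bar, gt_matches, alpha=0.5):
    """Gap loss of Eq. (10). C_bar: (M+1) x (N+1) assignment probabilities,
    gt_matches: list of (i, j) ground-truth pairs."""
    logC = np.log(C_bar + 1e-12)
    loss = 0.0
    for i, j in gt_matches:
        row_gap = np.maximum(-logC[i, j] + logC[i, :] + alpha, 0.0)   # compare against row candidates
        col_gap = np.maximum(-logC[i, j] + logC[:, j] + alpha, 0.0)   # compare against column candidates
        loss += np.log(row_gap.sum() + 1.0) + np.log(col_gap.sum() + 1.0)
    return loss
```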
### _Pose Estimation_
In the evaluation phase, we build a hard assignment binary matrix \(\mathbf{A}\) thanks to the following algorithm:
\[\mathbf{A}^{\mathbf{1}}_{i,j}=\begin{cases}1&\text{if }\bar{\mathbf{C}}_{i,j}= \max_{n}(\bar{\mathbf{C}}_{i,n})\\ 0&\text{otherwise,}\end{cases} \tag{11}\]
\[\mathbf{A}^{\mathbf{2}}_{i,j}=\begin{cases}1&\text{if }\bar{\mathbf{C}}_{i,j}=\max_{n}(\bar{\mathbf{C}}_{n,j})\\ 0&\text{otherwise,}\end{cases} \tag{12}\]
\[\mathbf{A}_{i,j}=\mathbf{A}^{\mathbf{1}}_{i,j}\times\mathbf{A}^{\mathbf{2}}_{i,j} \tag{13}\]
The matrix \(\mathbf{A}\) gives us the two final sets of matched points \(\bar{\mathbf{X}}\subset\mathbb{R}^{3\times K}\) and \(\bar{\mathbf{Y}}\subset\mathbb{R}^{3\times K}\) by re-indexing the original point clouds \(\mathbf{X}\) and \(\mathbf{Y}\) with the row and column indices of the non-zero values of \(\mathbf{A}\), respectively. An example of a performed matching is depicted in Fig. 3. Once the sets of matched points are built, different techniques can be used to determine the rigid transformation. A classical SVD applied to the cross-covariance matrix between the centred subsets \(\bar{\mathbf{X}}\) and \(\bar{\mathbf{Y}}\) is used in MDGAT [2], while DCP [5] suggested a differentiable and soft SVD where the weights of each point are determined by applying a _Softmax_ to the score matrix \(\mathbf{C}\). An alternative is to apply a RANSAC technique based on feature matching, as reported in [18; 9]. In our method, we use RANSAC based on our predicted correspondences to reduce the computational cost. Moreover, instead of considering all the \(K\) matched points, we only use the \(K^{c}\) most relevant ones, which allows us to filter out outliers before the first iteration, so that the transformation is estimated in at most 500 iterations.
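The hard assignment of Eqs. (11)-(13) and the selection of the \(K^{c}\) best correspondences fed to RANSAC can be sketched as follows (the RANSAC step itself is not reproduced here):

```python
import numpy as np

def mutual_matches(C_bar, k_best=256):
    """Keep pairs (i, j) that are simultaneously row and column maxima of C_bar (Eqs. (11)-(13)),
    then retain the k_best highest-scoring ones as RANSAC input."""
    row_best = C_bar.argmax(axis=1)                 # best column for each row
    col_best = C_bar.argmax(axis=0)                 # best row for each column
    pairs = [(i, int(j)) for i, j in enumerate(row_best) if col_best[j] == i]
    pairs.sort(key=lambda ij: C_bar[ij[0], ij[1]], reverse=True)
    return pairs[:k_best]
```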
## IV Experiments
### _Dataset and Parametrisation_
To assess RoCNet, we opted for the ModelNet40 dataset [4]. It is a synthetic database representing 40 classes of objects designed using Computer-Aided Design (CAD) software. The database consists of 12,311 3D point clouds divided into 9,843 sets for training and 2,468 for testing. Each of these point clouds is scaled to fit inside a sphere of radius \(r=1\) m. For each initial point cloud \(\mathbf{X}\), a new point cloud \(\mathbf{Y}\) is created by applying a random rigid transformation, with a rotation ranging from 0\({}^{\circ}\) to 45\({}^{\circ}\) around each axis and a translation ranging from 0 cm to 50 cm in each direction, and finally a random permutation of the points is performed. In the end, a point cloud of 1024 points is generated for each object. In addition, to evaluate the robustness of registration or pose estimation methods, the initial point clouds are reduced by removing a range of points, emulating partial occlusions.
The following lists the different configurations used to assess our method in terms of accuracy, robustness and generalization:
* _Partial overlap_: To simulate partial occlusions, 768 points are subsampled from the 1024 points of both the first and the second point clouds \(\mathbf{X}\) and \(\mathbf{Y}\), respectively. Specifically, between 512 and 768 points from \(\mathbf{X}\) have a match in \(\mathbf{Y}\).
* _Noisy data_: Clipped Gaussian noise with a range of \([-0.05,0.05]\), a mean of \(\mu=0\), and a variance of 0.01 is added to each point.
* _Partial overlap and noisy data_: The same type of noise is also added to the subsampled point clouds.
* _Unseen objects_: To test the generalization of the registration and pose estimation methods, the model is trained
only on the first 20 classes of objects and tested on the remaining 20 categories.
RoCNet is trained for 30 epochs on the clean data and for 80 epochs on the noisy data with a learning rate of \(10^{-4}\) in both cases. The number of predicted correspondences used as input in RANSAC is \(K^{c}=256\). The parameters \(d\) and \(L\) are set to \(96\) and \(6\) respectively.
### _Metrics_
To benchmark our RoCNet architecture against state-of-the-art methods, we opted for two metrics widely used in the literature, the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE), which measure the difference in rotation and translation between the estimated transformation and the provided ground truth. The RMSE and MAE are respectively defined as follows:
\[\text{RMSE}(\mathbf{p})=\Big{(}\frac{1}{M}\sum_{i=1}^{M}\left\|\mathbf{p}_{i}- \mathbf{p}_{i}^{GT}\right\|^{2}\Big{)}^{\frac{1}{2}} \tag{14}\]
\[\text{MAE}(\mathbf{p})=\frac{1}{M}\sum_{i=1}^{M}\left|\mathbf{p}_{i}-\mathbf{ p}_{i}^{GT}\right| \tag{15}\]
where \(M\) is the number of pairs of point clouds in the test set, and \(\mathbf{p}_{i}\) and \(\mathbf{p}_{i}^{GT}\) are the estimated rotation (Euler angles) or translation and the corresponding ground truth, respectively.
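A small helper reproducing these metrics is sketched below; it assumes rotations expressed as Euler angles in degrees and translations in metres, averaged per axis as in the reported numbers.

```python
import numpy as np

def rmse_mae(pred, gt):
    """RMSE and MAE of Eqs. (14)-(15), averaged over the test set and the three axes.
    pred, gt: M x 3 arrays of Euler angles (degrees) or translation components (metres)."""
    err = pred - gt
    return np.sqrt((err ** 2).mean()), np.abs(err).mean()
```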
### _Results_
RoCNet is assessed in different configurations (i.e., favourable and unfavourable) and compared to the main DL-based methods of the state of the art as well as to the traditional ICP approach. The recent VRNet [28] has nicely summarised the performance of most of the related methods reported in the literature. We use these performances as a basis for comparison, to which we have added those of recently published methods such as WSDesc [9] and R-PointHop [16]. Note that in all the tables, \(\mathbf{R}\) is given in degrees and \(\mathbf{t}\) in meters, while the best results are highlighted in bold and the second ones are underlined.
#### IV-C1 Model trained on all classes with clean data
The first configuration consists of the evaluation of RoCNet when the model is trained across all the provided 40 classes of objects. All the data are clean and do not involve occlusions.
Table I provides the results. It can be highlighted that our method outperforms the other methods in rotation, with an improvement of around ten per cent for the RMSE and MAE _versus_ the second best method, VRNet [28]. However, VRNet remains the best in translation, although the difference is slight compared to RoCNet, specifically for MAE(\(\mathbf{t}\)) where RoCNet is second. From a purely numerical point of view, RoCNet provides an RMSE(\(\mathbf{R}\)) of 0.082\({}^{\circ}\) (respectively, an MAE(\(\mathbf{R}\)) of 0.011\({}^{\circ}\)) (mean of the three Euler angle rotations) and an RMSE(\(\mathbf{t}\)) of \(47\times 10^{-5}\) m (respectively, an MAE(\(\mathbf{t}\)) of \(8\times 10^{-5}\) m) (mean of the three translations along the \(x\), \(y\), and \(z\) axes).
#### IV-C2 Model trained on all classes with noisy data
Table II depicts the results of RoCNet under the same conditions as the first assessment but with Gaussian noise added to the initial data. RoCNet achieves the best performance on three of the four metrics, with an improvement of \(25\%\), \(45\%\) and \(56\%\) for RMSE(\(\mathbf{R}\)), MAE(\(\mathbf{R}\)) and RMSE(\(\mathbf{t}\)) respectively, and ranks second in MAE(\(\mathbf{t}\)) behind R-PointHop. From a numerical point of view, RoCNet provides an RMSE(\(\mathbf{R}\)) of 1.92\({}^{\circ}\) (respectively, an MAE(\(\mathbf{R}\)) of 0.55\({}^{\circ}\)) and an RMSE(\(\mathbf{t}\)) of \(2.6\times 10^{-3}\) m (respectively, an MAE(\(\mathbf{t}\)) of \(1.8\times 10^{-3}\) m).
#### IV-C3 Model trained on half classes with clean data
The third configuration consists of the evaluation of the generalisation capacity of RoCNet and the related methods when the models are trained on only half the data (i.e., 20 classes), and tested on the 20 remaining classes. The obtained performances are reported in Table III. RoCNet outperforms the state-of-the-art
\begin{table}
\begin{tabular}{l|l l|l l} \hline Method & RMSE(\(\mathbf{R}\)) & MAE(\(\mathbf{R}\)) & RMSE(\(\mathbf{t}\)) & MAE(\(\mathbf{t}\)) \\ \hline ICP'92 [3] & 12.28 & 4.613 & 0.04774 & 0.00228 \\ PTLK'19 [6] & 13.75 & 3.893 & 0.01990 & 0.00445 \\ DCP-V2'19 [5] & 1.090 & 0.752 & 0.00172 & 0.00117 \\ PRNET'19 [4] & 1.722 & 0.665 & 0.00637 & 0.00465 \\ R-PointHop'22 [16] & 0.340 & 0.240 & 0.00037 & 0.00029 \\ VRNet'22 [28] & 0.091 & 0.012 & **0.00029** & **0.00005** \\ \hline
**Ours** & **0.082** & **0.011** & 0.00047 & 0.00008 \\ \hline \end{tabular}
\end{table} TABLE I: Performances of RoCNet using all classes without noise and occlusions.
Fig. 3: Example of a performed 3D matching of point clouds in different configurations: (a) clean data, (b) partial overlap, and (c) noisy data and partial overlap. The green lines show the correct matches and the red lines show the wrong ones.
Fig. 2: Overview of the proposed RoCNet architecture.
method in one metric, the MAE(**R**), and ranks second for both RMSE(**R**) and MAE(**t**). Overall, the performance of our method is similar to that of VRNet and R-PointHop.
#### IV-C4 Model trained on all the classes with clean data and under partial occlusions
In this assessment, we evaluate the behaviour of our method and the other methods when only a portion of the points is shared by the two point clouds to be aligned. This simulates, for example, partial occlusions. From Table IV, it can be highlighted that RoCNet significantly outperforms the other methods on all metrics. Overall, our method reduces the registration error by roughly half in comparison with the second-ranked methods, i.e., VRNet and R-PointHop.
#### IV-C5 Model trained on all the classes with noisy data and under partial occlusions
The last configuration concerns the evaluation of the proposed method under partial occlusions (partial overlap) using noisy data. As can be seen in Table V, RoCNet outperforms the other methods on all metrics, both in rotation and translation. RoCNet significantly reduces the registration error, to between one-quarter and two-thirds of that of the second-ranked method, i.e., WsDesc, and even more in comparison to VRNet. This can be explained by the robustness of RoCNet to partial occlusions, noise, or both at the same time.
Finally, the performance of RoCNet and its ranking within a wider state of the art, including the eleven best methods, can be seen in Fig. 5. It can be highlighted that our method outperforms all the methods when simultaneously considering performance in rotation and translation, both on clean data (Fig. 5(a)) and on noisy data (Fig. 5(b)).
Figure 4 depicts some examples of aligned point clouds (objects) available in the ModelNet40 dataset. The first row shows the initial positions of the point clouds \(\mathbf{X}\) and \(\mathbf{Y}\) to be aligned, the second row shows the performed registrations and the third shows the ground-truth ones.
Furthermore, to visually assess the robustness of the proposed method, we performed different
\begin{table}
\begin{tabular}{l|l l|l l} \hline Method & RMSE(R) & MAE(R) & RMSE(**t**) & MAE(**t**) \\ \hline ICP'92 [3] & 33.067 & 25.564 & 0.294 & 0.250 \\ PTLK'19 [6] & 19.939 & 9.076 & 0.057 & 0.032 \\ DCP-V2'19 [5] & 06.883 & 4.534 & 0.028 & 0.021 \\ PRNET'19 [4] & 04.323 & 2.051 & 0.017 & 0.012 \\ VRNet'22 [28] & 03.615 & 1.637 & 0.010 & 0.006 \\ WsDesc'22 [9] & 03.500 & 0.759 & 0.006 & 0.004 \\ \hline
**Ours** & **01.810** & **0.620** & **0.004** & **0.003** \\ \hline \end{tabular}
\end{table} TABLE V: Performances of RoCNet when the model is trained in all the classes with noisy data and partial occlusions.
\begin{table}
\begin{tabular}{l|l l|l l} \hline Method & RMSE(R) & MAE(R) & RMSE(**t**) & MAE(**t**) \\ \hline ICP'92 [3] & 11.971 & 4.497 & 0.04832 & 0.00433 \\ PTLK'19 [6] & 15.692 & 3.992 & 0.02395 & 0.00563 \\ DCP-V2'19 [5] & 08.417 & 5.685 & 0.03183 & 0.02337 \\ PRNET'19 [4] & 03.218 & 1.446 & 0.11178 & 0.00837 \\ R-PointHop'22 [16] & 02.780 & 0.980 & 0.01400 & **0.00080** \\ VRNet'22 [28] & 02.558 & 1.016 & 0.00570 & 0.00289 \\ \hline
**Ours** & **01.920** & **0.555** & **0.0026** & 0.00180 \\ \hline \end{tabular}
\end{table} TABLE II: Performances of RoCNet when the model is trained in all the classes with noisy data and without occlusions.
Fig. 4: Illustration of some examples of performed registrations using RoCNet in case of clean data and without occlusions _versus_ the ground truth registrations.
\begin{table}
\begin{tabular}{l|l l|l l} \hline Method & RMSE(R) & MAE(R) & RMSE(**t**) & MAE(**t**) \\ \hline ICP'92 [3] & 33.0683 & 25.045 & 0.293 & 0.250 \\ PTLK'19 [6] & 16.735 & 07.550 & 0.045 & 0.0250 \\ DCP-V2'19 [5] & 06.709 & 04.448 & 0.027 & 0.0200 \\ PRNET'19 [4] & 03.199 & 01.454 & 0.016 & 0.0100 \\ R-PointHop'22 [16] & 01.660 & 00.350 & 0.014 & 0.0008 \\ VRNet'22 [28] & 00.982 & 00.496 & 0.006 & 0.0039 \\ WsDesc'22 [9] & 01.187 & 00.975 & 0.008 & 0.0070 \\ \hline
**Ours** & **00.412** & **00.133** & **0.002** & **0.0002** \\ \hline \end{tabular}
\end{table} TABLE IV: Performances of RoCNet when the model is trained in all the classes with clean data and partial occlusions.
registrations by progressively decreasing (from 95\(\%\) to 50\(\%\)) the rate of shared points between \(\mathbf{X}\) and \(\mathbf{Y}\). Figure 6 depicts the results obtained for one object. As can be seen, RoCNet can register point clouds even with only 50\(\%\) of the data without much difficulty. On the other hand, the method shows its limits for objects with perfect symmetry when the overlap is low.
## V Ablation Study
We conduct ablation studies on the three main blocks of the proposed architecture, i.e., the descriptor, the transformer, and the RANSAC-based estimation of the transformation.
### _Ablation of Descriptor_
To study the impact of our descriptor, which uses both normals and DGCNN as inputs to the transformer, we compared the performance of our architecture to that of MDGAT [2] on the point matching problem. For a proper comparison, both methods are trained in the same configuration and with the same number of epochs. Two types of data are used: 1) clean data and 2) noisy data, both with partial overlap. To carry out the comparison, we use the following metrics: Precision (**P**), Accuracy (**A**), Recall (**R**) and F1-score (**F1**). Table VI gives an insight into the ablation study of the descriptor. It can be underlined that our descriptor outperforms the MDGAT one, except for the Precision (**P**) in the case of clean data with partial overlap. The difference is substantial, in favour of our descriptor, for noisy data with partial overlap.
### _Ablation of DGCNN and Transformer_
RoCNet is compared to alternatives in which our descriptor and attention mechanism are changed to those proposed in [2]. To evaluate the contribution of the proposed attention mechanism, another architecture associates the DGCNN with a classical attention mechanism without the normal embedding [26]. Table VIII reports the obtained results, showing that the RoCNet architecture is better on three of the four metrics, emphasizing a significant contribution (about \(10\%\)) of the association of DGCNN and normals compared to a classical attention mechanism.
### _Ablation of RANSAC_
The last ablation study compares the contribution of an SVD _versus_ RANSAC to estimate the rigid transformation once the matching is performed. Table VII reports the performances of each alternative using: 1) clean data with full overlap (full), 2) noisy data with full overlap (noisy) and 3) clean data with partial overlap (partial). It can be seen that the RANSAC approach slightly outperforms the SVD one.
## VI Conclusion
This paper presented a new 3D point cloud registration and pose estimation method based on a deep learning architecture. The proposed architecture is composed of three main blocks: 1) a newly designed descriptor which encodes the neighbourhood of each point together with an attention mechanism that encodes the variations of the surface normals, 2) a matching method that estimates a matrix of correspondences using the Sinkhorn algorithm, and 3) the estimation of the rigid transformation using RANSAC applied to the \(K^{c}\) best matches from the correspondence matrix. The proposed architecture was evaluated on the ModelNet40 dataset in different favourable and unfavourable configurations. It has been demonstrated that our method outperforms the related state-of-the-art algorithms, especially in unfavourable conditions, e.g., with noisy data and partial occlusions.
In the future, we intend to extend this work to a new approach where the descriptor will be expressed in the frequency domain. This will certainly improve the accuracy of our architecture, but also its robustness to noise and partial occlusions.
|
2302.06940 | Teleportation-based error correction protocol of time-frequency qubits
states | We present a linear optical protocol for teleporting and correcting both
temporal and frequency errors in two time-frequency qubit states. The first
state is the frequency (or time-of-arrival) cat qubit, which is a single photon
in a superposition of two frequencies (or time-of-arrival), while the second is
the time-frequency Gottesman-Kitaev-Preskill (GKP) state, which is a single
photon with a frequency comb structure. The proposed optical scheme could be
valuable for reducing error rate in quantum communication protocols involving
one of these qubits. | Nicolas Fabre | 2023-02-14T09:53:29Z | http://arxiv.org/abs/2302.06940v1 | # Teleportation-based error correction protocol of time-frequency qubits states
###### Abstract
We present a linear optical protocol for teleporting and correcting both temporal and frequency errors in two time-frequency qubit states. The first state is the frequency (or time-of-arrival) cat qubit, which is a single photon in a superposition of two frequencies (or time-of-arrival), while the second is the time-frequency Gottesman-Kitaev-Preskill (GKP) state, which is a single photon with a frequency comb structure. The proposed optical scheme could be valuable for reducing error rate in quantum communication protocols involving one of these qubits.
## I Introduction
Quantum information can be encoded in various degrees of freedom of single photons, which can be described by either discrete or continuous variables (CV). Frequency (or energy) and time-of-arrival are natural pairs of conjugate quantum continuous variables in the single photon subspace, along with the transverse position and momentum degrees of freedom [1; 2; 3; 4]. Discretizing the frequency or time-of-arrival into temporal or frequency bins, or performing the mode decomposition of the continuous variable distribution of the single photon, can be experimentally motivated by the finite resolution of detection devices or by specific requirements of a quantum protocol, such as super-resolution in quantum metrology [5]. We should stress that in any dimensional single-photon encoding, photon losses do not correspond to a logical error. The second way of encoding information is through a particle-number sensitive encoding, which can be used to define physical systems with either discrete or continuous variables. In this encoding, CV corresponds to the quadratures of the electromagnetic field, _i.e._, the amplitude and phase of the quantum field, in a particular mode. With the particle-sensitive encoding, photon loss corresponds to a logical error. Mathematically, the quadrature of an electromagnetic field in a given mode can be treated as the continuous degree of freedom of a single photon [4], as long as an auxiliary discrete mode is occupied by only one single photon.
Error correction codes for continuous variables are defined by discretizing them. Three bosonic qubit codes have been studied: the cat code [6; 7; 8], the Gottesman, Kitaev and Preskill (GKP) code [9; 10; 11; 12; 13; 14; 15] and the binomial code [16; 17]. Cat and GKP codes are candidates for achieving universal quantum computation, see [7; 18] and [19; 20]. GKP codes could be employed for building quantum repeaters [21] and for sensing applications [22; 23]. The mathematical analogy between time-frequency and quadrature CV allows defining time-frequency qubit states, called the time-frequency cat state [2; 24] and the time-frequency GKP state [1; 2; 25]. Both of these codes are ways to discretize time-frequency continuous variables at the single photon level to define a qubit. CV or time-frequency CV codes possess an equivalent mathematical structure: they are common eigenvectors of non-commuting displacement operators [1], and they are thus designed to be robust against small shifts in one continuous variable (cat state) or in the two canonically conjugated ones (GKP state).
In this paper, we start by recalling the mathematical structure of the time-frequency cat and GKP codes, and discuss the temporal and frequency errors against which they are designed to be robust. We then analyse two different entanglement structures of the time-frequency GKP state which can be generated experimentally, and which can be interpreted as the entanglement of a noisy state of interest with a less noisy ancilla. However, these entanglement structures, which lead to a natural error correction strategy for single photon encoding, are difficult to realize experimentally with current technology, since they require performing a frequency entanglement operation between two single photons. Therefore, the standard method for error correction of quadrature qubits cannot be applied straightforwardly. Inspired by the teleportation-based error correction protocol for quadrature variables [26; 27], we develop a teleportation-based error correction protocol for frequency qubit states which uses only linear optical elements and allows correcting both time and frequency variables at once. The error correction is naturally performed because the ancilla EPR state which assists the teleportation is less noisy in the temporal and frequency domains than the state of interest. Since the protocol requires the use of Bell's measurement, we also describe how to experimentally implement such a measurement for the two types of frequency qubits. The proposed protocol is intrinsically probabilistic and consists of the teleportation of non-orthogonal states [28], instead of orthogonal ones [29; 30]. The non-orthogonality of the states reduces the efficiency of the teleportation protocol, and we mention that photon-number resolving detectors can help increase the probability of success by reducing the number of rejected measurement events. The presented teleportation protocol is a new solution for correcting the temporal broadening caused by dispersion effects affecting time-bin qubit states, thereby reducing the error rate of quantum communication protocols [31; 32; 33].
The paper is organized as follows. In Sec. II, we provide a reminder of the definition of the time-frequency cat and GKP states and explain the experimental difficulty behind the natural error correction scheme of the time-frequency GKP states, which requires frequency entanglement gates. In Sec. III, we explain the optical equivalent of the polarizing beam-splitter, a Mach-Zehnder interferometer, which allows spatially separating the two logical states of the time-frequency cat and GKP qubits. Such an interferometer is crucial for implementing Bell's measurement in the time-frequency degree of freedom. In Sec. IV, we present a teleportation-based error correction protocol for the time-frequency GKP state, which makes use of the Bell's measurement. The protocol is probabilistic and can be achieved with current experimental devices. Finally, in Sec. V, we summarize our results and present new perspectives.
## II Time-frequency qubits states
### Time-frequency cat state
We will denote \(\ket{\Omega}\) the vacuum state. A single photon state at frequency \(\omega\) in the spatial port \(a\) is denoted as \(\ket{\omega}_{a}=\hat{a}^{\dagger}(\omega)\ket{\Omega}\). The frequency cat state, as introduced in [24], is defined as the superposition of a single photon over two different frequency Gaussian distributions:
\[\ket{\psi}=N_{\alpha\beta}(\alpha\ket{\omega_{1}}_{a}+\beta\ket{ \omega_{2}}_{a})=N_{\alpha\beta}(\alpha\ket{0}_{a}+\beta\ket{1}_{a}), \tag{1}\]
where \(\ket{\omega_{1}}_{a}=\frac{1}{\sqrt{2\pi\sigma^{2}}}\int d\omega\text{exp}(-(\omega-\omega_{1})^{2}/2\sigma^{2})\ket{\omega}_{a}\) and the normalization of the state is given by \(1=N_{\alpha\beta}^{2}(\abs{\alpha}^{2}+\abs{\beta}^{2}+2\text{Re}(\alpha\beta^{*})e^{-(\omega_{1}-\omega_{2})^{2}/2\sigma^{2}})\). This is a non-orthogonal qubit state, as the overlap \({}_{a}\langle 0|1\rangle_{a}=\exp(-(\omega_{1}-\omega_{2})^{2}/2\sigma^{2})\). An experimental proposal for manipulating such a state was put forward in [34; 35], where pulse shapers and electro-optic modulators are used in cascade. The results showed an operation fidelity close to 100% but the successive optical elements drastically decrease the probability of single photon detection. The wavefunction of the time cat state is defined as:
\[\ket{\psi}=N_{\alpha\beta}(\alpha\ket{t_{1}}_{a}+\beta\ket{t_{2}} _{a}). \tag{2}\]
Note that to avoid a normalization constant depending on the coefficients \(\alpha,\beta\), and to write the wavefunction in an orthogonal basis, we can employ the Gram-Schmidt decomposition procedure. The normalized orthogonal basis \(\ket{a}\) and \(\ket{b}\) can be written as:
\[\ket{a}=\ket{0},\;\ket{b}=N[\ket{1}-\bra{0}\ket{1}\ket{0}], \tag{3}\]
where \(N=1/\sqrt{1-r^{2}}\) and \(r=|\langle 0|1\rangle|\). The qubit input state Eq. (1) can be written in the orthogonal basis as:
\[\ket{\psi}=(\alpha+\beta\bra{0}\ket{1})\ket{a}+\frac{\beta}{N}\ket{ b}, \tag{4}\]
where we now have \(|\alpha+\beta\langle 0|1\rangle|^{2}+|\beta/N|^{2}=1\).
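As a small numerical check of Eqs. (3)-(4) (a sketch in NumPy with an arbitrary peak separation, not taken from the original paper), the coefficients in the orthonormal basis indeed have unit norm once the input cat state is normalised:

```python
import numpy as np

sigma, delta = 1.0, 3.0                       # peak width and separation omega_1 - omega_2
r = np.exp(-delta ** 2 / (2 * sigma ** 2))    # overlap <0|1> of the two logical states
alpha = beta = 1 / np.sqrt(2)
N_ab = 1 / np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2 + 2 * alpha * beta * r)   # Eq. (1) normalisation

c_a = N_ab * alpha + N_ab * beta * r          # coefficient on |a> (Eq. (4))
c_b = N_ab * beta * np.sqrt(1 - r ** 2)       # coefficient on |b> = beta / N with N = 1/sqrt(1-r^2)
print(abs(c_a) ** 2 + abs(c_b) ** 2)          # -> 1.0
```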
The frequency entangled cat state, an EPR state, can be written as:
\[\ket{\phi^{\pm}} =N_{\text{EPR}}(\ket{\omega_{1}\omega_{1}}_{ab}\pm\ket{\omega_{2} \omega_{2}}_{ab}) \tag{5}\] \[\ket{\psi^{\pm}} =N_{\text{EPR}}(\ket{\omega_{1}\omega_{2}}_{ab}\pm\ket{\omega_{2} \omega_{1}}_{ab}) \tag{6}\]
where \(N_{\text{EPR}}^{2}(2+2e^{-(\omega_{1}-\omega_{2})^{2}/2\sigma^{2}})=1\). When \(\omega_{1}-\omega_{2}\gg\sigma\), we recover the normalization of an EPR state composed of orthogonal qubits, \(N_{\text{EPR}}=1/\sqrt{2}\). The frequency cat state can be produced by integrated optical waveguides [36] and bulk systems [37]. The wavefunction of a temporally entangled EPR state Eq. (5) has the same mathematical structure, and such a quantum state can be produced by quantum dots for instance [38]. This type of quantum state has potential applications in quantum communications [39].
### Time-frequency GKP state
We define a frequency lattice of period \(\overline{\omega}\). Centered on each of these intervals, we define the ideal time-frequency GKP state as the following frequency comb at the single photon level:
\[\ket{\overline{\mu}_{\omega}}_{a}=\sum_{n\in\mathds{Z}}\ket{(2n+\mu)\overline{\omega}}_{a} \tag{7}\]
where \(\mu=0,1\) indexes the two logical states. Note that the equal-weight superpositions of the zero and one logical time-frequency GKP states in the frequency domain are the zero and one logical states in the temporal domain:
\[\ket{\overline{+}_{\omega}}_{a} =\frac{1}{\sqrt{2}}(\ket{\overline{0}_{\omega}}_{a}+\ket{ \overline{1}_{\omega}}_{a})=\ket{\overline{0}_{t}}_{a} \tag{8}\] \[\ket{\overline{-}_{\omega}}_{a} =\frac{1}{\sqrt{2}}(\ket{\overline{0}_{\omega}}_{a}-\ket{\overline{ 1}_{\omega}}_{a})=\ket{\overline{1}_{t}}_{a}. \tag{9}\]
The periodicity of the state in the temporal domain: \(\overline{\omega}=2\pi/\overline{\omega}\). Such a state is not physical since the state is an infinite sum of monochromatic state and will require an infinite energy to prepare it. The physical time-frequency GKP state can be built upon this ideal state by applying time and frequency noise, which are frequency and time displacement operations multiplied by Gaussian distribution which is detailed in [1]. The wavefunction of the two logical states can be written as follows:
\[\ket{\mu_{\omega}}_{a}=N_{\mu}\sum_{n\in\mathds{Z}}\int d\omega G^{ \kappa}(\omega)G^{\sigma}(\omega-(2n+\mu)\overline{\omega})\ket{\omega}_{a} \tag{10}\]
where \(G\) are Gaussian functions representing the envelope of the comb of width \(\kappa\) and the peaks of the comb of width \(\sigma\). The frequency probability distribution of the grid state is represented in Fig. 1. Alternatively for large comb \(\overline{\omega}/\sigma\gg 1\)[40], we can write:
\[\ket{\mu_{\omega}}_{a}=N_{\mu}\sum_{n\in\mathds{Z}}c_{2n+\mu} \int d\omega G^{\sigma}(\omega-(2n+\mu)\overline{\omega})\ket{\omega}_{a} \tag{11}\]
with the envelope coefficients \(c_{n}=\exp(-(n\overline{\omega}/\kappa)^{2}/2)\). \(N_{\mu}\) is the normalization constant found thanks to the relation \(1={}_{a}\langle\mu_{\omega}|\mu_{\omega}\rangle_{a}=N_{\mu}^{2}\sqrt{\pi\sigma^{2}}\sum_{n\in\mathds{Z}}\left|c_{2n+\mu}\right|^{2}\).
In general, the physical GKP state can be in a superposition of the two logical states:
\[\left|{\psi}\right\rangle=N_{\alpha\beta}(\alpha\left|{0_{\omega}}\right\rangle _{a}+\beta\left|{1_{\omega}}\right\rangle_{a}), \tag{12}\]
where \(N_{\alpha\beta}=(\left|\alpha\right|^{2}+\left|\beta\right|^{2}+2\mathrm{Re}(\alpha^{*}\beta\langle 0|1\rangle))^{-1/2}\). The two logical states are not orthogonal when a Gaussian wavepacket enters the frequency bin of its neighbour. The overlap \({}_{a}\left\langle 0_{\omega}|1_{\omega}\right\rangle_{a}\) is then different from zero and equal to:
\[{}_{a}\left\langle{0_{\omega}}{\left|{1_{\omega}}\right\rangle_{a}}=e^{- \overline{\omega}^{2}/4\sigma^{2}}\frac{\sum_{n}c_{2n}c_{2n+1}^{*}}{(\sqrt{ \sum_{n}\left|{c_{2n}}\right|^{2}\sum_{n}\left|{c_{2n+1}}\right|^{2}})}. \tag{13}\]
The full state is thus described by five important parameters: the complex parameters \(\alpha\) and \(\beta\), where the quantum information is encoded, the frequency widths \(\sigma\) and \(\kappa\), and the periodicity of the state \(\overline{\omega}\). If the state is not too noisy, meaning that \({}_{a}\left\langle 0_{\omega}|1_{\omega}\right\rangle_{a}\sim 0\), then the normalisation condition of Eq. (12) is given by \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\). Finally, in [1; 4], we define the time-of-arrival and frequency operators, which do not commute and satisfy a Heisenberg algebra. This is mathematically equivalent to the non-commutativity of time-frequency displacement operators. This property leads us to consider temporal and frequency bandwidths as quantum noise at the single photon level.
The GKP states, which are defined as a sum of squeezed states in a given mode [9; 41], are designed to be robust against small shifts in position and momentum, which can be caused by a Gaussian quantum channel, but they are also robust against photon losses [41]. On the other hand, time-frequency GKP states are designed to be robust against small shifts in time and frequency. The major difference between GKP states and time-frequency GKP states is that photon losses do not result in errors for the latter. In general, temporal errors are the dominant source of errors, while frequency is considered a robust degree of freedom, as it is barely affected by linear physical processes. Additionally, GKP states can be used for fault-tolerant universal quantum computation [20]. Due to their mathematical similarity with time-frequency GKP states, it is expected that the latter would lead to the same mathematical result. However, the generation of non-Gaussian states using the degrees of freedom of a single photon is relatively simple to implement experimentally. As a result, the experimental implementation of a time-frequency GKP state is considered straightforward. In contrast, creating entanglement gates between two single photons is a more challenging task. For particle-number sensitive encoding, non-Gaussian operations involving the quadrature degree of freedom can be difficult to implement, while two-mode Gaussian operations, such as with a beam splitter, are easier to perform.
### Sources of time-frequency noise
In this section, we discuss the physical processes that lead to temporal-spectral broadening or distortion. Coherent and incoherent errors lead to pure and mixed states, respectively.
Temporal errors for both codes arise from the temporal spreading of each wavepacket composing the state due to linear dispersion effects, described by a coherent model (see for instance [42] for an example in the single photon regime). After such a second-order dispersion effect, the temporal width of the Gaussian wavepacket becomes \(\tau=\tau_{0}\sqrt{1+\tau_{c}^{4}/\tau_{0}^{4}}\), where \(\tau_{0}\) is the initial width of the pulse and \(\tau_{c}=\sqrt{\beta_{2}L}\), \(L\) being the length of the dispersive medium and \(\beta_{2}\) the dispersion coefficient. Such a dispersion process is described by a unitary operation, and it can in principle be undone by a reverse transformation. However, this requires the full characterization of the propagation channel. For the time-frequency GKP state, the dispersive effect not only leads to a temporal spreading of each wavepacket, but also to the formation of replica temporal images, called the temporal Talbot effect (see for instance [43]). One has to consider specific fiber lengths or dispersion coefficients to recover the initial state, which will also be temporally broadened. Polarization mode dispersion is one source of incoherent temporal broadening [44; 45; 46]. Due to a coupling between the polarization and frequency degrees of freedom, if the polarization is not measured, the single photon state becomes mixed in frequency [47]. Thus, the error correction protocol that will be presented in Sec. II.4 becomes particularly relevant because we cannot cancel the error simply by a unitary operation.
Frequency noise that causes spectral broadening while the spectral distribution remains Gaussian is typically not dominant at the single-photon level. Spectral broadening induced by the self-phase modulation effect [48] results from the accumulated phase \(\phi_{NL}(t)=\frac{2\pi}{\lambda}n_{2}I(t)L\), where \(n_{2}\) is the non-linear refractive index and \(L\) is the length of the medium; it depends on the intensity \(I(t)\) of the field and leads to a non-Gaussian spectral distribution. At the single-photon level, this non-linear process does not occur naturally. In [1], we also argue that frequency noise arises from the frequency broadening caused by the generation of photon pairs itself, and we describe one method to correct such noise. There are numerous processes that can distort the spectral distribution of single photons, such as frequency shifts induced by electro-optic modulators [49; 50] or the presence of a filter during single photon heralding.
An error correction protocol is implemented to mitigate the effects of potential errors from various sources. Its objective is to restore the Gaussian distribution of each frequency peak. This is because Gaussian probability distributions are better understood for setting confidence intervals and error thresholds, as discussed in [52]. Both qubits are sensitive to temporal and frequency broadening. While the time-frequency cat state is designed to be robust against errors over one variable, the time-frequency GKP state can correct errors along both orthogonal variables. It is not necessary to use the time-frequency GKP state if the main error is in the temporal domain. We now develop two error correction methods: one based on a direct frequency entanglement between the noisy state of interest and a less noisy ancilla (Sec. II.4), the other using only linear optics and also a less noisy ancilla (Sec. IV).
### Time-frequency entangled GKP state and error correction protocols
Error correction of continuous variable states can be done with a Steane error correction protocol, for the quadrature degree of freedom [53] and for the time-frequency one [1]. For the two encodings, the protocol consists of entangling the state of interest with a less noisy ancilla (a \(\ket{+}\) logical state in one variable, time or frequency) with a beam splitter (resp. with the frequency beam-splitter that will be explicated below), and performing a homodyne (resp. single photon frequency) measurement at the spatial output of the ancilla to correct the error along one variable. The protocol is repeated to correct errors in the canonically conjugated variable. To achieve this, one must entangle the state of interest with a less noisy \(\ket{+}\) state in the canonically conjugated variable compared to the first step, perform the entangling operation, project onto a measurement in the canonically conjugated variable (compared to step one), and finally apply a conditional displacement operation. Important tools for quantifying the noise threshold below which it is still possible to correct GKP states using such a Steane error correction protocol were developed in [52]. Figures of merit quantifying the probability of measuring the one logical state when the zero one should be obtained were introduced in [52; 54]. We will refer to these figures of merit when we note that one logical state is more (or less) noisy than the other. In the following, we describe two entanglement structures of the time-frequency GKP state that can be generated in the laboratory, and how to perform the error correction in each case.
The first entanglement structure of the time-frequency GKP state that we study is the one obtained by using a spontaneous parametric down-conversion (SPDC) process in a non-linear crystal placed inside an optical cavity [1; 55]. The corresponding wavefunction can be cast as:
\[\ket{\psi}=\iint d\omega_{s}d\omega_{i}f_{+}(\omega_{+})f_{-}(\omega_{-})f( \omega_{s})f(\omega_{i})\ket{\omega_{s},\omega_{i}}. \tag{14}\]
The functions \(f_{\pm}\) model respectively the energy conservation and the phase-matching of the SPDC process, and
Figure 1: Probability distribution of the two time-frequency qubits. (a) Frequency cat state. \(\sigma\) is the half-width of the Gaussian peak, and \(\Delta\) is the spectral separation between the two logical states. (b) Time-frequency GKP state. The width of each peak is \(\sigma\), the envelope \(\kappa\), and the periodicity of the comb state is \(\overline{\omega}\). In the temporal domain, the periodicity of the state is \(2\pi/\overline{\omega}\), and the widths of the envelope and the peaks are \(\sigma\) and \(\kappa\) respectively. Frequency units are made dimensionless with respect to the spectral separation of the state (left) and the periodicity of the state (right).
\(f\) models the cavity function. The joint spectral intensity is represented in Fig. 2(c). Instead of the traditional view where the wavefunction of the two photons is written by applying a quadratic Hamiltonian on the vacuum state, it can also be expressed with a quantum circuit representation, as shown in Fig. 2(a). This starts with two ideal separable (fictitious) GKP states, which undergo an initial frequency broadening that can be interpreted as frequency noise [1]. Then, the state is entangled through the gate performed by the non-linear crystal:
\[\hat{U}\ket{\omega_{s},\omega_{i}}=\left|\frac{\omega_{s}+\omega_{i}}{\sqrt{2}},\frac{\omega_{s}-\omega_{i}}{\sqrt{2}}\right\rangle \tag{15}\]
that we call a frequency beam-splitter, by analogy with the beam-splitter which acts mathematically in the same way on the quadrature position-momentum degrees of freedom [56; 4]. The state then undergoes temporal broadening, which can be interpreted as temporal noise, and a final frequency beam-splitter operation is performed. The mathematical reason behind these four successive operations is that the envelope of the grid state is a function of the collective variables \(\omega_{\pm}\), while the cavity function depends on the local variables \(\omega_{s,i}\). The form of the frequency entanglement between the two single-photon grid states generated by the non-linear process (see Eq. (15)) allows for the reduction of the temporal broadening of one of the single photons by performing a temporally-resolved measurement on the other. The form of the pure wavefunction after the conditioned operation has been written in [1]. After correcting the single photon state in the temporal domain, the next step is to correct the state in the frequency domain. For that, we must first entangle the single photon to correct with a less noisy ancilla single photon state using the entanglement gate Eq. (15). Since we now have two separable single photon states, we cannot directly entangle them by using a non-linear crystal, which is inefficient. Nevertheless, such a frequency entangling gate could be implemented with a quantum emitter embedded in a waveguide, which mediates the interaction between the two single photons [57; 58].
The second entanglement structure of the time-frequency GKP state that can be considered is to start with two initially separable GKP states, one being less noisy than the other. These states are then entangled using Eq. (15). The resulting wavefunction is:
\[\ket{\psi^{\prime}}=\iint d\omega_{s}d\omega_{i}f_{+}(\omega_{+})f_{-}(\omega_ {-})f(\omega_{+})f(\omega_{-})\ket{\omega_{s},\omega_{i}}. \tag{16}\]
In this new spectral function (see Fig. 2(d)), the cavity function now depends on the collective variables, thus the periodicity of the grid state is not the same as in Eq. (14). The corresponding quantum circuit representation is pictured in Fig. 2(b). The error correction protocol then consists of a temporally-resolved measurement followed by a conditional displacement operation to correct the temporal noise in the state of interest, and of a second entanglement operation with a less noisy ancilla state, followed by a frequency-resolved measurement and a conditional displacement operation to correct the frequency noise. The difference in the joint spectral amplitude of the photon pairs, Eq. (14) and Eq. (16), results in a different wavefunction when one of the photons undergoes a temporally-resolved (or frequency-resolved) measurement and a conditional displacement operation.
## III Spatial separation of the two logical time-frequency qubit states
In this section, we develop the equivalent of the polarizing beam-splitter operation for the time-frequency cat and GKP states, which is the crucial optical component for the teleportation-based error correction described in the next section. Such an optical element will be called the frequency qubit beam-splitter (FQBS) in what follows.
### Spatial separation of the two logical time-frequency cat states
In this section, we describe how to spatially separate the two Gaussian wavepackets with linear optics, using a Mach-Zehnder interferometer. The frequency cat state described by Eq. (1) is introduced into a balanced beam-splitter and the associated wavefunction is:
\[\ket{\psi}=\frac{1}{2}(\ket{\omega_{1}}_{a}+\ket{\omega_{2}}_{a}+\ket{\omega_{ 1}}_{b}+\ket{\omega_{2}}_{b}). \tag{17}\]
We assume for simplicity that the two frequency states are well separated, \(\omega_{1}-\omega_{2}\gg\sigma\). Then, a pulse shaper is placed at the spatial port \(b\), described by the following unitary operation:
\[\hat{U}\ket{\omega_{1}}_{b}=\ket{\omega_{1}}_{b},\ \hat{U}\ket{\omega_{2}}_{b}=e^{i \phi}\ket{\omega_{2}}_{b}. \tag{18}\]
Such an operation can be implemented, for instance, by mapping the spectral to the spatial degree of freedom with a grating, applying a spatial light modulator placed in the focal plane of two lenses, and then mapping back from the spatial to the spectral degree of freedom [59; 2]. We assume that the frequency peaks are sufficiently spaced so that the pulse shaper acts on each logical state independently. The two spatial paths are then recombined on another balanced beam-splitter. The output state has the final form:
\[\ket{\psi}=\frac{1}{\sqrt{2}}\ket{\omega_{1}}_{a}\ket{\Omega}_{b}+\frac{1}{2 \sqrt{2}}((1+e^{i\phi})\ket{\omega_{2}}_{a}\ket{\Omega}_{b}+(1-e^{i\phi}) \ket{\Omega}_{a}\ket{\omega_{2}}_{b}). \tag{19}\]
If \(\phi=\pi\), the two logical states are spatially separated: \(\ket{\psi}=\frac{1}{\sqrt{2}}(\ket{\omega_{1}}_{a}\ket{\Omega}_{b}+\ket{\Omega}_{a}\ket{\omega_{2}}_{b})\). This optical interferometer is the equivalent of the polarizing beam-splitter that separates the vertical and horizontal polarizations of optical fields into two distinct spatial paths.
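The role of the pulse-shaper phase can be checked with a few lines of NumPy (a sketch of Eq. (19) tracking only the \(\ket{\omega_{2}}\) amplitudes; the function name is ours):

```python
import numpy as np

def omega2_port_probabilities(phi):
    """Probability of finding the omega_2 component in output ports a and b (Eq. (19));
    the omega_1 component always exits port a."""
    amp_a = (1 + np.exp(1j * phi)) / 2
    amp_b = (1 - np.exp(1j * phi)) / 2
    return abs(amp_a) ** 2, abs(amp_b) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    pa, pb = omega2_port_probabilities(phi)
    print(f"phi = {phi:4.2f}  P_a(w2) = {pa:.2f}  P_b(w2) = {pb:.2f}")
# phi = pi sends |omega_2> entirely to port b, i.e., the two logical states are spatially separated
```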
### Spatial separation of the two logical time-frequency GKP states
In this part, we show how to spatially separate the odd and even frequencies with a Mach-Zehnder interferometer. Such a scheme was already proposed for manipulating large quadrature position-momentum continuous-variable cluster states [60], and was described in [25; 2].
We start with a time-frequency GKP state with a finite envelope but with infinitely narrow frequency peaks, \(\left|\tilde{+}\right\rangle=\frac{1}{\sqrt{2}}(\left|\tilde{0}\right\rangle_{a}+\left|\tilde{1}\right\rangle_{a})\), which is introduced into a balanced beam-splitter. The spatial output ports of the beam-splitter are noted \(a\) and \(b\). A time-shift operation is performed in spatial port \(b\), and then the two spatial ports are recombined on a balanced beam-splitter (see Fig. 3). The final wavefunction can be written as:
\[\left|\psi\right\rangle=\frac{1}{2}\sum_{n\in\mathds{Z}}[c_{n}(e^{in\varpi t }-1)\left|n\overline{\omega}\right\rangle_{a}\left|\Omega\right\rangle_{b}-(e^ {in\varpi t}+1)\left|\Omega\right\rangle_{a}\left|n\overline{\omega}\right\rangle _{b}]. \tag{20}\]
If we set \(t=\pi/\overline{\omega}\), after the second beam-splitter the wavefunction is (see [2]):
\[\left|\psi\right\rangle=\frac{1}{\sqrt{2}}(-\left|1\right\rangle_{a}\left| \Omega\right\rangle_{b}+\left|\Omega\right\rangle_{a}\left|0\right\rangle_{b}). \tag{21}\]
The odd and even frequency components are spatially separated, allowing for individual manipulation, such as correcting the phase accumulated by one of the logical states. The same result can be achieved in the temporal domain by shifting the frequency instead of the time, as shown in Fig. 3.
If each frequency peak is not infinitely narrow, then the output wavefunction after the Mach-Zehnder interferometer can be written as:
\[\left|\psi\right\rangle=\frac{1}{2}\sum_{n\in\mathds{Z}}c_{n}\Big{[} \int d\omega(e^{i\pi\omega/\overline{\omega}}-1)G^{\sigma}(\omega-n\overline{ \omega})\left|\omega\right\rangle_{a}\left|\Omega\right\rangle_{b}\] \[-\int d\omega(e^{i\pi\omega/\overline{\omega}}+1)G^{\sigma}( \omega-n\overline{\omega})\left|\Omega\right\rangle_{a}\left|\omega\right\rangle _{b}\Big{]}. \tag{22}\]
Figure 2: Different joint spectral intensities \(\left|\left\langle\omega_{s},\omega_{i}|\psi\right\rangle\right|^{2}\) of time-frequency entangled GKP states (c),(d) that can be generated experimentally, and their quantum circuit representations (a),(b). Situation (a) corresponds to a photon pair produced by an SPDC process, while (b) corresponds to two single photons with a frequency comb structure produced by two independent processes which are then frequency entangled. The position of the frequency beam-splitter operations modifies the periodicity of the grid state by a factor \(\sqrt{2}\). \(\tilde{D}(\omega)\) and \(\tilde{\mathcal{D}}(t)\) are frequency and temporal displacement operations defined in [1].

The corresponding probability frequency distributions at spatial ports \(a\) and \(b\) are
\[P_{a}(\omega) =\frac{1}{4}\Biggl{|}\sum_{n\in\mathds{Z}}c_{n}(e^{i\pi\omega/\overline {\omega}}-1)G^{\sigma}(\omega-n\overline{\omega})\Biggr{|}^{2}, \tag{23}\] \[P_{b}(\omega) =\frac{1}{4}\Biggl{|}\sum_{n\in\mathds{Z}}c_{n}(e^{i\pi\omega/ \overline{\omega}}+1)G^{\sigma}(\omega-n\overline{\omega})\Biggr{|}^{2}. \tag{24}\]
We represent in Fig. 4(a),(d) the spectral distributions of two \(\ket{+}\) states for \(\sigma=0.1\overline{\omega}\) and \(\sigma=0.2\overline{\omega}\), along with the output probability distributions after the Mach-Zehnder interferometer in Fig. 4(b),(c),(e),(f). We observe that when \(\sigma=0.1\overline{\omega}\), it is valid to consider the zero and one logical states independently, since their central frequencies are too far apart for them to overlap and interfere. However, the spatial separation is imperfect, and the one logical state does not emerge entirely from the correct spatial port. When \(\sigma=0.2\overline{\omega}\) (see Fig. 4), there is an interference term between the zero and one logical states because they now overlap significantly. The resulting state lies outside the GKP subspace, as can be seen from the distortion and from the fact that the probability at the center of the odd and even frequency bins is zero. The addition of a periodic frequency filter can enhance the projection onto the GKP subspace, and the choice of the filter width is crucial for rejecting the logical components emerging from the incorrect spatial port.
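The imperfect separation discussed above can be reproduced directly from Eqs. (23) and (24). The short sketch below evaluates \(P_{a}(\omega)\) and \(P_{b}(\omega)\) for a finite peak width; the Gaussian form assumed for the envelope coefficients \(c_{n}\) and the numerical grid are simplifying assumptions of the sketch, not values taken from the figures.

```python
import numpy as np

wbar, sigma, kappa = 1.0, 0.2, 0.1                    # comb spacing, peak width, assumed envelope parameter
n = np.arange(-30, 31)
c_n = np.exp(-(kappa * n * wbar) ** 2 / 2.0)          # assumed Gaussian envelope of the comb

omega = np.linspace(-10 * wbar, 10 * wbar, 4001)
G = np.exp(-(omega[None, :] - (n * wbar)[:, None]) ** 2 / (2 * sigma ** 2))
phase = np.exp(1j * np.pi * omega / wbar)

amp_a = np.sum(c_n[:, None] * (phase[None, :] - 1) * G, axis=0)   # Eq. (23): odd peaks in port a
amp_b = np.sum(c_n[:, None] * (phase[None, :] + 1) * G, axis=0)   # Eq. (24): even peaks in port b
P_a, P_b = 0.25 * np.abs(amp_a) ** 2, 0.25 * np.abs(amp_b) ** 2

# Fraction of the port-a probability that sits in the even-frequency bins (weight in the wrong bins)
dist_to_even = np.abs((omega / wbar + 1) % 2 - 1)
leak = (P_a * (dist_to_even < 0.5)).sum() / P_a.sum()
print(f"wrong-bin weight in port a for sigma = {sigma} wbar: {leak:.3f}")
```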
In Appendix A, we also investigate the separation of the odd and even components of the comb when a two-photon state is the input of the frequency qubit beam-splitter. This is relevant to the teleportation-based error correction protocol because the state to be teleported is combined with a single photon from an EPR pair during the Bell measurement. The spatial separation is, in that case, also completely effective only when each Gaussian distribution approaches a Dirac distribution. Explicitly, for the two-photon state we obtain \(\ket{0\tilde{1}}_{aa}\rightarrow\ket{0_{G}}_{a}\ket{\tilde{1}_{G}}_{a^{\prime}}+\ket{0_{E}}_{a^{\prime}}\ket{\tilde{1}_{E}}_{a}\), whose corresponding expressions are given in Appendix A. The case of combining two different single-photon states also plays a role in quantum communication scenarios, where the second qubit comes from an attacker who tries to collect information about the qubit carrying the information of interest.
## IV Teleportation-based error correction of time-frequency qubits states
In this section, we propose a protocol for correcting and teleporting frequency qubit states without relying on frequency entangling operations. This protocol is similar to the one used for teleporting polarization qubit states described in [61], and is inspired by the GKP analog [26; 27]. Since the EPR state has a lower level of noise in both the temporal and frequency variables, the protocol not only teleports but also corrects errors in the state being teleported.
Figure 3: Spatial separation of the odd and even peaks of the time-frequency GKP state with a Mach-Zehnder interferometer in the frequency (left) and in the temporal (right) domain. BS stands for balanced beam-splitter. The spatial separation is imperfect when the states are no longer perfectly monochromatic.

The single photon state \(\ket{\psi}\) to be teleported and corrected is described by the wavefunction \(\ket{\psi}=N_{\alpha\beta}(\alpha\ket{0}_{a}+\beta\ket{1}_{a})\), where the two logical states are either the time-frequency cat or GKP qubits (see Eq. (12)). The wavefunction of the entangled EPR time-frequency state in spatial ports \(b\) and \(c\), which assists the teleportation and the correction, is:
\[\left|\phi^{+}\right\rangle_{bc}=N_{\text{EPR}}(\left|\tilde{0}\tilde{1}\right\rangle _{bc}+\left|\tilde{1}\tilde{0}\right\rangle_{bc}) \tag{25}\]
composed of logical states, indicated with the tilde notation, which are less noisy than the state to be teleported. Upon completion of the protocol, the single-photon state \(\ket{\psi}\) will be localized in spatial port \(c\), and the correction is automatically performed since the EPR state is less noisy than the state of interest.
We now write the protocol for the error correction and teleportation of the time-frequency cat state, considering that the Bell measurement perfectly separates the two logical states. This protocol corrects frequency errors affecting a frequency qubit state. The protocol is represented in Fig. 5, and we now write the evolution of the wavefunction at each step of the protocol. The initial wavefunction, composed of the state to be corrected and teleported and of the EPR state, written in the Bell basis, is:
\[\ket{\psi} =\frac{N_{\alpha\beta}N_{\text{EPR}}}{2}(\ket{\omega_{1}\tilde{ \omega}_{1}}+\ket{\omega_{2}\tilde{\omega}_{2}})(\alpha\ket{\tilde{\omega_{1}} }_{c}+\beta\ket{\tilde{\omega_{2}}}_{c})\] \[\quad+(\ket{\omega_{1}\tilde{\omega}_{1}}-\ket{\omega_{2}\tilde{ \omega}_{2}})(\alpha\ket{\tilde{\omega_{1}}}_{c}-\beta\ket{\tilde{\omega_{2}}} _{c})\] \[\quad+(\ket{\omega_{1}\tilde{\omega}_{2}}+\ket{\omega_{2}\tilde{ \omega}_{1}})(\beta\ket{\tilde{\omega_{1}}}_{c}+\alpha\ket{\tilde{\omega_{2}}} _{c})\] \[\quad+(-\ket{\omega_{1}\tilde{\omega}_{2}}+\ket{\omega_{2}\tilde{ \omega}_{1}})(\beta\ket{\tilde{\omega_{1}}}_{c}-\alpha\ket{\tilde{\omega_{2}}} _{c}). \tag{26}\]
The single photon state and one member of the EPR pair are then combined at a beam-splitter, after which two parity-frequency beam-splitters are placed in spatial ports \(a^{\prime}\) and \(b^{\prime}\). The first and second Bell states are transformed as:
\[\frac{N_{\text{EPR}}}{2}[\ket{\omega_{1}\tilde{\omega}_{1}}_{a}- \ket{\omega_{1}}_{a}\ket{\tilde{\omega}_{1}}_{b}+\ket{\omega_{1}}_{b}\ket{ \tilde{\omega_{1}}}_{a}-\ket{\omega_{1}\tilde{\omega_{1}}}_{b}\] \[\quad+\ket{\omega_{2}\tilde{\omega}_{2}}_{a}-\ket{\omega_{2}}_{a} \ket{\tilde{\omega_{2}}}_{b}+\ket{\omega_{2}}_{b}\ket{\tilde{\omega_{2}}}_{a} -\ket{\omega_{2}\tilde{\omega_{2}}}_{b}]. \tag{27}\]
The presence of a single photon in each port is a consequence of the distinguishability of the photons. If the photons were indistinguishable, and the two logical states orthogonal as is the case for polarization encoding, only bunching events would be measured. In order to suppress these coincidence events, which lead to errors in the teleportation protocol, a potential solution is the use of frequency filters with the same frequency widths (envelope and peak) as the EPR state. Explicitly, after the filtering operation, the state becomes \(\ket{\tilde{\omega_{1}}\tilde{\omega_{1}}}_{a}-\ket{\omega_{1}\tilde{\omega_{1}}}_{b}+\ket{\tilde{\omega_{2}}\tilde{\omega_{2}}}_{a}-\ket{\omega_{2}\tilde{\omega_{2}}}_{b}\), which leads only to two bunching events, which are ignored if single photon detectors are used. The same analysis applies to the second Bell state.

Figure 4: (a) Input state of the Mach-Zehnder interferometer for \(\sigma=0.1\overline{\omega}\) and \(\kappa=0.1\). (b), (c) Probability distributions in the spatial ports \(b\) (resp. \(a\)) if we start with either the zero (blue) or one (red) logical state, or with the equal superposition of the zero and one logical states. Their overlap is not large enough to observe an interference effect. (d) Input state of the Mach-Zehnder interferometer for \(\sigma=0.2\overline{\omega}\) and \(\kappa=0.1\). (e), (f) Probability distributions in the spatial ports \(b\) (resp. \(a\)) if we start with either the zero (blue) or one (red) logical state, or with the equal superposition of the zero and one logical states. Their overlap is large enough to observe an interference effect and a distortion of the state, which leaks into the incorrect spatial port.
The third and fourth Bell states are transformed as:
\[\frac{N_{\text{EPR}}}{2}[\pm\ket{\omega_{1}\tilde{\omega_{2}}}_{ a^{\prime}}\mp\ket{\omega_{1}\tilde{\omega_{2}}}_{ab^{\prime}}\pm\ket{\omega_{1} \tilde{\omega_{2}}}_{ba^{\prime}}\mp\ket{\omega_{1}\tilde{\omega_{2}}}_{bb^{ \prime}}\] \[+\ket{\omega_{2}\tilde{\omega_{1}}}_{a^{\prime}a}-\ket{\omega_{2} \tilde{\omega_{1}}}_{a^{\prime}b}+\ket{\omega_{2}\tilde{\omega_{1}}}_{b^{ \prime}a}-\ket{\omega_{2}\tilde{\omega_{1}}}_{b^{\prime}b}]. \tag{28}\]
The use of frequency filters is again imperative, since the measurement of a filtered coincidence between \(a\) and \(a^{\prime}\) (or \(b\) and \(b^{\prime}\)) ensures that the quantum state \(N_{\alpha\beta}(\beta\ket{\tilde{\omega_{1}}}+\alpha\ket{\tilde{\omega_{2}}})\) has been teleported. In the same way, the measurement of a coincidence between \(a\) and \(b^{\prime}\) (or \(b\) and \(a^{\prime}\)) teleports the state \(N_{\alpha\beta}(\beta\ket{\tilde{\omega_{1}}}-\alpha\ket{\tilde{\omega_{2}}})\). The information about which detectors have measured coincidences is sent to spatial port \(c\), and a product of Pauli gates must then be applied to recover the initial state of interest. Pauli matrices for the time-frequency cat states are frequency and temporal shift operations [1], which can be implemented by an electro-optical modulator [49; 50] and a delay line, respectively. The full optical scheme of the teleportation-based error correction protocol is represented in Fig. 5, along with an illustration of the effect of the error correction for qubit cat states.
The corresponding probability of each event is \(P=\frac{1}{8(1+e^{-\Delta^{2}/2\sigma^{2}})}\), and the overall probability of success of the teleportation is
\[P=\frac{1}{2(1+e^{-\Delta^{2}/2\sigma^{2}})}, \tag{29}\]
which is lower than 50 %, since only linear optics is used [61], and because the non-orthogonality of the encoding further decreases the probability of success. Note that the use of photon-number-resolving (PNR) detectors makes it possible not to discard the bunching events coming from the first and second Bell states. With such PNR detectors, the overall probability of success of the teleportation protocol is:
\[P_{\text{PNR}}=\frac{3}{4(1+e^{-\Delta^{2}/2\sigma^{2}})} \tag{30}\]
and thus increases the probability of success of the protocol despite the non-orthogonality of the states. Experimentally, a suitable choice of PNR detector could be the one described in [62]. A 3/4 probability of success has also been reached by using non-linear processes or ancilla entangled states [63; 64].
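To make the comparison between Eqs. (29) and (30) concrete, the following short sketch evaluates both success probabilities as a function of the peak separation; the particular values of \(\Delta/\sigma\) are arbitrary illustration points.

```python
import numpy as np

delta_over_sigma = np.array([1.0, 2.0, 4.0, 8.0])   # illustrative separations of the two logical peaks
overlap = np.exp(-delta_over_sigma ** 2 / 2.0)       # e^{-Delta^2 / (2 sigma^2)}
P = 1.0 / (2.0 * (1.0 + overlap))                    # Eq. (29): threshold single-photon detectors
P_pnr = 3.0 / (4.0 * (1.0 + overlap))                # Eq. (30): photon-number-resolving detectors
for r, p, q in zip(delta_over_sigma, P, P_pnr):
    print(f"Delta/sigma = {r:3.0f}:  P = {p:.3f},  P_PNR = {q:.3f}")
```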
We can formulate the previous protocol for time-frequency GKP states. If we employ an EPR state which is less noisy in both the temporal and frequency domains, the teleported state is corrected in both temporal and frequency variables at once. This is in contrast with the error correction protocol based on frequency entanglement described in Sec. II.4, where the same protocol has to be repeated twice to correct both variables. In Appendix B, we tackle the case of the teleportation error correction protocol when the spatial separation of the non-orthogonal qubit states is imperfect, discussing the special case of the time-frequency GKP state and the Bell measurement relying on the spatial separation of the two logical states described in Sec. III.2. The imperfect spatial separation affects both the efficiency and the fidelity of the teleportation protocol. The fidelity is always equal to one when the logical states are orthogonal, but this is not the case for non-orthogonal time-frequency qubit states. The use of frequency filters can eliminate detection events caused by imperfect spatial separation, but it comes at the cost of decreased brightness.
## V Conclusion
In this paper, we have analyzed the teleportation-based error correction protocol for two types of frequency qubit states: time-frequency cat states and time-frequency GKP states. This optical scheme has the same goal as a quantum relay: reducing the rate of wrong detections by decreasing the overlap of the two logical states composing the qubit. We have discussed the experimental realization of the Bell measurement for these two types of qubits. Discretizing a grid state into a qubit state by combining the even and odd peaks, rather than treating the state as a time-frequency qudit, is convenient because it simplifies the optical implementation of grid state manipulations. When the states are not infinitely frequency narrowed, the Bell measurement leads to wrong detections, as the spatial separation of the two logical states into two spatial ports is imperfect. This can be corrected by using frequency filters, at the cost of losing single photon detection events. To tackle this issue, the use of frequency-resolved detection and the fault-tolerance threshold defined in [52] could be valuable for avoiding the use of frequency filters. We have illustrated that our protocol can correct the errors of a qubit composed of two colors, but it could also be used to correct temporal errors, coming from broadening and dispersion, of a qubit composed of two Gaussians centered at two temporal bins (see Eq. (2)). In this context, it is important to study and evaluate the overall effectiveness of the teleportation protocol, as well as the final quality of the corrected state, given the level of accuracy in separating the two logical states.
## Acknowledgment
N. Fabre acknowledges useful discussions with Filip Rozpedek, Arne Keller and Perola Milman for the completion of this manuscript.
## Appendix A Spatial separation of a two-photon state
We show in this section that a two-photon input state can also be separated into its even and odd components in two distinct spatial ports. We consider an initial separable two-photon ideal time-frequency GKP state:
\[\left|0\tilde{0}\right\rangle_{aa}=\sum_{n,m\in\mathds{Z}^{2}}c_{2n}\tilde{c}_{2 m}\hat{a}^{\dagger}(2n\overline{\omega})\hat{a}^{\dagger}(2m\overline{\omega}) \left|\Omega\right\rangle. \tag{30}\]
The tilde notation indicates that the two states are not identical; one of them may be noisier than the other. After the first beam-splitter and the time-displacement operator, the wave function of the two-photon state is:
\[\frac{1}{2}\sum_{n,m\in\mathds{Z}^{2}}c_{2n}\tilde{c}_{2m}(\hat{ a}^{\dagger}_{\tau}(2n\overline{\omega})\hat{a}^{\dagger}_{\tau}(2m\overline{ \omega})+\hat{b}^{\dagger}(2n\overline{\omega})\hat{a}^{\dagger}_{\tau}(2m \overline{\omega})\\ +\hat{a}^{\dagger}_{\tau}(2n\overline{\omega})\hat{b}^{\dagger}( 2m\overline{\omega})+\hat{b}^{\dagger}(2m\overline{\omega})\hat{b}^{\dagger}(2 n\overline{\omega}))\left|\Omega\right\rangle. \tag{31}\]
where \(\hat{a}^{\dagger}_{\tau}(2n\overline{\omega})=e^{i2n\overline{\omega}\tau}\hat {a}^{\dagger}(2n\overline{\omega})\). The output wave function after the second beam-splitter is:
\[\frac{1}{4}\sum_{n,m\in\mathds{Z}^{2}}c_{2n}\tilde{c}_{2m}(\hat{ a}^{\dagger}_{\tau}(2n\overline{\omega})+\hat{b}^{\dagger}_{\tau}(2n\overline{ \omega}))(\hat{a}^{\dagger}_{\tau}(2m\overline{\omega})+\hat{b}^{\dagger}_{ \tau}(2m\overline{\omega}))\\ +(\hat{a}^{\dagger}_{\tau}(2n\overline{\omega})-\hat{b}^{\dagger }_{\tau}(2n\overline{\omega}))(\hat{a}^{\dagger}_{\tau}(2m\overline{\omega})+ \hat{b}^{\dagger}_{\tau}(2m\overline{\omega}))\\ +(\hat{a}^{\dagger}_{\tau}(2n\overline{\omega})+\hat{b}^{\dagger }_{\tau}(2n\overline{\omega}))(\hat{a}^{\dagger}(2m\overline{\omega})-\hat{b}^ {\dagger}(2m\overline{\omega}))\\ +(\hat{a}^{\dagger}(2n\overline{\omega})-\hat{b}^{\dagger}(2n \overline{\omega}))(\hat{a}^{\dagger}(2m\overline{\omega})-\hat{b}^{\dagger}(2 m\overline{\omega}))\left|\Omega\right\rangle. \tag{32}\]
Figure 5: Schematic of the teleportation-based error correction of frequency qubit states. The state to be teleported and corrected is located in spatial port A. An EPR frequency qubit states which is less noisy than the state of interest, is at spatial ports B and C. FQBS stands for frequency qubit beam-splitter. \(F_{0,1}\) are frequency filters with central frequency matching either the zero and one logical state, and with a frequency width equal to that of the EPR state. Depending on which detectors have measured coincidences, Pauli operations \(\hat{X},\hat{Z}\) must be performed to recover the state of interest. The frequency qubit states before and after the teleportation are represented. The zero logical (resp. one) state has a blue (resp. red) color.
We rearrange and post-select only the coincidence terms:
\[\frac{1}{4}\sum_{n,m\in\mathds{Z}^{2}} c_{2n}\tilde{c}_{2m}(\hat{a}_{\tau}^{\dagger}(2n\overline{\omega})+ \hat{a}^{\dagger}(2n\overline{\omega}))\hat{b}_{\tau}^{\dagger}(2m\overline{ \omega})\] \[-(\hat{a}_{\tau}^{\dagger}(2n\overline{\omega})+\hat{a}^{\dagger} (2n\overline{\omega}))\hat{b}^{\dagger}(2m\overline{\omega})\] \[+(\hat{a}_{\tau}^{\dagger}(2m\overline{\omega})+\hat{a}^{\dagger }(2m\overline{\omega}))\hat{b}_{\tau}^{\dagger}(2n\overline{\omega})\] \[-(\hat{a}_{\tau}^{\dagger}(2m\overline{\omega})+\hat{a}^{\dagger }(2m\overline{\omega}))\hat{b}^{\dagger}(2n\overline{\omega})\left|\Omega \right>. \tag{10}\]
Let us first consider the ideal case, where the spectral distribution is a Dirac distribution, \(G_{2n}(\omega)=\delta(\omega-2n\overline{\omega})\). We point out that \((\hat{a}_{\tau}^{\dagger}(2n\overline{\omega})+\hat{a}^{\dagger}(2n\overline{ \omega}))\left|\Omega\right\rangle=(e^{2in\overline{\omega}\tau}+1)\left|2n \overline{\omega}\right\rangle\); with \(\tau=\pi/\overline{\omega}\), this gives \((\hat{a}_{\tau}^{\dagger}(2n\overline{\omega})+\hat{a}^{\dagger}(2n\overline{ \omega}))\left|\Omega\right\rangle=2\left|2n\overline{\omega}\right\rangle\). We also have \((\hat{b}_{\tau}^{\dagger}(2m\overline{\omega})-\hat{b}^{\dagger}(2m\overline{ \omega}))\left|2n\overline{\omega}\right\rangle\left|\Omega\right\rangle=0\). One can verify that the other terms are zero as well. This means that there is no coincidence event, which is the desired outcome.
We now rearrange and post-select only the bunching terms:
\[\frac{1}{4}\sum_{n,m\in\mathds{Z}^{2}} c_{2n}\tilde{c}_{2m}(\hat{a}_{\tau}^{\dagger}(2n\overline{\omega}) \hat{a}_{\tau}^{\dagger}(2m\overline{\omega})+\hat{a}^{\dagger}(2n\overline{ \omega})\hat{a}_{\tau}^{\dagger}(2m\overline{\omega})\] \[+\hat{a}_{\tau}^{\dagger}(2n\overline{\omega})\hat{a}^{\dagger}(2m \overline{\omega})+\hat{a}^{\dagger}(2n\overline{\omega})\hat{a}^{\dagger}(2m \overline{\omega})\] \[+\hat{b}_{\tau}^{\dagger}(2n\overline{\omega})\hat{b}_{\tau}^{ \dagger}(2m\overline{\omega})-\hat{b}^{\dagger}(2n\overline{\omega})\hat{b}_{ \tau}^{\dagger}(2m\overline{\omega})\] \[+\hat{b}_{\tau}^{\dagger}(2n\overline{\omega})\hat{b}^{\dagger}(2 m\overline{\omega})-\hat{b}^{\dagger}(2n\overline{\omega})\hat{b}^{\dagger}(2m \overline{\omega}))\left|\Omega\right>. \tag{11}\]
In the ideal case, the terms in spatial port \(a\) remain, and those in spatial port \(b\) interfere destructively.
For a time-frequency GKP state with a finite bandwidth, the destructive (or constructive) interference is no longer perfect, and the even and odd components of the comb are not perfectly separated. We analyze what happens to the state \(\left|0\tilde{1}\right>_{aa}\). Post-selecting on the coincidences we have:

\[\left|0\tilde{1}\right>_{aa}\rightarrow\left|0_{G}\right>_{a}\left|\tilde{1}_{G }\right>_{a^{\prime}}+\left|0_{E}\right>_{a^{\prime}}\left|\tilde{1}_{E} \right>_{a} \tag{12}\]
where we have defined:
\[\left|0_{G}\right>_{a}=\frac{N_{e}}{2}\sum_{n\in\mathds{Z}}c_{2n}\int d\omega (e^{i\omega\tau}+1)G_{2n}^{\sigma}(\omega)\left|\omega\right>_{a} \tag{13}\]
\[\left|\tilde{1}_{G}\right>_{a^{\prime}}=\frac{N_{o}}{2}\sum_{n\in\mathds{Z}}\tilde{c}_ {2n+1}\int d\omega(e^{i\omega\tau}-1)G_{2n+1}^{\tilde{\sigma}}(\omega)\left| \omega\right>_{a^{\prime}} \tag{14}\]
\[\left|\tilde{1}_{E}\right>_{a}=\frac{N_{o}}{2}\sum_{n\in\mathds{Z}}\tilde{c}_{2n+1} \int d\omega(e^{i\omega\tau}+1)G_{2n+1}^{\tilde{\sigma}}(\omega)\left|\omega \right>_{a} \tag{15}\]
\[\left|0_{E}\right>_{a^{\prime}}=\frac{N_{e}}{2}\sum_{n\in\mathds{Z}}c_{2n}\int d \omega(e^{i\omega\tau}-1)G_{2n}^{\sigma}(\omega)\left|\omega\right>_{a^{\prime }}. \tag{16}\]
The post-selection on coincidences retains only those events where the even and odd components are either both in the correct spatial ports (designated by \(G\)) or both in the incorrect ones (designated by \(E\)). It should be noted that the output state is no longer in the GKP subspace, which results in detection errors. These errors can be corrected through the use of frequency filters, or by using frequency-resolved detection and setting a frequency threshold width to accept only certain events [52]. Additionally, the imperfect spatial separation described in Eq. (12) can also be interpreted as an attack extracting information about the quantum state of interest in a quantum communication protocol.
## Appendix B Teleportation-based error correction protocol with physical time-frequency GKP state
In the following, we will employ the coherent picture of GKP states [53]. The GKP qubit defined by Eq. (12) is composed of an envelope of width \(\kappa\) and peaks of width \(\sigma\) (resp. \(\kappa_{1},\sigma_{1}\)), while the frequency widths of the EPR state will be denoted \(\tilde{\kappa}\) and \(\tilde{\sigma}\).
As has been noted, when the time-frequency GKP state has a limited bandwidth, the frequency qubit beam-splitter cannot perfectly separate the odd and even components of the comb. This results in part of the state leaking into the wrong spatial port and not conforming to a GKP state; furthermore, the state in the correct port is also distorted. To address these two challenges, it is necessary to employ frequency filters that enable projection back onto the GKP subspace and eliminate the undesirable components. Given that the state being teleported and the EPR state have different frequency widths, a frequency filter matching the reference EPR state is placed prior to detection, which fixes the frequency bins. The projector modeling the frequency filtering process has the following form:
\[\hat{\Pi}_{a}=N_{e}^{2}(\tilde{\sigma},\tilde{\kappa})\sum_{n\in\mathds{Z}}\tilde{c}_{2n}\int d \omega\,G_{2n}^{\tilde{\sigma}}(\omega)\left|\omega\right>\left<\omega\right| \tag{17}\]
\[\hat{\Pi}_{a^{\prime}}=N_{o}^{2}(\tilde{\sigma},\tilde{\kappa})\sum_{n\in\mathds{Z}}\tilde{c}_{2n+ 1}\int d\omega\,G_{2n+1}^{\tilde{\sigma}}(\omega)\left|\omega\right>\left<\omega\right| \tag{18}\]
The normalization constant is found by using \(\hat{\Pi}^{2}=\mathds{I}\) and \(\mathrm{Tr}(\hat{\Pi}^{2})=1\), \(N_{e}^{2}(\tilde{\sigma},\tilde{\kappa})=\sqrt{2\pi}/\tilde{\sigma}\sum_{n} \left|\tilde{c}_{2n}\right|^{2}\). In the large comb approximation, we have \(N_{e}(\tilde{\sigma},\tilde{\kappa})=N_{o}(\tilde{\sigma},\tilde{\kappa})\).
The coincidence destructive measurement is described by the positive operator-valued measure:
\[\hat{\Pi}_{a,a^{\prime}}=\hat{\Pi}_{a}\hat{\Pi}_{a^{\prime}}\otimes\left|1 \right>\left<1\right|. \tag{19}\]
By assuming that the state is pure, the probability of coincidence in the spatial ports \(a,a^{\prime}\), namely \(\mathrm{Tr}(\hat{\Pi}_{a,a^{\prime}}\ket{\psi}\bra{\psi}\hat{\Pi}_{a,a^{\prime}}^{\dagger})\), is:
\[P_{aa^{\prime}}(0,1)=\bigg{|}\frac{1}{2\sqrt{2}}\bigg{|}^{2}\big{|}a_{\sigma\bar{\sigma}}^{01}+b_{\sigma\bar{\sigma}}^{01}\big{|}^{2}\int d\omega\Big{(}|\alpha|^{2}\big{|}\big{\langle}\omega|\tilde{0}\big{\rangle}\big{|}^{2}+|\beta|^{2}\big{|}\big{\langle}\omega|\tilde{1}\big{\rangle}\big{|}^{2}+2\mathrm{Re}\big{(}\alpha^{*}\beta\langle\tilde{0}|\omega\rangle\langle\omega|\tilde{1}\rangle\big{)}\Big{)} \tag{24}\]
where we have used \(\big{|}a_{\sigma\bar{\sigma}}^{01}+b_{\sigma\bar{\sigma}}^{01}\big{|}=\big{|}a_ {\sigma\bar{\sigma}}^{10}+b_{\sigma\bar{\sigma}}^{10}\big{|}\), which is shown afterward. For an EPR state with a sufficiently narrow distribution, we assume that \(\big{\langle}\tilde{0}|\tilde{1}\big{\rangle}=0\) and \(\int d\omega\big{|}\big{\langle}\omega|\tilde{0}\big{\rangle}\big{|}^{2}=\int d \omega\big{|}\big{\langle}\omega|\tilde{1}\big{\rangle}\big{|}^{2}=1\). These two conditions are important, since the probability then does not depend on \(\alpha\) and \(\beta\),
\[P_{aa^{\prime}}(0,1)=\bigg{|}\frac{1}{2\sqrt{2}}\bigg{|}^{2}\big{|}a_{\sigma \bar{\sigma}}^{01}+b_{\sigma\bar{\sigma}}^{01}\big{|}^{2} \tag{25}\]
since otherwise, information about the quantum state could be extracted during the measurement.
The wavefunction of the state after the detection is: \(\ket{\psi}_{c}=\hat{\Pi}_{i,j}\ket{\psi}/\mathrm{Tr}(\hat{\Pi}_{i,j}\ket{\psi} \bra{\psi}\hat{\Pi}_{i,j}^{\dagger})\), where \(i,j=a,b;a^{\prime},b^{\prime}\). When coincidence is detected at the spatial port \(a\) and \(a^{\prime}\), the post-selected state is:
\[\ket{\psi}_{c}=\frac{1}{|a_{\sigma\bar{\sigma}}^{01}+b_{\sigma \bar{\sigma}}^{01}|}(\alpha(a_{0,\sigma}^{1,\bar{\sigma}}+b_{0,\bar{\sigma}}^{ 1,\sigma})\ket{\bar{0}}_{c}\\ +\beta(a_{0,\bar{\sigma}}^{1,\sigma}+b_{0,\sigma}^{1,\bar{\sigma} })\ket{\bar{1}}_{c}). \tag{26}\]
The expressions are similar for the other coincidence events in the other spatial ports, and extra Pauli operations have to be performed. The coefficients \(a_{\sigma\sigma}^{10}\) and \(b_{\sigma\bar{\sigma}}^{10}\) have the expressions:
\[a_{\sigma\sigma}^{10}=\frac{1}{4}N_{e}(\sigma,\kappa)N_{o}( \tilde{\sigma},\tilde{\kappa})N_{e}(\tilde{\sigma},\tilde{\kappa})N_{o}( \tilde{\sigma},\tilde{\kappa})\\ \times\sum_{n,m,k,k^{\prime}}c_{2n}\tilde{c}_{2m+1}\tilde{c}_{2k} \tilde{c}_{2k^{\prime}+1}\int d\omega(e^{i\omega\tau}+1)G_{2n}^{\sigma}(\omega )G_{2k}^{\tilde{\sigma}}(\omega)\\ \times\int d\omega(e^{i\omega\tau}-1)G_{2m+1}^{\tilde{\sigma}}( \omega))G_{2k^{\prime}+1}^{\tilde{\sigma}}(\omega) \tag{27}\]
\[b_{\sigma\bar{\sigma}}^{10}=\frac{1}{4}N_{e}(\tilde{\sigma},\tilde{\kappa})N_{ o}(\sigma,\kappa)N_{e}(\tilde{\sigma},\tilde{\kappa})N_{o}(\tilde{\sigma}, \tilde{\kappa})\\ \times\sum_{n,m,k,k^{\prime}}\tilde{c}_{2n}c_{2m+1}\tilde{c}_{2k} \tilde{c}_{2k^{\prime}+1}\int d\omega(e^{i\omega\tau}+1)G_{2m+1}^{\sigma}( \omega)G_{2k}^{\tilde{\sigma}}(\omega)\\ \times\int d\omega(e^{i\omega\tau}-1)G_{2n}^{\tilde{\sigma}}( \omega))G_{2k^{\prime}+1}^{\tilde{\sigma}}(\omega). \tag{28}\]
The \(b\) coefficient contains odd (resp. even) terms in a spatial port where the frequency filter is centered at even (resp. odd) frequencies. In the ideal case, namely if both the EPR state and the teleported state are ideal time-frequency GKP states, we recall that \(a_{\sigma\bar{\sigma}}^{01}=2\) and \(b_{\sigma\bar{\sigma}}^{01}=0\). After evaluating the integrals, and assuming that \(2k=2n\), meaning that the temporal spreading only reaches the next bin, we find that:
\[a_{\tilde{\sigma}\sigma}^{10}=\frac{\sqrt{\tilde{\sigma}\tilde{\sigma}}\sqrt{ \sigma\tilde{\sigma}}}{4\sqrt{\tilde{\sigma}^{2}+\sigma^{2}}\sqrt{\tilde{ \sigma}^{2}+\tilde{\sigma}^{2}}}(e^{-\frac{\pi^{2}a^{2}}{32^{2}}}+1)(-e^{- \frac{\pi^{2}a^{2}}{32^{2}}}-1) \tag{29}\]
\[b_{\sigma\bar{\sigma}}^{10}=\frac{\sqrt{\tilde{\sigma}\sigma}\sqrt{\sigma\tilde{\sigma}}}{4\sqrt{\tilde{\sigma}^{2}+\sigma^{2}}\sqrt{\sigma^{2}+\tilde{\sigma}^{2}}}\\ \times(e^{-\frac{\pi^{2}a^{2}}{22^{2}}}e^{-\frac{\pi^{2}}{2(\tilde{ \sigma}^{2}+\tilde{\sigma}^{2})}}e^{i\frac{\pi^{2}}{(\sigma^{2}+\tilde{\sigma }^{2})}}+1)\\ \times(e^{-\frac{\pi^{2}a^{2}}{22^{2}}}e^{-\frac{\pi^{2}}{2( \tilde{\sigma}^{2}+\tilde{\sigma}^{2})}}e^{i\frac{\pi^{2}}{(\sigma^{2}+\tilde{ \sigma}^{2})}}-1) \tag{30}\]
where we have defined \(\alpha^{2}=\sigma^{2}\tilde{\sigma}^{2}/(\sigma^{2}+\tilde{\sigma}^{2})\). While \(a_{\sigma\bar{\sigma}}^{10}\) is real, \(b_{\sigma\bar{\sigma}}^{10}\) is a complex quantity. From these expressions, we point out that \(a_{0,\sigma}^{1,\bar{\sigma}}=a_{1,\sigma}^{0,\bar{\sigma}}\) and \(b_{0,\bar{\sigma}}^{1,\sigma}=b_{1,\bar{\sigma}}^{0,\sigma}\). When the width of the frequency filter \(\tilde{\sigma}\to 0\), we have \(a_{\sigma\bar{\sigma}}^{01}=2\) and \(b_{\sigma\bar{\sigma}}^{01}=0\), as in the ideal case; this leads to a high fidelity of the teleported state, but at the cost of losing many photons. The decrease in fidelity caused by the imperfect spatial separation of the two logical states is reminiscent of the fact that the state possesses continuous variables.
|
2306.05766 | Data-Link: High Fidelity Manufacturing Datasets for Model2Real Transfer
under Industrial Settings | High-fidelity datasets play a pivotal role in imbuing simulators with
realism, enabling the benchmarking of various state-of-the-art deep inference
models. These models are particularly instrumental in tasks such as semantic
segmentation, classification, and localization. This study showcases the
efficacy of a customized manufacturing dataset comprising 60 classes in the
creation of a high-fidelity digital twin of a robotic manipulation environment.
By leveraging the concept of transfer learning, different 6D pose estimation
models are trained within the simulated environment using domain randomization
and subsequently tested on real-world objects to assess domain adaptation. To
ascertain the effectiveness and realism of the created data-set, pose accuracy
and mean absolute error (MAE) metrics are reported to quantify the model2real
gap. | Sunny Katyara, Mohammad Mujtahid, Court Edmondson | 2023-06-09T09:04:35Z | http://arxiv.org/abs/2306.05766v1 | # Data-Link: High Fidelity Manufacturing Datasets for Model2Real Transfer under Industrial Settings
###### Abstract
High-fidelity datasets play a pivotal role in imbuing simulators with realism, enabling the benchmarking of various state-of-the-art deep inference models. These models are particularly instrumental in tasks such as semantic segmentation, classification, and localization. This study showcases the efficacy of a customized manufacturing dataset comprising 60 classes in the creation of a high-fidelity digital twin of a robotic manipulation environment. By leveraging the concept of transfer learning, different 6D pose estimation models are trained within the simulated environment using domain randomization and subsequently tested on real-world objects to assess domain adaptation. To ascertain the effectiveness and realism of the created data-set, pose accuracy and mean absolute error (MAE) metrics are reported to quantify the model2real gap.
## I Introduction
We are living in a world where vast amounts of data hold the power to revolutionize industries and shape the future of manufacturing. Datasets play a pivotal role in virtually every field within today's digital world, enabling data-driven decision-making. In the manufacturing industry, datasets assume a critical position, offering invaluable insights to enhance product quality, optimize production processes, streamline supply chains, and achieve heightened operational efficiencies. However, the creation of datasets is a laborious and time-consuming endeavour, and the acquisition of high-quality, consistent, scalable, and adaptable data for process automation and optimization, particularly in the realms of robotic grasping and manipulation, assumes utmost significance.
The process of annotating 6D poses in datasets for robotic grasping and manipulation represents a labour-intensive endeavour, surpassing the challenges encountered in 2D image labelling. To mitigate this challenge, a viable solution lies in the utilization of synthetic data, which offers meticulously annotated samples at a low cost for training pose estimation models [1][2]. However, the substantial disparities between the synthetic (source) and real (target) domains result in sub-optimal performance. To bridge this gap, a promising approach emerges, combining domain randomization and photo-realistic synthetic data [3][4], aiming to address the domain shift between the source and target domains. In our study, we adopt the real-sim-real transfer method as a means of domain adaptation to overcome sensor noise and realism issues.
While certain generic datasets, such as YCB Videos [5], MVTech AD [6], and Dex-Net 2.0 [7], have been employed for training models for semantic segmentation, classification, and localization, their limitations become apparent in terms of restricted object variety and the absence of a real-world manufacturing context. Consequently, our research proposes the creation of an extensive, high-fidelity dataset encompassing a range of 3D objects commonly employed within the manufacturing industry. As illustrated in Fig. 1, we discretize the captured 3D object, acquired using a high-resolution camera, into descriptive components comprising texture, material, shape, and inertial dynamics. This process is facilitated by a customized neural network known as DiscNet, which accepts RGBD data and CAD models of the object of interest as inputs, enabling the extraction of the desired object features. The proposed DiscNet architecture incorporates two distinct neural network components: one for style extraction, encompassing texture and material, and the other for shape and inertial parameters. These extracted features are subsequently transferred to synthetic models within the digital twin environment, where they are annotated using a standard bounding box annotator [8]. The resulting synthetic annotated dataset is then utilized for training various pose estimation networks, including PoseCNN [5], PVNet [9], and DOPE [10], and assessing their performance within real-world settings, thereby providing a benchmark for evaluating the realism and effectiveness of the generated dataset for sim2real transfer.
In summary, this research makes the following contributions:
* We propose a robust pipeline that facilitates extraction of desired rendering and physics features from real objects, enabling their seamless transfer to synthetic models. This approach allows for low-cost augmentation and creation of a synthetic data-set with comprehensive 6D annotations, operating under the paradigm of domain adaptation.
* Use the high-fidelity synthetic dataset to train different PoseNets within the designed digital twin under domain randomization, and eventually evaluate their performance on a real-world setup using pose accuracy and mean absolute error (MAE) metrics to quantify the domain gap.

Fig. 1: Data-Link pipeline for creating a high-fidelity dataset to bridge the domain gap for model2real transfer under industrial conditions.
## II Methodology and Discussion
The primary objective of this research is to leverage 6D pose estimation models trained on low-cost synthetic data in real-world settings, eliminating the need for laborious fine-tuning or expensive model retraining. The proposed methodology encompasses the extraction of style features, including texture and material properties, using StyleNet. StyleNet is an autoencoder architecture with an additional layer dedicated to semantic understanding of texture and material properties derived from the latent space representation. In addition, the shape and inertial parameters of the 3D objects of interest are extracted using PhysNet, which combines a modified PointNet [11] with a customized NeRF architecture [12]. These two networks are integrated into a composite model architecture named DiscNet, enabling simultaneous inference. The StyleNet branch takes RGB data as input, while the PhysNet branch processes point cloud data and CAD models of candidate objects, with the composite model (DiscNet) yielding descriptors encompassing material, texture, shape, and inertial properties.
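A minimal sketch of such a composite model, assuming PyTorch, is given below; the layer sizes, the CNN encoder of StyleNet, and the PointNet-style pooling of PhysNet are placeholder choices, and the NeRF and CAD-model branches of the actual PhysNet are omitted for brevity.

```python
import torch
import torch.nn as nn

class StyleNet(nn.Module):
    """Illustrative autoencoder head: RGB crop -> latent -> texture/material descriptor."""
    def __init__(self, latent_dim=128, style_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, latent_dim))
        self.style_head = nn.Linear(latent_dim, style_dim)   # "semantic" texture/material layer

    def forward(self, rgb):
        return self.style_head(self.encoder(rgb))

class PhysNet(nn.Module):
    """Illustrative PointNet-style head: point cloud (B, N, 3) -> shape/inertial descriptor."""
    def __init__(self, phys_dim=32):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128), nn.ReLU())
        self.phys_head = nn.Linear(128, phys_dim)             # shape / inertial descriptor

    def forward(self, points):
        feats = self.point_mlp(points).max(dim=1).values      # permutation-invariant pooling over points
        return self.phys_head(feats)

class DiscNet(nn.Module):
    """Composite model: concatenates style and physics descriptors for one object."""
    def __init__(self):
        super().__init__()
        self.style, self.phys = StyleNet(), PhysNet()

    def forward(self, rgb, points):
        return torch.cat([self.style(rgb), self.phys(points)], dim=-1)

descriptor = DiscNet()(torch.rand(1, 3, 128, 128), torch.rand(1, 1024, 3))
print(descriptor.shape)   # torch.Size([1, 64])
```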
Using the extracted descriptors from the real world, virtual objects are developed within the Unity 3D engine to establish a digital twin environment for manipulation scenarios, serving as a domain adaptation technique with a specific focus on bin-picking tasks. To ensure clarity and comprehensiveness, we have modelled 60 distinct manufacturing object classes, ranging from motors and gears to wrenches and Allen keys. These models form the basis for generating a synthetic 6D annotated dataset, incorporating domain randomizations. The randomizations encompass variations not only in rendering-related factors, such as lighting, object positions, and camera locations, but also in physics parameters, including mass, friction, inertias, and other relevant factors. The dataset consists of 3000 training and 1800 test images, each with dimensions of 640 \(\times\) 480 pixels. These images are utilized to train three distinct PoseNets: PoseCNN, PVNet, and DOPE. Operating on RGB images as input, these PoseNets estimate 6D poses of objects of interest within the scene, leveraging 2D to 3D correspondences. The estimated poses from these models are subsequently utilized by the Moveit API for grasp planning, with evaluation conducted in both virtual and real setups.
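A minimal sketch of the kind of scene randomization described above is shown below; the parameter names and ranges are illustrative placeholders, not the values used for the dataset.

```python
import random

# Illustrative randomization ranges over rendering and physics parameters (placeholder values).
RANDOMIZATION = {
    "light_intensity":   (0.4, 1.6),
    "camera_distance_m": (0.5, 1.2),
    "object_yaw_deg":    (0.0, 360.0),
    "mass_scale":        (0.8, 1.2),
    "friction":          (0.3, 0.9),
}

def sample_scene(seed=None):
    """Draw one randomized scene configuration for synthetic data generation."""
    rng = random.Random(seed)
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION.items()}

print(sample_scene(seed=0))
```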
The experimental findings indicate that despite being trained on synthetic data, all the networks exhibit comparable performance levels during inference for various objects within both the digital twin and real setups. This observation is substantiated by the outcomes presented in Fig. 2, illustrating the pose accuracy and MAE values. However, it is important to note that the grasp planner requires fine-tuning when adapting to real setups. This necessity arises due to unmodeled dynamics, including object slip, external disturbances, stochasticity, and inherent limitations in the planner's adaptability.
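For reference, the two reported metrics can be computed as in the following sketch; the ADD-style pose error and the 10%-of-diameter acceptance threshold follow a common convention and are assumptions of this sketch rather than details stated in the paper.

```python
import numpy as np

def add_error(points, R_gt, t_gt, R_est, t_est):
    """ADD: mean distance between model points under the ground-truth and estimated poses."""
    return np.linalg.norm((points @ R_gt.T + t_gt) - (points @ R_est.T + t_est), axis=1).mean()

def pose_accuracy(errors, diameters, threshold=0.10):
    """Fraction of estimates whose ADD error is below `threshold` times the object diameter."""
    return float(np.mean(np.asarray(errors) < threshold * np.asarray(diameters)))

def mae(estimates, targets):
    return float(np.mean(np.abs(np.asarray(estimates) - np.asarray(targets))))

# Toy usage with a random point set and a slightly perturbed pose
pts = np.random.rand(500, 3)
R = np.eye(3)
err = add_error(pts, R, np.zeros(3), R, np.array([0.002, 0.0, 0.0]))
print(pose_accuracy([err], [0.1]), mae([err], [0.0]))
```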
## III Conclusion
This research proposed a novel methodology for developing a high-fidelity manufacturing dataset using domain adaptation and domain randomization strategies. The methodology leveraged digital twins of manipulation scenarios to eliminate the need for laborious manual 6D annotations. Our approach encompassed not only the extraction of style features but also the incorporation of inertial dynamics, aiming to achieve a heightened level of realism and physical plausibility in rendering and object interactions. The evaluation results demonstrated that our method effectively bridges the domain gap, as evidenced by lower MAE values and higher pose accuracy rates in sim2real tests. Importantly, the findings indicated that the proposed approach not only demonstrated superior performance for larger objects but also exhibited notable adaptability to smaller components, even when employing low-cost camera sensors.
This research addresses the reality gap in perceptual pipelines by enabling them to adapt to diverse environments without the need for fine-tuning. Nonetheless, it is important to note that with domain changes, the motion planners still require tuning. In future endeavors, our objective is to extend this work by developing a control pipeline that harnesses the capabilities of reinforcement learning algorithms. The overarching goal is to devise manipulation sequences that are both robust and optimal, thereby facilitating enhanced adaptability and efficiency in manipulation tasks [13].
Fig. 2: Performance evaluation of PoseNets trained using synthetic data and tested on real-world object instances (a) Pose accuracy is higher than 90%, (b) MAE is less than 5% threshold.
## Acknowledgments
Authors would like to extend their sincere gratitude to the Industrial Steering Board (ISB) at IMR, Ireland, for generously providing the funding required to undertake this significant research endeavor. This study aims to address the gap between AI-driven academic research outcomes and the prevailing traditional industrial issues.
|
2302.07486 | Rees algebra of maximal order Pfaffians and its diagonal subalgebras | Given a skew-symmetric matrix $X$, the Pfaffian of $X$ is defined as the
square root of the determinant of $X$. In this article, we give the explicit
defining equations of the Rees algebra of a Pfaffian ideal $I$ generated by the
maximal order Pfaffians of a generic skew-symmetric matrix. We further prove
that all diagonal subalgebras of the corresponding Rees algebra of $I$ are
Koszul. We also look at Rees algebras of Pfaffian ideals of linear type
associated with certain sparse skew-symmetric matrices. In particular, we
consider the tridiagonal matrices and identify the corresponding Pfaffian
ideals to be of Gr\"obner linear type and as the vertex cover ideals of unmixed
bipartite graphs. As an application of our results, we conclude that all their
ordinary and symbolic powers have linear quotients. | Neeraj Kumar, Chitra Venugopal | 2023-02-15T06:21:27Z | http://arxiv.org/abs/2302.07486v2 | # Rees algebra of maximal order Pfaffians and its diagonal subalgebras
###### Abstract.
Given a skew-symmetric matrix \(X\), the Pfaffian of \(X\) is defined as the square root of the determinant of \(X\). In this article, we give the explicit defining equations of the Rees algebra of the Pfaffian ideal \(I\) generated by the maximal order Pfaffians of a generic skew-symmetric matrix. We further prove that all diagonal subalgebras of the corresponding Rees algebra of \(I\) are Koszul. We also look at the Rees algebras of Pfaffian ideals of linear type associated with certain sparse skew-symmetric matrices. In particular, we consider tridiagonal matrices and identify the corresponding Pfaffian ideals to be of Gröbner linear type and as the vertex cover ideals of unmixed bipartite graphs. As an application of our results, we conclude that all their powers have linear resolutions.
Key words and phrases: Pfaffians, Koszul algebra, Cohen-Macaulay, Diagonal subalgebra. 2020 Mathematics Subject Classification: Primary 13C40, 13D02; Secondary 13H10.
## Introduction
Let \(A\) be a graded Noetherian ring and \(I\) a homogeneous ideal of \(A\). Then the Rees algebra of \(I\), denoted by \(\mathcal{R}(I)\), is the bigraded algebra \(\oplus_{i\geq 0}I^{i}\). Rees algebras form a principal class of bigraded algebras and contain a great deal of information about the powers of the ideal \(I\). Moreover, geometrically, the Rees algebra corresponds to the blowup of \(\operatorname{Spec}(A)\) along the variety of \(I\). In general, for a homogeneous ideal \(I\) of a ring \(A\), finding the explicit defining equations of the Rees algebra is not easy. Some study has been done for certain classes of ideals such as perfect ideals of grade \(2\)[37, 38, 7, 23], perfect Gorenstein ideals of grade \(3\)[33, 37], determinantal ideals [8, 9], etc. An ideal \(I\) of a ring \(A\) is said to be of _linear type_ if the Rees algebra of \(I\) is isomorphic to its symmetric algebra. We are interested in the study of Rees algebras of ideals generated by \(d\)-sequences (a notion introduced by Huneke in [24, 25]), as they form a class of ideals of linear type. The motivation to explore Rees algebras corresponding to \(d\)-sequences comes from the analogous study in [15, 41, 12, 31] for ideals generated by regular sequences.
In this article, we look at a particular class of ideals called the Pfaffian ideals, which come corresponding to skew-symmetric matrices. Let \(X\) be a skew-symmetric matrix and let \(\det X\) denote its determinant. Then _Pfaffian of \(X\)_ denoted by \(\operatorname{Pf}(X)\) is defined as the square root of \(\det X\) i.e, \(\operatorname{Pf}(X)^{2}=\det X\) (cf. [3]). The _Pfaffian ideal of \(X\)_ denoted by \(\operatorname{Pf}_{n-1}(X)\) is the ideal obtained by considering Pfaffians of submatrices of order \(n-1\) obtained by deleting a row and the corresponding column of the matrix \(X\) (cf. [11]). In [11], Buchsbaum and Eisenbud proved that every Gorenstein ideal of codimension \(3\) in a commutative Noetherian ring \(A\) can be identified as the ideal of Pfaffians of order \((n-1)\) of some \(n\times n\) alternating matrix of rank \(n-1\). Under some assumptions on the entries of the skew-symmetric matrix, Pfaffian ideals are found to be of linear type (cf. [5, 18]). We attempt to study the diagonal subalgebras of the corresponding Rees algebra.
The notion of diagonal subalgebras was introduced by Simis, Trung and Valla in [41] generalizing the concept of Segre product of graded algebras. The diagonal subalgebras of certain classes of equigenerated homogeneous ideals of a standard graded polynomial ring can be viewed as the homogeneous coordinate ring of some rational varieties embedded in projective space (cf. [41]). It is also known that for \(c,e\in\mathbb{N}\) and a homogeneous ideal \(I\) of \(B=K[x_{1},\ldots,x_{n}]\), if \(I^{e}\) is generated by forms of degree \(d\leq c-1\), then \(K[(I^{e})_{c}]\) can
be identified as a diagonal subalgebra of Rees algebra in a natural way [41, 15]. One of the key challenges in the study of diagonal subalgebras is to find suitable conditions on a bigraded algebra \(R\) such that certain algebraic properties of \(R\) are inherited by \(R_{\Delta}\) (Definition 1.2).
We are interested in looking at the Koszulness and Cohen-Macaulay property of the diagonals of Rees algebras of equigenerated homogeneous ideals. There is a lot of literature on these properties of diagonals of bigraded algebras (cf. [41, 15, 34, 31, 1]).
A standard graded \(K\)-algebra \(A\) is _Koszul_ if the non-zero entries of matrices representing the maps in the minimal free resolution of \(K\) are homogeneous of degree \(1\). There are several articles that have discussed the Koszul property of diagonal subalgebras of the Rees algebras of ideals (cf. [15, 12, 21, 6, 31, 1]). Explicit lower bounds are known for residual intersections ([1]) and when the ideals are complete intersections [15, 31]. More generally, in [15] it has been proved that for any bigraded \(K\)-algebra \(R\), \(R_{\Delta}\) is Koszul for \(c,e\gg 0\).
In [41], for \(\Delta=(1,1)\), the authors discuss some classes of ideals for which the corresponding diagonal subalgebra of the Rees algebra is Cohen-Macaulay. Complete intersections and certain classes of straightening closed ideals in algebras with straightening law, like determinantal ideals generated by the maximal minors of a generic matrix, are some of the ideals looked into. In [15], Conca, Herzog, Trung and Valla solve an open problem posed in [41] regarding the conditions on \((c,e)\) which guarantee the Cohen-Macaulay property of \(K[(I^{e})_{c}]\) when \(I\subset B\) is a homogeneous complete intersection minimally generated by \(r\) forms of degree \(d_{1},\ldots,d_{r}\). For some classes of perfect ideals of height two as well, certain bounds on \(c,e\) are known for which the diagonals of the corresponding Rees algebras are Cohen-Macaulay (cf. [1]). In general, it is known that if a standard bigraded ring \(R\) is Cohen-Macaulay, then \(R_{\Delta}\) is Cohen-Macaulay for large integers \(c\gg e>0\) (cf. [34]).
In this article, we primarily look at equigenerated Pfaffian ideals so that the associated Rees algebra is standard bigraded, thus forming a standard graded K-algebra with respect to total degree. In particular, it makes sense to look at the Koszul property of such graded K-algebras.
Some of the important results discussed in this article are the following.
1. Let \(I=\)Pf\({}_{n-1}(X)\) where \(X\) is a generic skew-symmetric matrix of odd order \(n=2r+1\), \(r\in\mathbb{N}\) (defined in Section 1). Then we prove the following. 1. The \(c\)-th Veronese subalgebra of the corresponding Pfaffian ring is Koszul for \(c\geq n/4\). 2. \(\mathcal{R}(I)\) is generated by quadrics but need not be Koszul always. 3. All diagonals of \(\mathcal{R}(I)\) are Koszul.
2. For the Pfaffian ideal \(I\) coming from a tridiagonal matrix of the form mentioned in Theorem 3.2, \(\mathcal{R}(I)_{\Delta}\) is Koszul and Cohen-Macaulay for all \(\Delta\). In this case, Pf\({}_{n-1}(X)\) is an equigenerated monomial ideal of Grobner linear type (defined in Section 3), which can be identified as the vertex cover ideal of an unmixed bipartite graph, thereby giving information about the linear resolution of powers of the vertex cover ideal.
3. For \(I=\)Pf\({}_{n-1}(X)\) where \(X\) has the form given in Proposition 3.10, \(\mathcal{R}(I)_{\Delta}\) is Koszul and Cohen-Macaulay for all \(\Delta\).
We focus mainly on the maximal order Pfaffians, as in the non-maximal case, it is not always true that the defining ideal of Rees algebra is generated by quadrics, a property which is essential for the study of Koszulness of Rees algebra.
The reader may be familiar with some of the observations made in this article. However, in order to maintain the study self-contained, we reproduce some arguments and independently establish the results. All the computations in this article are done using Macaulay2 ([22]).
## 1. Preliminaries
We consider \(K\) to be a field of characteristic zero throughout the article. Consider the skew-symmetric matrix \(X=\begin{bmatrix}0&x_{12}&\ldots&x_{1\,n}\\ -x_{12}&0&\ldots&x_{2\,n}\\ \vdots&\vdots&&\vdots\\ -x_{1\,n}&-x_{2\,n}&\ldots&0\end{bmatrix}\) of odd order \(n=2r+1\), \(r\in\mathbb{N}\cup\{0\}\), where the entries \(x_{ij}\), \(1\leq i<j\leq n\), are indeterminates. This is the form of a generic skew-symmetric matrix of odd order. Let \(B=K[X]\), where \(K[X]\) is the polynomial ring whose indeterminates are the non-zero entries of \(X\), and let \(I=\mathrm{Pf}_{n-1}(X)=\mathrm{Pf}_{2r}(X)\). Then \(B/I\) is said to be a _Pfaffian ring_.
**Note 1.1**.: Let \(X\) be a skew-symmetric matrix of odd order. Then by \(\mathrm{Pf}_{I}(X)\) we mean the Pfaffian of the submatrix of \(X\) obtained by removing \(I^{th}\) row and corresponding column.
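For instance, when \(n=5\), the ideal of maximal order Pfaffians is generated by the five Pfaffians of the \(4\times 4\) submatrices obtained by deleting one row and the corresponding column:
\[\mathrm{Pf}_{4}(X)=\langle x_{23}x_{45}-x_{24}x_{35}+x_{25}x_{34},\;x_{13}x_{45}-x_{14}x_{35}+x_{15}x_{34},\;x_{12}x_{45}-x_{14}x_{25}+x_{15}x_{24},\;x_{12}x_{35}-x_{13}x_{25}+x_{15}x_{23},\;x_{12}x_{34}-x_{13}x_{24}+x_{14}x_{23}\rangle,\]
which is also the ideal of Plücker relations defining the Grassmannian \(G(2,5)\).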
For a field \(K\), \(A\) is said to be a standard graded \(K\)-algebra if \(A=\oplus_{i\geq 0}A_{i}\) such that \(A_{0}=K\), \(A_{1}\) is a finite dimensional \(K\)-vector space and \(A_{i}A_{j}=A_{i+j}\) for every \(i,j\in\mathbb{N}\cup\{0\}\). Any standard graded \(K\)-algebra \(A\) can be identified as \(B/I\), where \(B\) is a standard graded polynomial ring over \(K\) (\(K\)-algebra) and \(I\) is it's homogeneous ideal. For an equigenerated ideal \(\mathfrak{I}\) of \(A\), the _Rees algebra of \(\mathfrak{I}\) in \(A\)_ is a standard bigraded \(K\)-algebra defined as \(\mathcal{R}(\mathfrak{I})=\bigoplus_{n\geq 0}\mathfrak{I}^{n}\), where standard bigraded means \(\mathcal{R}(\mathfrak{I})_{1}=\oplus_{i\geq 0}\mathcal{R}(\mathfrak{I})_{(i,0)}\) and \(\mathcal{R}(\mathfrak{I})_{2}=\oplus_{i\geq 0}\mathcal{R}(\mathfrak{I})_{(0,i)}\) are standard graded \(K\)-algebras. Similar to the graded case, \(\mathcal{R}(\mathfrak{I})\) has a presentation \(S/J\) as quotient of the standard bigraded polynomial ring \(S=K[X,Y]\) over a field \(K\), by a bihomogeneous ideal \(J\). An ideal \(\mathfrak{I}\) of a ring \(A\) is of linear type if the defining relations of \(\mathcal{R}(\mathfrak{I})\) are linear in the indeterminates \(Y\).
**Definition 1.2**.: For two integers \(c,e\geq 0\) with \((c,e)\neq(0,0)\), the \((c,e)\)-diagonal is the subset \(\Delta=\{(cs,es):s\in\mathbb{Z}\}\) of \(\mathbb{Z}^{2}\). The diagonal subalgebra of a bigraded algebra \(R\) along \(\Delta\) is defined as the graded algebra \(R_{\Delta}:=\bigoplus_{i\geq 0}R_{(ci,ei)}\) (cf. [41]).
It is analogous to the notion of Veronese subalgebras of graded algebras. For \(c\in\mathbb{N}\) and a standard graded \(K\)-algebra \(A,A^{(c)}=\oplus_{j\in\mathbb{N}}A_{cj}\) is defined as the \(c\)-th Veronese subalgebra of \(A\).
In [41], the authors have given the presentation of diagonal subalgebra of a bigraded algebra \(R\) in the following way. Consider \(R\cong S/J\), where \(S=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]\) is a bigraded polynomial ring and \(J\) a bihomogeneous ideal of \(S\). Then
\[R_{\Delta}=S_{\Delta}/J_{\Delta}, \tag{1}\]
where \(S_{\Delta}\) is the Segre product of \(K[x_{1},\ldots,x_{n}]^{(c)}\) and \(K[y_{1},\ldots,y_{m}]^{(e)}\) and \(J_{\Delta}=\oplus_{i\geq 0}J_{(ci,ei)}\).
Now let \(\Delta=(1,1)\). Then \(S_{\Delta}=K[x_{i}y_{j}\,|\,1\leq i\leq n,\,1\leq j\leq m]\) and a presentation of \(S_{\Delta}\) can be seen as \(S_{\Delta}\cong K[T]/I_{2}(T)\) where \(T=(t_{ij})\) is an \(n\times m\) matrix of indeterminates for \(1\leq i\leq n\), \(1\leq j\leq m\) and \(I_{2}(T)\) is the ideal generated by the \(2\)-minors of \(T\). This presentation is obtained by mapping \(t_{ij}\) in \(K[T]\) to \(x_{i}y_{j}\) in \(S_{\Delta}\). The following lemma gives the form for the generators of \(J_{\Delta}\) in \(K[T]\) when \(\Delta=(1,1)\).
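For instance, if \(n=m=2\), then \(S_{\Delta}=K[x_{1}y_{1},x_{1}y_{2},x_{2}y_{1},x_{2}y_{2}]\cong K[t_{11},t_{12},t_{21},t_{22}]/\langle t_{11}t_{22}-t_{12}t_{21}\rangle\), the homogeneous coordinate ring of the Segre embedding of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) in \(\mathbb{P}^{3}\).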
**Lemma 1.3**.: _([41, Lemma 2.1]) Let \(R\cong S/J\) be a standard bigraded algebra and \(J\) a bihomogeneous ideal of \(S\) generated by \(g_{1},\ldots,g_{r}\) with \(\deg g_{i}=(a_{i},b_{i})\). Then for \(c_{i}=\max\{a_{i},b_{i}\}\), the generators of \(J_{\Delta}\) have the form \(g_{i}m\) where \(m\) is a monomial of degree \((c_{i}-a_{i},c_{i}-b_{i})\), \(i=1,\ldots,r\)._
For an equigenerated ideal of a standard graded polynomial ring, if \(c\geq ed+1\), then the dimension of the corresponding diagonal subalgebra of Rees algebra is found to be independent of the diagonal.
**Lemma 1.4**.: _([15, Lemma 1.3(ii)]) Let \(I\) be an equigenerated ideal of a standard graded polynomial ring in \(n\) indeterminates with the degree of the generators being denoted by \(d\). If \(c\geq ed+1\), then \(\dim\mathcal{R}(I)_{\Delta}=n\) where \(\Delta=(c,e)\)._
Let \(A\) be a standard graded \(K\)-algebra where \(K\) is a field and \(M\) be a finitely generated \(A\)-module. Let \(t_{i}^{A}(M)=\sup\{j:\operatorname{Tor}_{i}^{A}(M,K)_{j}\neq 0\}\) with \(t_{i}^{A}(M)=-\infty\) if \(\operatorname{Tor}_{i}^{A}(M,K)_{j}=0\) for all \(j\geq 0\). Then regularity of \(M\) denoted by \(\operatorname{reg}_{A}(M)\) is defined as \(\operatorname{reg}_{A}(M)=\sup\{t_{i}^{A}(M)-i,\,i\geq 0\}\). Similarly, corresponding to a standard bigraded \(K\)-algebra, there is an analogous notion of \(x\)-regularity and \(y\)-regularity (refer [2, Section 2] for definitions).
Let \(M\) be an \(A\)-module generated by elements of same degree, say \(d\). Then \(M\) is said to have a _\(d\)-linear resolution_ over \(A\) if \(\operatorname{reg}_{A}(M)=d\). If the degree \(d\) of the generators of an \(A\)-module \(M\) is clear from the context, just the terminology 'linear resolution' is used. A standard graded \(K\)-algebra \(A\) is _Koszul_ if the minimal \(A\)-free resolution of the residue field is linear, that is \(\operatorname{reg}_{A}(K)=0\).
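For instance, the polynomial ring \(B\) itself is Koszul, and so is any quotient of \(B\) by an ideal generated by quadratic monomials, such as \(K[x,y]/\langle xy\rangle\); on the other hand, \(K[x]/\langle x^{3}\rangle\) is not Koszul, since the defining ideal of a Koszul algebra is necessarily generated by quadrics.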
**Remark 1.5**.: We recall the following results related to Koszulness of bigraded algebras and linear resolutions of ideals for later use.
1. Let \(R\) be a bigraded \(K\)-algebra with the free modules in the minimal bigraded free resolution denoted by \(F_{i}=\oplus_{(a,b)\in\mathbb{N}^{2}}S(-a,-b)^{\beta_{i,a,b}}\). Then for \(\Delta=(c,e)\), \(R_{\Delta}\) is Koszul if \(\max\{\frac{a}{c},\frac{b}{e}:\beta_{i,a,b}\neq 0\}\leq i+1\) for every \(i\) ([15, Theorem 6.2]). This shows how information about the shifts in the \(S\)-free resolution of \(\mathcal{R}(I)\) helps in obtaining lower bounds for \((c,e)\) for which \(\mathcal{R}(I)_{\Delta}\) is Koszul.
2. If \(R\) is a Koszul bigraded \(K\)-algebra then \(R_{\Delta}\) is Koszul for all \(\Delta\) ([6, Theorem 2.1]).
3. Let \(I\) be an equigenerated ideal of a standard graded ring \(A\). If \(\mathcal{R}(I)\) is Koszul then \(I^{n}\) has linear resolution for all \(n\geq 0\) ([6, Corollary 3.6]).
Similarly, there are results which help in getting bounds for \(\Delta\) from the \(S\)-free resolution of \(\mathcal{R}(I)\) such that \(\mathcal{R}(I)_{\Delta}\) is Cohen-Macaulay.
**Lemma 1.6**.: _[_15_, Lemma 3.10]_ _Let \(S=K[x_{1},\ldots x_{n},y_{1},\ldots,y_{m}]\) be a standard bigraded polynomial ring. Then for \(\Delta=(c,e)\) and \(a,b\in\mathbb{Z}\),_
1. \(\dim S(-a,-b)_{\Delta}=n+m-1\)_._
2. _If_ \(0<b<m\) _or_ \(0<a<n\)_, then_ \(S(-a,-b)_{\Delta}\) _is Cohen-Macaulay if_ \(c>\max\{-a,-n+a\}\) _and_ \(e>\max\{-b,-m+b\}\)_._
The following are some other results which will be used in this article repeatedly.
**Lemma 1.7**.: _[_40_, Lemma 2.2]_ _Let \(f_{1},\ldots,f_{r}\) be a sequence of elements in \(A\) such that with respect to some monomial order on \(A\), \((LT(f_{i}),LT(f_{j}))=1\) for all \(i,j\in\mathbb{N}\) with \(i\neq j\). Then \(f_{1},\ldots,f_{r}\) is a regular sequence in \(A\)._
**Lemma 1.8** (Depth Lemma).: _Let \(A\cong B/I\) where \(B\) is the standard graded polynomial ring and \(I\) a homogeneous ideal of \(B\), with the free modules in the minimal graded free resolution being denoted by \(F_{i}\), \(i=1,\ldots,\operatorname{proj}\dim_{B}A\). Then the following inequality holds,_
\[\operatorname{depth}\left(A\right)\geq\min\{\operatorname{depth}\left(F_{i} \right)-i;\,i\geq 0\}.\]
Proof.: It is obtained by applying depth lemma iteratively on the short exact sequence of modules.
Let \(B=K[x_{1},\ldots,x_{n}]\) and \(I=\langle u_{1},\ldots,u_{s}\rangle\) be a monomial ideal in \(B\). Let \(I_{r}\) denote the set of all sequences \(\alpha=(i_{1},\ldots,i_{r})\) in \([s]\) of length \(r\) such that \(i_{1}\leq i_{2}\leq\ldots\leq i_{r}\). For any \(\alpha\in I_{r}\), let \(u_{\alpha}=u_{i_{1}}u_{i_{2}}\ldots u_{i_{r}}\) and \(t_{\alpha}=t_{i_{1}}t_{i_{2}}\ldots t_{i_{r}}\) and for any \(\alpha,\beta\in I_{r}\), define \(t_{\alpha,\beta}=\frac{\operatorname{lcm}[u_{\alpha},u_{\beta}]}{u_{\beta}}t_ {\beta}-\frac{\operatorname{lcm}[u_{\alpha},u_{\beta}]}{u_{\alpha}}t_{\alpha}\). Then,
\[\mathcal{R}(I)\cong B[t_{1},\ldots,t_{s}]/J \tag{2}\]
where \(J=SJ_{1}+S(\cup_{r=2}^{\infty}J_{r})\) with \(J_{r}=\{t_{\alpha,\beta}:\alpha,\beta\in I_{r}\}\) (cf. [42]).
**Definition 1.9**.: Let \(A\) be a commutative ring and \(\{\mathbf{a}\}=\{a_{1},\ldots,a_{n}\}\) a sequence of elements in \(A\). Then for an \(A\)-module \(M\), \(\{\mathbf{a}\}\) forms a _\(d\)-sequence_ on \(M\) if the following holds:
1. \(a_{i}M\notin\langle a_{1},\ldots,\hat{a_{i}},\ldots,a_{n}\rangle M\) for \(i=1,\ldots,n\).
2. \((\langle a_{1},\ldots,a_{i}\rangle M:a_{i+1}a_{k}M)=(\langle a_{1},\ldots,a_{ i}\rangle M:a_{k}M)\) for all \(k\geq i+1\) and \(i=0,\ldots,n-1\).
## 2. Pfaffian ideals of generic skew-symmetric matrices
Assume \(X\) to be a generic skew-symmetric matrix of odd order \(n=2r+1\), \(r\in\mathbb{N}\) and \(I=\)Pf\({}_{n-1}(X)\). Then the ideal Pf\({}_{n-1}(X)\) generated by the maximal order Pfaffians is found to be of linear type (cf. [5]). Huneke proved that the Pfaffian sequence forms a weak \(d\)-sequence [26, 1.20] and further remarked that it seems to form a \(d\)-sequence [25]. Recently, we gave a proof to show that it indeed forms a \(d\)-sequence [32]. In fact, from the proof it is not difficult to see that it forms an unconditioned \(d\)-sequence.
The structure theorem for ideals of codimension \(3\)[11] gives the minimal free resolution of the Pfaffian ring in the generic case, corresponding to an alternating map. In the following lemma, we mention the differentials in the resolution explicitly, which helps in studying the related Rees algebra.
**Lemma 2.1**.: _Let \(B=K[X]\) and \(I=\)Pf\({}_{n-1}(X)\). Then the minimal graded free resolution of \(B/I\) has the following form._
\[0\longrightarrow B(-2r-1)\stackrel{{ d_{3}}}{{ \longrightarrow}}B^{n}(-r-1)\stackrel{{ d_{2}}}{{ \longrightarrow}}B^{n}(-r)\stackrel{{ d_{1}}}{{ \longrightarrow}}B\longrightarrow 0. \tag{3}\]
_where_
\[d_{1}=\begin{bmatrix}Pf_{1}(X)&Pf_{2}(X)&\cdots&Pf_{n}(X)\end{bmatrix},\]
\[d_{2}=(a_{ij})=\left\{\begin{array}{ccc}(-1)^{i+j}x_{n+1-i\ n+1-j}&if&i>j\\ 0&if&i=j\\ (-1)^{i+j+1}x_{n+1-j\ n+1-i}&if&i<j.\end{array}\right.\]
_and_
\[d_{3}=\begin{bmatrix}Pf_{1}(X)\\ Pf_{2}(X)\\ \vdots\\ Pf_{n}(X)\end{bmatrix}\]
Proof.: In (3) we have \(d_{1}\circ d_{2}=0\) and \(d_{2}\circ d_{3}=0\). Let \(d_{2}(B^{n})=\langle g_{1},\ldots,g_{n}\rangle\) where the \(g_{i}\) form a minimal generating set of the image. Since the cardinality of the minimal generating set of \(d_{2}(B^{n})\) is \(n\) and considering the appropriate shifts, the complex (3) can be seen as a graded free resolution of \(B/I\). Finally since \(d_{2}(B^{n})\subseteq\mathfrak{m}B^{n}\), we get the required minimal graded free resolution of \(B/I\).
**Remark 2.2**.: As a consequence of the above result, for a generic skew-symmetric matrix \(X\) and \(I=\)Pf\({}_{n-1}(X)\subseteq B=K[X]\), we observe the following.
1. For a standard graded \(K\)-algebra \(A\), it is known that \(A^{(c)}\) is Koszul for \(c\gg 0\) (cf. [4]). Using Lemma 2.1 and Remark 1.5(i) we have \((B/I)^{(c)}\) is Koszul for \(c\geq n/4\).
2. Since \(I\) is of linear type, the explicit defining relations of \(\mathcal{R}(I)\) in \(S=K[X,Y]\) will have the form \[\begin{bmatrix}y_{1}&y_{2}&\ldots&y_{n}\end{bmatrix}.\ d_{2}\text{ where }d_{2}=(a_{ij})=\left\{\begin{array}{ccc}(-1)^{i+j}x_{n+1-i\ n+1-j}&if&i>j\\ 0&if&i=j\\ (-1)^{i+j+1}x_{n+1-j\ n+1-i}&if&i<j.\end{array}\right.\]
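For instance, reading off the first column of the matrix \(d_{2}\) above for \(n=5\), the corresponding defining relation of \(\mathcal{R}(I)\) is
\[g=-x_{45}y_{2}+x_{35}y_{3}-x_{25}y_{4}+x_{15}y_{5},\]
and the remaining columns of \(d_{2}\) give the other four quadrics in the same way.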
Some interesting colon conditions are satisfied by the defining relations of the Rees algebra, which are discussed in the following lemma.
**Lemma 2.3**.: _Consider the setup as in Remark 2.2\((ii)\). Let the defining relations of \(\mathcal{R}(I)=S/J\) obtained above be denoted by \(g_{1},\ldots,g_{n}\) where \(S=K[X,Y]\) and \(J=\langle g_{1},\ldots,g_{n}\rangle\), a homogeneous ideal of \(S\). Then the following holds._
1. \(g_{1},\ldots,g_{n-1}\) _forms an_ \(S\)_-regular sequence._
2. \((\langle g_{1},\ldots,g_{n-1}\rangle:g_{n})\) _is a prime ideal of \(S\) and \(g_{n}\notin(\langle g_{1},\ldots,g_{n-1}\rangle:g_{n})\)._

**Corollary 2.4**.: _With the notation above, \((\langle g_{1},\ldots,g_{n-1}\rangle:g_{n})=(\langle g_{1},\ldots,g_{n-1}\rangle:g_{n}^{2})\); in particular, \(g_{1},\ldots,g_{n}\) forms a \(d\)-sequence in \(S\)._

Proof.: Let
\(J^{\prime}=\langle g_{1},\ldots,g_{n-1}\rangle\). Clearly \((J^{\prime}:g_{n})\subseteq(J^{\prime}:{g_{n}}^{2})\). To see the other inclusion let \(\alpha\in(J^{\prime}:{g_{n}}^{2})\). Then \(\alpha\cdot g_{n}\in(J^{\prime}:g_{n})\). Since \((J^{\prime}:g_{n})\) is a prime ideal (from the proof of Lemma 2.3\((2)\)) and \(g_{n}\notin(J^{\prime}:g_{n})\), this implies \(\alpha\in(J^{\prime}:g_{n})\). Hence the equality follows.
Since the defining ideal is minimally generated by \(n\) elements, one more than the height of the ideal, and the sequence forms a \(d\)-sequence, by [27, Lemma 4.2] it generates an almost complete intersection.
It is evident from Remark 2.2\((ii)\) that the defining relations of Rees algebra of maximal order Pfaffians of generic skew-symmetric matrix are generated by quadrics. So, it makes sense to ask whether something stronger holds. Therefore, we pose the following question.
**Question 2.5**.: Is it true that the Rees algebra of maximal order Pfaffians in generic case is always Koszul?
The study that follows indicates that this is not true in general.
Let \(X_{1}\) be generic skew-symmetric matrix of order \(3\) and consider \(B_{X_{1}}=K[X_{1}]\) with \(I_{1}=\operatorname{Pf}_{2}(X_{1})=\langle x_{12},x_{13},x_{23}\rangle\), the graded maximal ideal of \(B_{X_{1}}\). Then the defining ideal of \(\mathcal{R}(I_{1})\) is given by \(2\)-minors of the matrix \(\begin{pmatrix}x_{12}&x_{13}&x_{23}\\ y_{1}&y_{2}&y_{3}\end{pmatrix}\) and the minimal bigraded \(S\)-free resolution of \(\mathcal{R}(I_{1})\) has the form,
\[0\longrightarrow S(-1,-2)\oplus S(-2,-1)\longrightarrow S(-1,-1)^{3}\longrightarrow S\longrightarrow 0.\]
Since \(\operatorname{reg}_{S}(\mathcal{R}(I_{1}))=1\) and \(S\) seen as a graded ring with respect to total degree is Koszul, by transfer of Koszulness ([14, Theorem 2]), \(\mathcal{R}(I_{1})\) is Koszul. In fact the defining ideal of \(\mathcal{R}(I_{1})\) is observed to be generated by a Grobner basis of quadrics with respect to graded reverse lexicographic order induced by \(x_{12}>x_{13}>x_{23}>y_{1}>y_{2}>y_{3}\).
Now, let \(X_{2}\) be a generic skew-symmetric matrix of order \(5\). Consider \(B_{X_{2}}=K[X_{2}]\) with \(I_{2}=\operatorname{Pf}_{4}(X_{2})=\langle x_{14}x_{23}-x_{13}x_{24}+x_{12}x_{ 34},\,x_{15}x_{23}-x_{13}x_{25}+x_{12}x_{35},\,x_{15}x_{24}-x_{14}x_{25}+x_{12} x_{45},\,x_{15}x_{34}-x_{14}x_{35}+x_{13}x_{45},\,x_{25}x_{34}-x_{24}x_{35}+x_{23} x_{45}\rangle\). From the following Betti table of the Pfaffian ring \(B_{X_{2}}/I_{2}\), we see that \(I_{2}\) does not have a linear resolution.
\[\beta_{(i,i+j)}=\begin{array}{c|cccc}&0&1&2&3\\ \hline 0&1&0&0&0\\ 1&0&5&5&0\\ 2&0&0&0&1\end{array}\]
As a consequence, \(\mathcal{R}(I_{2})\) as a graded ring with respect to total degree is not Koszul (Remark 1.5 (iii)).
Let \(I_{1}\) and \(I_{2}\) be as defined before.
**Proposition 2.6**.: _Let \(X\) be a generic skew-symmetric matrix of odd order \(n\) and \(B=K[X]\) with \(I=\operatorname{Pf}_{n-1}(X)\) for \(n=3,5\). Then,_
1. \(\mathcal{R}(I_{1})\) _is Koszul, whereas_ \(\mathcal{R}(I_{2})\) _is not Koszul._
2. \(I^{j}\) _has a linear resolution for all_ \(j\geq 2\)_._
3. \(reg^{x}_{S}(\mathcal{R}(I^{j}))=0\) _for_ \(j\geq 2\)_._
Proof.: a) Koszulness of \(\mathcal{R}(I_{1})\) and non-Koszulness of \(\mathcal{R}(I_{2})\) both follow from the previous discussion.

b) For \(n=3\), since \(\mathcal{R}(I_{1})\) is Koszul, we have in fact \(\operatorname{reg}_{B}(I_{1}^{j})=2j\) for \(j\geq 1\). For \(n=5\), the observation follows from the result by Cutkosky, Herzog and Trung which says that if an ideal \(I\) of a ring \(A\) is generated by a \(d\)-sequence of \(n\) forms and is equigenerated of degree \(r\), then for all \(j\geq n+1\), \(\operatorname{reg}_{A}(I^{j})=(j-n-1)r+\operatorname{reg}_{A}(I^{n+1})\) [16, Corollary 3.8], together with the fact that in our case \(\operatorname{reg}_{A}(I^{n+1})=(n+1)r\).

c) Follows from b) and [10, Theorem 5.2].
**Remark 2.7**.: The following is a realization of diagonal subalgebra of Rees algebra \(\mathcal{R}(I_{1})\), as a quotient of standard graded \(K\)-algebra for \(\Delta=(1,1)\).
For \(n=3\), let \(T=\begin{pmatrix}t_{11}&t_{12}&t_{13}\\ t_{21}&t_{22}&t_{23}\\ t_{31}&t_{32}&t_{33}\end{pmatrix}\) where for \(1\leq j\leq 3\), \(t_{1j}=x_{12}y_{j}\), \(t_{2j}=x_{13}y_{j}\) and \(t_{3j}=x_{23}y_{j}\). Then \(\mathcal{R}(I_{1})_{\Delta}\) can be identified with \(K[T]/I_{2}(T)+\langle t_{12}-t_{23},\,t_{21}-t_{32},\,t_{11}-t_{33}\rangle\) where \(I_{2}(T)\) denotes the ideal generated by \(2\)-minors of \(T\) (from equation (1) and Lemma 1.3). From the above defining relations we get \(\mathcal{R}(I_{1})_{\Delta}\cong K[T]/\langle t_{11}t_{22}-t_{12}t_{21},\,t_{11}t_{21}-t_{12}t_{31},\,t_{21}^{2}-t_{22}t_{31},\,t_{11}t_{12}-t_{13}t_{21},\,t_{11}^{2}-t_{13}t_{31},\,t_{12}^{2}-t_{13}t_{22}\rangle\).
Similarly we can write an expression for \(\mathcal{R}(I_{2})\) for \(\Delta=(1,1)\).
It is evident from Proposition 2.6 that \(\mathcal{R}(I)\) is not always Koszul for a maximal order Pfaffian ideal \(I\) of a generic skew-symmetric matrix \(X\). However, the following result shows that, regardless of this, all of its diagonals are always Koszul.
**Theorem 2.8**.: _Let \(B=K[X]\) and \(I=\)Pf\({}_{n-1}(X)\). Then \(\mathcal{R}(I)_{\Delta}\) is Koszul for all \(\Delta=(c,e)\), \(c>0\), \(e>0\)._
Proof.: We have the defining relations of the Rees algebra in the given case to be of bidegree \((1,1)\). Since the defining relations satisfy certain nice properties from Lemma 2.3, we can consider the following complex.
\[\mathbb{F}:\cdots\xrightarrow{y_{n}}T(-2d,-3)\xrightarrow{g_{n}}T(-d,-2) \xrightarrow{y_{n}}T(-d,-1)\xrightarrow{g_{n}}T\longrightarrow 0. \tag{4}\]
Then using an idea similar to the one in [31, Theorem 3.1] and [12, Theorem 3.2], the required result is obtained.
It is known that the Rees algebra of the maximal order Pfaffians of a generic skew-symmetric matrix is Cohen-Macaulay. Following are some observations that can be made regarding the Cohen-Macaulayness of diagonals of the same.
**Remark 2.9**.: Let \(X\) be a generic skew-symmetric matrix of odd order \(n\) and \(B=K[X]\) with \(I=\)Pf\({}_{n-1}(X)\).
1. Then \(\mathcal{R}(I)_{\Delta}\) is Cohen-Macaulay for \(c\gg 0\) and \(e>0\). This can be seen as a consequence of \(I\) being generated by a \(d\)-sequence, which implies the vanishing of the \(y\)-regularity of \(\mathcal{R}(I)\) [39, Corollary 3.2] and [15, Corollary 3.14].
2. In particular, for the previous cases of \(n=3,5\), \(\mathcal{R}(I)_{\Delta}\) is Cohen-Macaulay if \(c\geq 2e+1\), \(c,e>0\) which is a consequence of the repeated application of depth lemma, Lemma 1.6 and Lemma 1.4.
## 3. Pfaffian ideals of sparse skew-symmetric matrices
For a generic skew-symmetric matrix \(X\) of odd order \(n\), computations suggest that as \(n\) increases, the number of generators of \(I=\)Pf\({}_{n-1}(X)\) increases rapidly. Moreover, the expressions for the generators of \(I\) become increasingly complex, making the study of the minimal graded free resolutions of the corresponding Rees algebra hard. Thus, in order for the generators of a Pfaffian ideal and the defining ideal of its Rees algebra to satisfy some useful properties, it makes sense to focus on sparse forms of skew-symmetric matrices.
Note that we primarily focus on maximal order Pfaffians since an ideal \(I\) generated by non-maximal order Pfaffians most often leads to defining relations of the Rees algebra of total degree greater than \(2\). For example,
\[X=\begin{bmatrix}0&x_{12}&0&x_{14}&0&0&0\\ -x_{12}&0&x_{23}&0&x_{25}&0&0\\ 0&-x_{23}&0&x_{34}&0&x_{36}&0\\ -x_{14}&0&-x_{34}&0&x_{45}&0&x_{47}\\ 0&-x_{25}&0&-x_{45}&0&x_{56}&0\\ 0&0&-x_{36}&0&-x_{56}&0&x_{67}\\ 0&0&0&-x_{47}&0&-x_{67}&0\end{bmatrix}_{7\times 7}\]
In the above case, for \(I=\)Pf\({}_{4}(X)\) the defining ideal of \(\mathcal{R}(I)\) has 52 generators of bidegree \((1,1)\), 14 generators of bidegree \((0,2)\) and 3 generators of bidegree \((0,3)\). Hence it is clearly not generated by quadrics (with respect to total degree).
In an attempt to study sparse skew-symmetric matrices, we first focus on Pfaffians of tridiagonal matrices of the form given in Theorem 3.2. Following are some of the terms which will be used for the same.
A homogeneous ideal \(I\) of a standard graded ring \(A\) is of Grobner linear type if the ideal is of linear type with the linear relations of the defining ideal of \(\mathcal{R}(I)\cong K[X,Y]/J\) forming a Grobner basis with respect to some monomial order on \(K[X,Y]\). A sequence of monomials \(m_{1},\ldots,m_{s}\) in a set of indeterminates \(X\) is said to be an _\(M\)-sequence_ if for all \(1\leq i\leq s\), there exists a total order on the set of indeterminates, say \(x_{1}<\ldots<x_{n}\) with \(m_{i}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) and \(a_{1}>0,\ldots,a_{n}>0\), such that whenever \(x_{k}\mid m_{j}\) with \(1\leq k\leq n\) and \(i<j\), then \(x_{k}^{a_{k}}\cdots x_{n}^{a_{n}}\mid m_{j}\) (cf. [13]). It is proved that ideals generated by an \(M\)-sequence are of Grobner linear type [13, Theorem 2.4 (i)]. For a monomial \(m\) and an indeterminate \(x\), let \(O_{x}(m)\) denote the exponent of \(x\) in \(m\). Then a sequence of monomials \(m_{1},\ldots,m_{s}\) in the set of indeterminates \(X\) is said to be of _interval type_ if for all \(1\leq i<j\leq s\) and \(x\mid\gcd(m_{i},m_{j})\), one has \(O_{x}(m_{i})\leq O_{x}(m_{k})\) for all \(i\leq k\leq j\) (cf. [13]). It is known that a sequence of interval type is an \(M\)-sequence [13, Proposition 3.2].
**Lemma 3.1**.: _Let \(X=\begin{bmatrix}0&x_{1}&0&0&\ldots&0&0\\ -x_{1}&0&x_{2}&0&\ldots&0&0\\ 0&-x_{2}&0&x_{3}&\ldots&0&0\\ 0&0&-x_{3}&0&\ldots&0&0\\ \vdots&\vdots&\vdots&\vdots&&\vdots&\vdots\\ 0&0&0&\ldots&0&x_{n-1}\\ 0&0&0&0&\ldots&-x_{n-1}&0\end{bmatrix}_{n\times n}\), where \(n\in\mathbb{N}\)._
_Then,_
1. _det_ \(X=0\)_, if_ \(n\) _is odd._
2. _det_ \(X=\prod_{i=2k+1}x_{i}^{2}\)_,_ \(k=0,\ldots,\frac{n-2}{2}\)_, if_ \(n\) _is even._
Proof.: Let \(X\) be a matrix of the above form.
1. Follows from the property of the matrix being skew-symmetric.
2. The proof follows by induction on n. Clearly for \(n=2,4\) the determinant is given by \(x_{1}^{2}\) and \(x_{1}^{2}x_{3}^{2}\) respectively. Thus the statement holds in these cases. Let \(n\) be an even integer such that \(n>4\), then the Laplace expansion of the determinant along the last column and then the last row gives det \(X=(-x_{n-1})(-x_{n-1})\text{det }X^{\prime}\) where \(X^{\prime}\) is a matrix of order \(n-2\) whose determinant is given by the induction hypothesis. Thus the required result is obtained.
**Theorem 3.2**.: _Let \(X_{3}=\begin{bmatrix}0&x_{12}&0&0&\ldots&0&0\\ -x_{12}&0&x_{23}&0&\ldots&0&0\\ 0&-x_{23}&0&x_{34}&\ldots&0&0\\ 0&0&-x_{34}&0&\ldots&0&0\\ \vdots&\vdots&\vdots&\vdots&&\vdots&\vdots\\ 0&0&0&0&\ldots&0&x_{n-1\,n}\\ 0&0&0&0&\ldots&-x_{n-1\,n}&0\end{bmatrix}_{n\times n}\) where \(n=2r+1\), \(r\in\mathbb{N}\cup\{0\}\). Then \(I=Pf_{n-1}(X_{3})\) is a monomial ideal of Grobner linear type and \(\mathcal{R}(I)\) is Koszul and Cohen-Macaulay._
**Note 3.3**.: In case of a generic skew-symmetric matrix, it is known from [5, Theorem 2.2] that the Pfaffian ideal \(I=\)Pf\({}_{n-1}(X)\) is of linear type. But results in that direction cannot be directly applied in sparse cases like the one above. Hence we will separately show that the Pfaffian ideal in this case is of linear type.
Proof.: Using Lemma 3.1, we get the generators of the Pfaffian ideal \(I=\)Pf\({}_{n-1}(X_{3})\) to have the following form,
\[p_{1}=\prod_{k=1}^{r}x_{2k\,2k+1},\qquad p_{i}=x_{12}\prod_{j<i}x_{j\,j+1}\prod_{i<k\leq n-1}x_{k\,k+1}, \tag{5}\]
where \(i=2p+1\), \(1\leq p\leq r\); \(j=1+2l\), \(l\geq 1\); \(k=i+1+2m\), \(m\geq 0\). That is, we have \(\{p_{i}:i=2p+1\), \(0\leq p\leq r\}\) to be the generating set of the Pfaffian ideal \(I=\)Pf\({}_{n-1}(X_{3})\).
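For example, taking \(r=3\) (so \(n=7\)) in the formula above, the generating set of \(\operatorname{Pf}_{6}(X_{3})\) consists of the four squarefree monomials
\[p_{1}=x_{23}x_{45}x_{67},\quad p_{3}=x_{12}x_{45}x_{67},\quad p_{5}=x_{12}x_{34}x_{67},\quad p_{7}=x_{12}x_{34}x_{56},\]
obtained by deleting the row and column of index \(i=1,3,5,7\) respectively.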
Now for \(1\leq k<m\leq r+1\), assume that \(x_{j\,j+1}|\text{gcd}(p_{k},p_{m})\) for \(1\leq j\leq n-1\). Then there are two possibilities.
1. \(j<k\). This implies that \(j\) is odd and \(j<l\) for all \(k\leq l\leq m\). Thus \(x_{j\,j+1}|p_{l}\).
2. \(j>k\). Then we have \(j\) to be even. Hence \(j>m\) and so \(j>l\) for all \(k\leq l\leq m\). Thus \(x_{j\,j+1}|p_{l}\).
Since the generators are squarefree monomials, this implies that \(\{p_{i}:i=2p+1,\,1\leq p\leq r\}\) forms a sequence of interval type. In particular, it forms an \(M\)-sequence with respect to some total order and thus is of Grobner linear type.
Let \(\mathcal{R}(I)\cong S/J\) where \(S=K[x_{12},\ldots,x_{n-1\,n},y_{1},\ldots,y_{r+1}]\) and \(J\) is the defining ideal of \(\mathcal{R}(I)\). From the relation (2) in Section 1, we get \(J=\langle x_{i\,i+1}y_{j}-x_{i+1\,i+2}y_{j+1};i=2k+1\), \(0\leq k\leq r-1\), \(j=\frac{i+1}{2}\rangle\). Then from Lemma 1.7, with respect to the graded reverse lexicographic term order induced by \(x_{12}>x_{23}>\ldots>x_{n-1\,n}>y_{1}>y_{2}>\ldots>y_{r+1}\), it can be concluded that \(J\) is generated by a regular sequence and so is a complete intersection of quadrics. This implies \(\mathcal{R}(I)\) is Cohen-Macaulay. Koszulness of \(\mathcal{R}(I)\) follows from observing that the defining ideal of the Rees algebra is generated by a Grobner basis of quadrics.
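Continuing the \(n=7\) example above, the defining ideal specializes to
\[J=\langle x_{12}y_{1}-x_{23}y_{2},\;x_{34}y_{2}-x_{45}y_{3},\;x_{56}y_{3}-x_{67}y_{4}\rangle,\]
whose generators have the pairwise coprime leading terms \(x_{12}y_{1}\), \(x_{34}y_{2}\) and \(x_{56}y_{3}\) with respect to the term order above.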
**Note 3.4**.: For \(n,m\in\mathbb{N}\), let \((n\,m)\) be the pair of integers which denotes indices of the indeterminates as entries in the matrix \(X_{3}\).
**Corollary 3.5**.: _The ideal of Pfaffians Pf\({}_{n-1}(X_{3})\) can be seen as the vertex cover ideal of an unmixed bipartite graph._
Proof.: Let \(G\) be the following bipartite graph.
Then it suffices to show that the set of all minimal vertex covers of \(G\) are given by \(\{(2k\;2k+1)\}\), \(1\leq k\leq r\) and \(\{(1\,2),(j\;j+1),(k\;k+1)\}\), where \(i=2p+1\), \(1\leq p\leq r\); \(j=1+2l\), \(l\geq 1\); \(k=i+1+2m\), \(m\geq 0\). We prove this by induction on \(r\) where the order of the matrix is \(n=2r+1\). For \(r=1\) and \(r=2\), the minimal vertex covers are given by \(\{(1\,2)\},\{(2\,3)\}\) and \(\{(1\,2),(34)\}\), \(\{(12),(45)\}\), \(\{(23),(45)\}\) respectively. Then the result is true for the base cases. Now let \(r>2\). The minimal vertex covers of \(G\) on \(2(r-1)\) vertices by induction hypothesis will have the form \(\{(2k\;2k+1)\}\), \(1\leq k\leq r-1\) and \(i=2p+1\), \(1\leq p\leq r-1\); \(j=1+2l\), \(l\geq 1\); \(k=i+1+2m\), \(m\geq 0\). Observe that only one vertex cover say \(u=\{(l\;l+1)\}\), \(l=2m+1\), \(0\leq m\leq r-2\) contains \((2r-3\;2r-2)\) and the rest of them contains \((2r-2\;2r-1)\). Then \(u\) can be extended by adjoining either \((2r-1\;2r)\) or \((2r\;2r+1)\) whereas the other vertex covers containing \((2r-2\;2r-1)\) can be extended only by adding \((2r\;2r+1)\). This is because if instead \((2r-1\;2r)\) is added, the edge between \((2r-3\;2r-2)\) and \((2r\;2r+1)\) would have empty intersection with the set. Thus the minimal vertex covers of \(G\) on \(n=2r+1\) vertices is given by \(\{(2k\;2k+1)\}\), \(1\leq k\leq r\) and \(\{(1\,2),(j\;j+1),(k\;k+1)\}\), where \(j=1+2l\), \(l\geq 1\); \(i=2p+1\), \(1\leq p\leq r\); \(k=i+1+2m\), \(m\geq 0\).
Thus the ideal of Pfaffians \(\mathrm{Pf}_{n-1}(X_{3})\) in Theorem 3.2 can be seen as the vertex cover ideal of the above unmixed bipartite graph \(G\).
As a consequence of the above results, the following can be said.
**Corollary 3.6**.: _For \(j\in\mathbb{N}\), \(I^{j}\) has a linear resolution, where \(I\) is the vertex cover ideal of the graph \(G\)._
Proof.: Follows from Remark 1.5 (iii) and Theorem 3.2.
In [30], the authors gave some bounds on \((c,e)\) for which the diagonal subalgebras of hypersurfaces become Cohen-Macaulay. In the following proposition we try to extend it to the diagonals of algebras defined by complete intersections.
**Proposition 3.7**.: _Let \(R=S/J\) be a standard bigraded \(K\)-algebra where \(S=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]\) and \(J\) a bihomogeneous ideal of \(S\) generated by regular sequence \(\{f_{1},\ldots,f_{l}\}\) of bidegree \((a_{i},b_{i})\), \(i=1,\ldots,l\). Let \(d_{1}=\max\{a_{i},\,1\leq i\leq l\}\) and \(d_{2}=\max\{b_{i},\,1\leq i\leq l\}\). Then for \(c\geq-n+ld_{1}\) and \(e\geq-m+ld_{2}\), \(R_{\Delta}\) is Cohen-Macaulay for the following cases._
1. _For_ \(\Delta=(c,e)\)_, if_ \(R\) _satisfies the property that all its associated primes do not contain_ \(S_{(c,0)}\) _and_ \(S_{(0,e)}\)_. In particular, this is true if_ \(R\) _is a domain._
2. _If_ \(R=R_{1}\otimes_{K}R_{2}\) _with dim_\(R=\) _dim_\(R_{1}+\) _dim_\(R_{2}\) _where dim denotes Krull dimension._
Proof.: We have that \(R\) is defined by a complete intersection. Hence the Koszul complex on \(\{f_{1},\ldots,f_{l}\}\) resolves \(R\). Applying \(\Delta\) to the resolution and then using Lemma 1.8 and Lemma 1.6 gives depth\((R_{\Delta})\geq m+n-l-1\).
1. Assume for \(\Delta=(c,e)\), \(R\) satisfies the property that all its associated primes do not contain \(S_{(c,0)}\) and \(S_{(0,e)}\). Then the relative dimension of \(R\) coincides with its Krull dimension [41, Section 2.2]. Hence, by an idea similar to [41, Proposition 2.3], it can be concluded that \(\dim(R_{\Delta})=m+n-l-1\), which together with the depth bound above shows that \(R_{\Delta}\) is Cohen-Macaulay.
2. Let \(R=R_{1}\otimes_{K}R_{2}\) with \(\dim R=\dim R_{1}+\dim R_{2}\). Then since from [15, Lemma 2.7], \(\dim(R_{\Delta})\leq m+n-l-1\), combining this with the lower bound on depth we obtain \(\operatorname{depth}(R_{\Delta})=\dim(R_{\Delta})=m+n-l-1\), so \(R_{\Delta}\) is Cohen-Macaulay.
As an application of the previous results, we have the following observations.
**Theorem 3.8**.: _Let \(I=Pf_{n-1}(X_{3})\). Let \(c,e\) be positive integers. Then,_
1. \(\mathcal{R}(I)_{\Delta}\) _is Koszul for all_ \(\Delta=(c,e)\)_._
2. \(\mathcal{R}(I)_{\Delta}\) _is Cohen-Macaulay for all_ \(\Delta=(c,e)\)_._
Proof.: Since \(\mathcal{R}(I)\) is Koszul, Koszulness of \(\mathcal{R}(I)_{\Delta}\) can be seen as a consequence of Remark 1.5 (ii).
We have that \(\mathcal{R}(I)\) is a complete intersection domain. Hence the Cohen-Macaulayness of all diagonals follows from Proposition 3.7.
**Remark 3.9**.:
1. Let \(I=\)Pf\({}_{n-1}(X_{3})\), \(\Delta=(1,1)\) and \(T=(t_{ij})\), \(1\leq i\leq n-1\), \(1\leq j\leq r+1\). Then \[\mathcal{R}(I)_{\Delta}\cong K[T]/I_{2}(T)+\langle t_{ij}-t_{i+1j+1}:i=2k+1,0 \leq k\leq r-1,\,j=\frac{i+1}{2}\rangle.\]
2. In general, Pfaffian ideals need not be generated by monomials or binomials. Hence it is not always possible to associate a graph to a Pfaffian ideal. For instance for generic skew-symmetric matrices of order greater than \(3\), the generators are seen to be neither monomials nor binomials.
3. But if the Pfaffian ideal is a squarefree monomial ideal then it corresponds to a simplicial complex \(\Delta^{\prime}\), where the Pfaffian ideal can be identified as the Stanley Reisner ideal of \(\Delta^{\prime}\) ([36]). In fact Theorem 3.2 is a special case where the Pfaffian ideal can be viewed as the vertex cover ideal of an unmixed bipartite graph.
4. Corresponding to a Pfaffian ideal, even if a graph is associated to it, the correspondence need not be unique. For example, \[\text{Let}\ X=\begin{bmatrix}0&x_{12}&0&0&0\\ -x_{12}&0&x_{23}&x_{24}&0\\ 0&-x_{23}&0&x_{34}&0\\ 0&-x_{24}&-x_{34}&0&x_{45}\\ 0&0&0&-x_{45}&0\end{bmatrix}_{5\times 5}\] This matrix also corresponds to the same Pfaffian ideal Pf\({}_{4}(X_{3})\), the one in Theorem 3.2. Hence the correspondence between the skew-symmetric matrices and the graph is not unique.
Another form of sparse skew-symmetric matrix that we have considered is the following.
**Proposition 3.10**.: _Let \(n=2r+1,r\in\mathbb{N}\cup\{0\}\) and let \(X_{4}=\begin{bmatrix}O&A\\ -A^{T}&B\end{bmatrix}\), \(O\) is a zero block matrix,_
\[A=\begin{bmatrix}x_{1\,r+2}&x_{1\,r+3}&\ldots&x_{1\,n-1}&x_{1\,n}\\ x_{2\,r+2}&x_{2\,r+3}&\ldots&x_{2\,n-1}&x_{2\,n}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ x_{r+1\,r+2}&x_{r+1\,r+3}&\ldots&x_{r+1\,n-1}&x_{r+1\,n}\end{bmatrix},\ \text{ and }\ B=\begin{bmatrix}0&x_{r+2\,r+3}&\ldots&x_{r+2\,n-1}&x_{r+2\,n}\\ -x_{r+2\,r+3}&0&\ldots&x_{r+3\,n-1}&x_{r+3\,n}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ -x_{r+2\,n}&-x_{r+3\,n}&\ldots&-x_{n-1\,n}&0\end{bmatrix},\]
_where \(A\) is an \((r+1)\times r\) matrix and \(B\) is a skew-symmetric matrix of order \(r\times r\). Then,_
1. \(I=Pf_{n-1}(X_{4})\) _is generated by an unconditioned_ \(d\)_-sequence._
2. \(\mathcal{R}(I)\) _is Koszul and Cohen-Macaulay._
3. \(I^{j}\) _has a linear resolution for all_ \(j\geq 1\)_._
Proof.: Let \(B=K[X_{4}]\). Then the generators of the Pfaffian ideal \(I=\)Pf\({}_{n-1}(X_{4})\) are given by the \(r\)-minors of the first \(r+1\) rows and the last \((n-r-1)=r\) columns of \(X_{4}\), which are \(\binom{r+1}{r}=r+1\) in number.
1. From [25, Proposition 1.1], the Pfaffians form an unconditioned \(d\)-sequence.
2. From example 1.2 in [25], it follows that \(\mathcal{R}(I)\cong S/J\) where \(S=K[X_{4},Y]\), \(Y=\big{[}y_{1},\ldots,y_{r+1}\big{]}\), and the defining ideal \(J\) has the following form, \[J=\langle g_{i},i=1,\ldots,r\rangle\text{ where }g_{i}=\sum_{k=1}^{r+1}(-1)^{k+1}x_{k\,n-(i-1)}y_{k},\,i=1,\ldots,r.\] Consider the following order on the indeterminates of \(S\): \(x_{1\,n}>x_{2\,n-1}>\ldots>x_{r\,n-(r-1)}\) followed by the other indeterminates, with the term order being graded lexicographic. Then from Lemma 1.7, \(J\) is a complete intersection of quadrics. Thus, being a complete intersection ring defined by quadrics, \(\mathcal{R}(I)\) is Koszul and Cohen-Macaulay.
3. Since \(\mathcal{R}(I)\) is Koszul, \(\operatorname{reg}_{B}(I^{j})=rj\) for \(j\geq 1\) follows from Remark 1.5 (iii).
**Theorem 3.11**.: _Let \(I=Pf_{n-1}(X_{4})\). Then \(\mathcal{R}(I)_{\Delta}\) is Koszul and Cohen-Macaulay for all \(\Delta=(c,e)\), \(c,e>0\)._
Proof.: Koszulness of \(\mathcal{R}(I)_{\Delta}\) for all \(\Delta\) follows from Proposition 3.10 and Remark 1.5 (ii). Since \(\mathcal{R}(I)\) is a complete intersection domain, \(\mathcal{R}(I)_{\Delta}\) is Cohen-Macaulay for all \(\Delta\) from Proposition 3.7.
**Acknowledgement.** The first author is partially supported by the MATRICS grant, SERB India. The second author is financially supported by the INSPIRE fellowship, DST, India.
|
2306.02009 | Weight Bank Addition Photonic Accelerator for Artificial Intelligence | Neural networks powered by artificial intelligence play a pivotal role in
current estimation and classification applications due to the escalating
computational demands of evolving deep learning systems. The hindrances posed
by existing computational limitations threaten to impede the further
progression of these neural networks. In response to these issues, we propose
neuromorphic networks founded on photonics that offer superior processing speed
than electronic counterparts, thereby enhancing support for real time, three
dimensional, and virtual reality applications. The weight bank, an integral
component of these networks has a direct bearing on their overall performance.
Our study demonstrates the implementation of a weight bank utilizing parallelly
cascaded micro ring resonators. We present our observations on neuromorphic
networks based on silicon on insulators, where cascaded MRRs play a crucial
role in mitigating interchannel and intrachannel cross talk, a persistent issue
in wavelength division multiplexing systems. Additionally, we design a standard
silicon photonic accelerator to perform weight addition. Optimized to offer
increased speed and reduced energy consumption, this photonic accelerator
ensures comparable processing power to electronic devices. | Wenwen Zhang, Hao Zhang | 2023-06-03T05:44:26Z | http://arxiv.org/abs/2306.02009v1 | # Weight Bank Addition Photonic Accelerator for Artificial Intelligence
###### Abstract
Neural networks powered by artificial intelligence (AI) play a pivotal role in current estimation and classification applications due to the escalating computational demands of evolving deep learning systems. The hindrances posed by existing computational limitations threaten to impede the further progression of these neural networks. In response to these issues, we propose neuromorphic networks founded on photonics that offer superior processing speed than electronic counterparts, thereby enhancing support for real-time, three-dimensional (3D), and virtual reality (VR) applications. The 'weight bank'--an integral component of these networks--has a direct bearing on their overall performance. Our study demonstrates the implementation of a weight bank utilizing parallelly cascaded micro-ring resonators (MRRs). We present our observations on neuromorphic networks based on silicon on insulators (SOI), where cascaded MRRs play a crucial role in mitigating inter-channel and intra-channel cross-talk, a persistent issue in wavelength division multiplexing (WDM) systems. Additionally, we design a standard silicon photonic accelerator to perform weight addition. Optimized to offer increased speed and reduced energy consumption, this photonic accelerator ensures comparable processing power to electronic devices.
## 1 Introduction
Communication demands have proliferated in recent years with growing data volumes and expanding data capacity. Popular applications need higher processing speed and wider bandwidth to make connections faster and more stable. With the emergence of the Internet of Things (IoT) and the intelligent embedded society, tremendous data-analytic tasks require high computational capability with better energy efficiency. Traditional electronic technology is reaching its limit in providing larger processing bandwidth and lower energy consumption, which is where integrated photonic devices play an important role [1]. In the post-Moore era, photonic accelerators become feasible by exploiting novel nanostructures to reach the goals of miniaturization and power efficiency [2, 3]. Silicon photonic devices perform better than other photonic devices in terms of integration, compactness, response to tuning through the thermo-optic effect [4, 5], and reconfigurable linear systems [4, 6, 7]. Recently, a rapidly growing interest in neuromorphic photonics has arisen owing to its potential for machine intelligence, combining the ultra-fast speed of photonics with the high energy efficiency of neuromorphic architectures [8, 9, 10, 11]. A recurrent network is significantly useful in many applications at a smaller scale than fully connected or deep networks, since some neurons can be reused [12]. [13] raises the problems of managing ubiquitous interference between channels and widespread heterogeneous wireless broadcasting, and proposes a method for de-mixing mixed signals and decoupling photonic blind source separation (BSS) with frequency-dependent knowledge of the target frequency band. Other research also shows that advanced blind source separation in multi-antenna/array systems can greatly exceed current approaches based on digitizing ultra-redundant multi-dimensional signals [14].
Micro-ring resonator (MRR) weight banks make weighted addition in silicon photonics possible. Tunable MRR weight banks operating on wavelength-division multiplexed (WDM) optical signals have been shown to vastly exceed the capabilities of electronics in processing analog signals [1, 9]. A photonic principal component analysis (PCA) approach, which makes high-performance dimensionality reduction in wideband analog systems possible, is also realized with weight banks by configuring record-high accuracy and precision weight banks and generating multi-channel correlated input signals in a controllable manner [15, 16, 17]. Independent component analysis (ICA) has also been applied to photonic structures with on-chip MRRs to reveal underlying independent but hidden factors in mixed signals [18]. However, inter-channel and intra-channel crosstalk is a significant source of signal degradation in WDM systems; it reduces the WDM channel count, increases insertion loss, and degrades adjacent channel isolation. Cascaded and series-coupled (second-order) MRR filters [19, 20, 21] have been shown to be effective for low intra-channel crosstalk at higher data rates by offering larger input-to-through suppression over wide bandwidths [22]. To expand the usable channel number and extend the free spectral range (FSR) of MRR filters, grating-assisted couplers have been applied to MRR filters as well [23]. In this work, we apply a series of standard photonic accelerators based on coupled-MRR weight banks.
Several ways to interconnect photonic neurons have been developed based on MRRs. MRRs are the most intuitive structures for implementing tunable weights in neural networks because they can direct signals to the through/drop ports by simply changing the resonance frequency, which makes positive/negative weighting easy to control. Inter-channel cross-talk, however, largely reduces the channel spacing and channel count. Research focused on inter-channel cross-talk has tried various ways to suppress it and increase circuit efficiency.
MRRs need waveguides to couple optical light in and out, and there exist different ways of coupling [24]. In Fig. 1(a), each MRR is coupled to a distinct and parallel waveguide at the drop port, creating coherent interactions between neighboring MRRs. Cross-talk occurs if the wrong channel is partially coupled and runs into an incorrect drop port. Dual-channel side-coupled integrated spaced sequences of optical resonators (SCISSORs) are introduced and
Figure 1: Different types of weight bank [24]. (a). Add-drop multiplexer. (b). Dual-band double channel side-coupled integrated spaced sequence of resonators (SCISSORs). (c). 1-pole MRR filters. Each MRR controls a separate WDM channel. Two waveguides make coherent feedback between surrounding MRRs. (d). 2-pole MRR filters. Interferometer-like feedforward coherent interactions. Letters A, B, and C represent different WDM channels affected by the corresponding resonator.
analyzed in [25], as shown in Fig. 1(b). Let each MRR in Fig. 1(b) control the weight of a separate WDM channel. Every channel is coupled to a neighboring MRR, which can decrease inter-channel cross-talk and create a feedback path for multiple MRRs, as is shown in Fig. 1(c). When using series or cascaded (even-pole) MRRs as components, the corresponding weight banks create feedforward paths with coherence. Interactions between multiple MRRs should be considered for the channel density problem, as is displayed in Fig. 1(d).
## 2 Methods
### Device fabrication and characteristics
This silicon photonic chip is fabricated with AMF technology on a silicon-on-insulator (SOI) wafer at the Institute of Microelectronics (IME) A*STAR foundry [26]. The silicon wafer platform is \(200mm\). The \(220\)nm-thick silicon layer is buried in oxide, with a \(2\mu m\)-thick buried oxide. To fabricate low-loss waveguides with \(500nm\) width, \(248nm\) & \(193nm\) deep ultraviolet (DUV) lithography is adopted. One 4*4 MRR weight bank is composed of 4 MRRs in parallel/series cascaded by 2 bus waveguides (<1.5 dB/cm) to trap light around these MRRs. TE-mode light directed by 8-degree grating couplers is tested in this fabrication with around 4.8dB insertion loss. The output is detected by balanced photo-detectors (BPDs). Metal routing traces and vias are also deposited for interconnects and for electrical probe/bond pads. Individual metal pads for each MRR are probed to thermally tune the resonance, while the ground (GND) metal pad is shared by all MRRs to make the whole chip neater and reduce electrical I/O ports.
### Experimental setup and testing
The expected experimental setup is shown in Fig. 2. Distributed feedback lasers (DFBs) generating optical carriers at 1550 nm provide the optical input signal for the whole photonic accelerator. The optical carriers then flow into four ring filters and then four ring modulators. The input optical carriers are then multiplexed together after these ring resonators. These optical carriers then experience wavelength-dependent channel delays from two split y-branches with different optical path lengths to construct partially correlated inputs. Eventually, the optical inputs enter the IN port of the MRR weight bank, where the weighting on the silicon photonic chip happens.
For tuning purposes, the MRR weight bank is mounted on a temperature-controlled fiber alignment stage. The insertion loss at measurement will include the grating coupler insertion loss (about 5 dB) and the waveguide/bending loss (around 1 dB). The OUT signals at the THRU and DROP ports of the
Figure 2: Schematic of the experimental setup for the photonic accelerator using an on-chip MRR weight bank. DFBs: distributed feedback lasers. SM: source meter. MRR: micro-ring resonator. BPD: balanced photo-detector.
MRR weighted addition are summed by an on-chip balanced photo-detector (BPD). For thermal adjustment, each MRR incorporates an N-doped ring-shaped heater, which is driven by a source meter set to current-source, voltage-measure mode.
## 3 Weight bank design
The detailed designs of the MRRs are shown in Fig. 3. Each MRR is accompanied by 2 silicon waveguides of \(220nm\) width, which are fully etched. Around each MRR is a circular rib waveguide of \(90nm\) width that hosts dopants and is shallowly etched. To realize feedback control of the MRRs, an N-doped photoconductive heater section near the center of the MRR circle is patterned, while a more heavily N++ doped section is also patterned for ohmic contacts, following [27, 28]. From [29], the concentrations of N & N++ should be \(5*10^{17}cm^{-3}\) and \(5*10^{20}cm^{-3}\), respectively. The radii of all MRRs and ring filters are set to \(10\mu m\), but due to fabrication variance, the final outcome may be different. The coupling gaps between the MRR and the rib waveguides are designed to be \(200nm\) on each side, and neighboring rings are separated by \(125\mu m\).
The overview of the standard weight bank design on the silicon chip is displayed in Fig. 3(a). Metal pad arrays around the circuit guide electrical signals from the source meter (SM) to the individual ring filters and MRRs, and are used to thermally offset the resonant frequency of each MRR's optical transmission to configure its weight [28]. The optical wave is directed by grating coupler arrays into and out of the ring filters and further flows into the MRR weight bank. Both on-chip and off-chip balanced photo-detectors (BPDs) can be designed to collect the outputs of the MRRs from complementary ports and perform electrical weighted addition. The MRR zoomed-in graph with an N-doped in-ring heater is also displayed in Fig. 3(b). Due to the inevitable fabrication error in the AMF process, we copied the circuit with exactly the same parameters, as is shown in Fig. 3(c), and added several testing circuits to measure the insertion loss of the grating coupler for later data analysis. The Interconnect results of the demux and the four cascaded ring resonators are also shown in Fig. 3(d) and (e).
Figure 3: Details in weight bank design. (a). Overview GDS design of the standard weight bank design based on a silicon chip. (b). MRR zoomed-in graph with an N-doped in-ring heater. (c). Overall schematic view of the weight bank design after tiling. (d). Interconnect outcome of a demux. (e). Interconnect outcome of 4 series ring resonators.
Physically, this neuromorphic network is implemented with miniature circular waveguides. The corresponding features of the neuron nodes are embedded in the silicon substrate by nano-scale etching. When an input optical signal is captured, the MRR weight bank modulates the output signal of lasers that are near threshold. An insignificant disturbance to values close to threshold will greatly impact the output signals. WDM is realized here by the MRRs, using a specific wavelength of light at each node in the system, and the non-linearity of the system is realized by feedback. The MRR weight bank is calibrated through feedback procedure control [27]. Each MRR is driven by electrical power from the probing SM, and its resonance can be thermally tuned for the corresponding paired laser channel.
The resonance shift determines the circulating optical power within each MRR. This optical power is partly absorbed, which results in a photonic response. The conductivity of the ring-shaped N-doped photoconductive heater is, in turn, partially influenced by this photonic response, and that conductivity is easy to detect with the probing SM [30, 28, 31]. Consequently, by converting the sensed photonic response into an electronic response, the optical transmission of each MRR can be detected directly, realizing a feedback control loop that configures the weight of each MRR over a continuous range.
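The control loop described above can be summarized in a few lines of Python-style pseudocode. This is only an illustrative sketch: the source-meter interface (`sm.source_current`, `sm.measure_voltage`), the linear conductance-to-transmission map, and all numerical values are assumptions for illustration, not the calibration procedure of [27].

```python
def estimate_transmission(conductance, dark_conductance, responsivity=1.0):
    """Estimated drop-port transmission from the photo-induced conductance increase."""
    # Assumption: photoconductance rises roughly linearly with the power dropped
    # into the ring; `responsivity` is a placeholder scale factor.
    return min(max((conductance - dark_conductance) * responsivity, 0.0), 1.0)


def tune_weight(sm, target_transmission, dark_conductance,
                i_heater=0.1e-3, gain=0.05e-3, steps=200, tol=0.01):
    """Proportional feedback on heater current until the MRR reaches the target weight."""
    t = 0.0
    for _ in range(steps):
        sm.source_current(i_heater)        # drive the N-doped in-ring heater
        v = sm.measure_voltage()           # SM in current-source, voltage-measure mode
        conductance = i_heater / v
        t = estimate_transmission(conductance, dark_conductance)
        error = target_transmission - t
        if abs(error) < tol:
            break                          # converged to the requested weight
        i_heater += gain * error           # thermally shift the resonance
    return i_heater, t
```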
Assuming that the signal bandwidth is much narrower than the optical carrier frequency, the time-frequency expression for a WDM input can be written using a slowly-varying envelope approximation and a short-time Fourier transform [24]:
\begin{table}
\begin{tabular}{c c} \hline \hline
**symbol** & **definition** \\ \hline \(\omega_{\text{i}}\) & Optical carrier frequency \\ N & Number of optical carriers \\ \(\text{x}_{\text{i}}\) & Data signals \\ \(\text{E}_{\text{[in]}}(\omega,\text{t})\) & Time-frequency expression for WDM input \\ \(\text{E}_{0.\text{i}}\) & Carrier field amplitude \\ \(\delta\) & Dirac delta function \\ \(\vec{\Delta}\) & Transmission state of filter, resonant wavelength shifts \\ H & Tunable spectral filter response \\ \(+/-\) & Complementary outputs \\ R(\(\omega\)) & Detector responsivity \\ \(\nu_{\pi}\) & Voltage at \(\pi\) phase shift \\ \(\text{Z}_{0}\) & Characteristic impedance \\ \(\mu_{\text{i}}\) & Evaluation of weights \\ H(+,-) & Transmission functions \\ Tj & Amplitude transmission of the MRR through port \\ Dj & Amplitude transmission of the MRR drop port \\ \hline \hline \end{tabular}
\end{table}
Table 1: Definitions of symbols.
\[E_{[in]}(\omega,t)=\sum_{i=1}^{N}E_{0,i}\sqrt{1+x_{i}(t)}\delta(\omega-\omega_{i}) \tag{1}\]
where \(x_{i}(t)\) is strictly greater than \(-1\). The transmission state of the filter is configured by tuning a parameter vector, \(\vec{\Delta}\).
\[E_{[wei]}^{+,-}(\omega,t)=H^{+,-}(\omega;\vec{\Delta})E_{[in]}(\omega,t) \tag{2}\]
The \(+/-\) superscripts represent the complementary outputs of the tunable spectral filter response. The output of a balanced photodiode (PD) is the difference between the two photocurrents derived from the weighted signals.
\[i_{PD}(t)=\int_{\omega}R(\omega)(|E_{wei}^{+}(\omega,t)|^{2}-|E_{wei}^{-}(\omega,t)|^{2})\,\mathrm{d}\omega \tag{3}\]
The net function, which fits the form of a weighted addition, can be expressed as:
\[\mu_{i}=A_{i}(|H^{+}(\omega_{i};\overrightarrow{\Delta})|^{2}-|H^{-}(\omega_ {i};\overrightarrow{\Delta})|^{2}) \tag{4}\]
\[y(t)=\sum_{i=1}^{N}\mu_{i}x_{i}(t)+\sum_{i=1}^{N}\mu_{i} \tag{5}\]
where
\[A_{i}\equiv R(\omega_{i})\cdot Z_{0}/\nu_{\pi}\cdot E_{0,i}^{2} \tag{6}\]
\[y=i_{PD}\cdot Z_{0}/\nu_{\pi} \tag{7}\]
\[y=\vec{\mu}\cdot\vec{x} \tag{8}\]
The final scattering matrix of 2-channel weight bank can be derived:
\[\begin{bmatrix}\frac{1}{T_{1}}&0&0&-D_{2}T_{1}^{-1}\\ 0&\frac{T_{1}T_{2}-D_{1}D_{2}}{T_{2}}&D_{1}T_{2}^{-1}&0\\ 0&-D_{2}T_{2}^{-1}&T_{2}^{-1}&0\\ \frac{D_{1}}{T_{1}}&0&0&\frac{T_{1}T_{2}-D_{1}D_{2}}{T_{1}}\end{bmatrix}\]
In order to better understand various symbols and definitions, we summarize all the expressions used in formulas and equations in Table 1.
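As a numerical sanity check of Eqs. (1)-(8), the weighted-addition model can be simulated directly. The sketch below is a simplified model under assumed parameters: each MRR is given a Lorentzian drop response, and the linewidth, detunings, carrier power, and responsivity are placeholder values rather than measured device data.

```python
def drop_power(detuning_ghz, linewidth_ghz):
    """Fraction of carrier power routed to the drop port (Lorentzian approximation)."""
    return 1.0 / (1.0 + (2.0 * detuning_ghz / linewidth_ghz) ** 2)


def weight_bank_output(x, detunings, linewidth=2.0, responsivity=1.0, carrier_power=1.0):
    """Balanced-PD output for input signals x and per-channel resonance detunings."""
    y = 0.0
    for xi, d in zip(x, detunings):
        p_in = carrier_power * (1.0 + xi)           # modulated carrier power, Eq. (1)
        p_drop = drop_power(d, linewidth) * p_in    # drop-port branch
        p_thru = p_in - p_drop                      # complementary through-port branch
        y += responsivity * (p_drop - p_thru)       # balanced PD subtracts the two, Eq. (3)
    return y


detunings = (0.0, 1.0, 3.0)                                  # placeholder detunings in GHz
mu = [2.0 * drop_power(d, 2.0) - 1.0 for d in detunings]     # effective weights in [-1, 1]
x = [0.2, -0.1, 0.4]
print(weight_bank_output(x, detunings))
print(sum(m * xi for m, xi in zip(mu, x)) + sum(mu))         # agrees with Eq. (5)
```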
## 4 Conclusion
In this project, we designed an on-chip MRR weight bank that implements the photonic addition function. Grating couplers for the TE mode at a 1550nm wavelength are used to introduce the optical signal into the chip. Electrical signals are directed by metal pads to thermally tune each ring filter, ring modulator, MRR, and BPD. In case of fabrication error during the chemical processing at AMF, we copied the circuit with exactly the same parameters to compare the fabrication fitness. Ring-like heaters at the ring filters can also help adjust the center wavelength of the input. The micro-rings in the design are implemented as add-drop rings based on a reverse-biased
PN-junction, where each individual input signal has a different center frequency \(\lambda_{i}\) and can only be partially transmitted. Because of the high-Q nature of the micro-ring, an insignificant center spectral shift will lead to a massive transmission change [32], as is shown in Fig. 4a. The center spectral shift caused by the voltage applied to the PN-junction indicates that the transmission function of the input signal depends on the voltage, which means the micro-ring can separately control the WDM input pulse transmission through voltage changes. Ring modulators together with y-branches of different optical path lengths construct partially correlated inputs for the IN port of the MRR weight banks. An on-chip BPD is also adopted to perform the weighted addition/subtraction operation. Testing circuits with only grating couplers and waveguides are utilized to determine the insertion loss of the GC for future data analysis. After measurement, we expect partially correlated channel signals as shown in Fig. 4b, which exhibits waveform pairs of partially correlated signals associated with 3 typical \(\alpha\) values (\(\alpha=+0.8/1/-0.8\)) [15].
## 5 Acknowledgments
Fabrication support was provided via the Natural Sciences and Engineering Research Council of Canada (NSERC) Silicon Electronic-Photonic Integrated Circuits (SiEPIC) Program and the Canadian Microelectronics Corporation (CMC). Devices were fabricated at the Advanced Micro Foundry (AMF) A*STAR foundry in Singapore.
|
2301.00861 | Hardware Abstractions and Hardware Mechanisms to Support Multi-Task
Execution on Coarse-Grained Reconfigurable Arrays | Domain-specific accelerators are used in various computing systems ranging
from edge devices to data centers. Coarse-grained reconfigurable arrays (CGRAs)
represent an architectural midpoint between the flexibility of an FPGA and the
efficiency of an ASIC and are a promising candidate for servicing multi-tasked
workloads within an application domain. Unfortunately, scheduling multiple
tasks onto a CGRA is challenging. CGRAs lack abstractions that capture hardware
resources, leaving workload schedulers unable to reason about performance,
energy, and utilization for different schedules. This work first proposes a
CGRA architecture that can flexibly partition key resources, including the
global buffer memory capacity, the global buffer memory bandwidth, and the
compute resources. Partitioned resources serve as hardware abstractions that
decouple compilation and resource allocation. The compiler uses these
abstractions for coarse-grained resource mapping, and the scheduler uses them
for flexible resource allocation at run time. We then propose two hardware
mechanisms to support multi-task execution. A flexible-shape execution region
increases the overall resource utilization by mapping multiple tasks with
different resource requirements. Dynamic partial reconfiguration (DPR) enables
a CGRA to update the hardware configuration as the scheduler makes decisions
rapidly. We show that our abstraction can help automatic and efficient
scheduling of multi-tasked workloads onto our target CGRA with high
utilization, resulting in 1.05x-1.24x higher throughput and a 23-28% lower
latency in a multi-tasked cloud workload and 60.8% reduced latency in an
autonomous system workload when compared to a baseline CGRA running single
tasks at a time. | Taeyoung Kong, Kalhan Koul, Priyanka Raina, Mark Horowitz, Christopher Torng | 2023-01-02T20:00:41Z | http://arxiv.org/abs/2301.00861v1 | Hardware Abstractions and Hardware Mechanisms to Support Multi-Task Execution on Coarse-Grained Reconfigurable Arrays
###### Abstract
Domain-specific accelerators are used in various computing systems ranging from edge devices to data centers. Coarse-grained reconfigurable arrays (CGRAs) represent an architectural midpoint between the flexibility of an FPGA and the efficiency of an ASIC and are a promising candidate for servicing multi-tasked workloads within an application domain. Unfortunately, scheduling multiple tasks onto a CGRA is challenging. CGRAs lack abstractions that capture hardware resources, leaving workload schedulers unable to reason about performance, energy, and utilization for different schedules. This work first proposes a CGRA architecture that can flexibly partition key resources, including the global buffer memory capacity, the global buffer memory bandwidth, and the compute resources. Partitioned resources serve as hardware abstractions that decouple compilation and resource allocation. The compiler uses these abstractions for coarse-grained resource mapping, and the scheduler uses them for flexible resource allocation at run time. We then propose two hardware mechanisms to support multi-task execution. A flexible-shape execution region increases the overall resource utilization by mapping multiple tasks with different resource requirements. Dynamic partial reconfiguration (DPR) enables a CGRA to update the hardware configuration as the scheduler makes decisions rapidly. We show that our abstraction can help automatic and efficient scheduling of multi-tasked workloads onto our target CGRA with high utilization, resulting in 1.05x-1.24x higher throughput and a 23-28% lower latency in a multi-tasked cloud workload and 60.8% reduced latency in an autonomous system workload when compared to a baseline CGRA running single tasks at a time.
## 1 Introduction
Domain-specific accelerators have gained growing interest in recent years as they provide improved performance and energy efficiency over general-purpose processors. Application-specific integrated circuits (ASICs) [8, 18, 21] show the highest performance and efficiency as they are specialized for target applications such as image processing or machine learning (ML). However, the ASIC design process can span multiple years, and fixed-function accelerators quickly become obsolete as applications continue to evolve. Some works deploy applications on FPGAs [12, 16, 17]. FPGAs enable reconfiguration of the underlying hardware and can accelerate diverse workloads, but their bit-level flexibility incurs high area and energy overheads. Coarse-grained reconfigurable arrays (CGRAs) are promising architectures that lie between ASICs and FPGAs. A CGRA has arithmetic units and a routing system that are configurable in word-level granularity, providing flexibility at a lower overhead than a FPGA. With its unique advantages, a CGRA can be widely adopted in domains with high performance, power, and flexibility requirements.
As hardware accelerators are deployed in various scenarios, the demand for multi-task execution support on hardware is growing. For example, many vendors [21, 13] offer INFerence-as-a-Service, where multiple tenants share the same hardware to run inference tasks. Also, an autonomous system handles concurrent tasks to process various types of data from numerous sensors. Some works have explored multi-task execution support in ASICs and FPGAs. PREMA [11] and Planaria [14] propose a systolic array that supports multi-tenancy by temporal and spatial multiplexing, respectively. [35, 29, 34] propose an FPGA virtualization framework with multi-tenancy support. However, multi-task execution support on CGRAs has not been explored much thus far. A noteworthy exception is ChordMap [27] which schedules multiple tasks captured in synchronous data flow graphs onto a CGRA. However, it assumes that all tasks are known a priori, whereas in a multi-tenant cloud or multi-tasked edge workload scenario, tasks may arrive dynamically and require schedulers to react to maximize utilization.
Unfortunately, scheduling multiple tasks onto a CGRA is challenging as it lacks abstractions capturing hardware resources. In this paper, we propose hardware abstractions of a CGRA by partitioning key hardware resources. Both compilers and schedulers can exploit the abstrac
tions to reason about performance, energy, and utilization. We also develop hardware mechanisms that allow fast and flexible multi-task execution on a CGRA, which schedulers exploit to improve hardware utilization. We evaluate our CGRA with two different multi-tasked workload scenarios to show the potential. Our key contributions are:
[MISSING_PAGE_POST]
memory bandwidth, and the compute resources within the tile array. When a task is compiled in the Amber toolchain [23], a compiler converts it into a dataflow graph where each node and edge represents a hardware resource and communication, respectively. Specifically, GLB banks are used for medium-sized storage and communication to the host and tile array, and PE and MEM tiles are used for computation and as small scratchpads. From the dataflow graph, the usage of memory capacity, memory bandwidth, compute units, and throughput can be derived.
We abstract the hardware resources by partitioning the GLB and tile array into homogeneous GLB-slices and array-slices, respectively. For example, we can abstract each GLB bank within our CGRA as a GLB-slice and every set of four columns in the tile array (48 PE tiles and 16 MEM tiles) as an array-slice. This abstraction serves as a middle layer that decouples offline bitstream generation by a compiler and run time resource allocation by a scheduler. During compilation, we represent the resource usage of each task using these abstracted GLB-slices and array-slices. For instance, a _conv2\(x\) layer in [19] utilizes 750KB of GLB memory capacity, 17.3MB/s of memory bandwidth, 80 PE tiles, and 17 MEM tiles and achieves 64 OPs/cycle throughput at a 500MHz clock frequency. The task is abstracted as seven GLB-slices and two array-slices in coarse-grain resource slice usage. It is possible to produce variants of the same task with different resource usage and throughput by tweaking the compiler. For example, increasing the unroll factor of the same task by four would achieve 4x throughput (256 OPs/cycle) with 288 PE tiles, 33 MEM tiles, and the same GLB memory capacity and bandwidth, which is abstracted as seven GLB-slices and six array-slices. Our approach allows for pre-computation of bitstreams that support different resource usage and throughput to be cached in on-chip storage to support fast dynamic partial reconfiguration, as discussed later. Table 1 summarizes the resource usage and throughput for several different variants of tasks. At run time, a scheduler leverages the hardware slice abstraction to decide which variant of tasks to choose, which resources to allocate, and when to execute.
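For illustration, the slice abstraction can be captured in a small data structure. The sketch below is our own simplified rendering: the conv2_x variant numbers are taken from the text above, while the class names and the highest-throughput-that-fits selection policy are illustrative assumptions rather than part of the actual toolchain or scheduler.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskVariant:
    name: str
    glb_slices: int      # GLB banks (capacity and bandwidth)
    array_slices: int    # groups of four tile-array columns (48 PEs + 16 MEMs each)
    throughput: int      # OPs/cycle at a 500 MHz clock

# Numbers for the two conv2_x variants come from the text above.
CONV2X_VARIANTS = [
    TaskVariant("conv2_x",         glb_slices=7, array_slices=2, throughput=64),
    TaskVariant("conv2_x_unroll4", glb_slices=7, array_slices=6, throughput=256),
]

def pick_variant(variants, free_glb, free_array):
    """Pick the highest-throughput precompiled variant that fits in the free slices."""
    fitting = [v for v in variants
               if v.glb_slices <= free_glb and v.array_slices <= free_array]
    return max(fitting, key=lambda v: v.throughput) if fitting else None

# With 8 free GLB-slices but only 4 free array-slices, only the base variant fits.
print(pick_variant(CONV2X_VARIANTS, free_glb=8, free_array=4))
```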
### Hardware Mechanisms
**Flexible-Shape Execution Regions**. To manage multiple concurrently running tasks, we need a way to monitor hardware resources and the status of tasks, built upon the abstractions described above. We introduce an _execution region_, a sub-region of the CGRA on which a single task is mapped and executed. An execution region consists of one or more GLB-slices and array-slices. The flexibility to form execution regions of different sizes and shapes gives the scheduler a simplified and quantized view of hardware resources while providing enough information to allocate resources to each task and maximize resource utilization in multi-tasked workloads.
Figure 2 compares different mechanisms to form an execution region and how they affect resource allocation.
Figure 2: Resource allocation in the baseline CGRA and a CGRA with three different execution regions. Resources colored grey represent the blocks occupied by a current-running task, and those colored red represent blocks occupied by a next-running task.
The blocks colored in gray represent resources occupied by the currently running task, and those colored in red represent resources allocated to the next-running task. The baseline CGRA (Figure 2(a)) is unaware of our hardware slice abstraction, and the entire CGRA serves as a single large execution region. Since an existing task is already mapped onto the CGRA, subsequent tasks are always forced to wait until the previous tasks finish and release the single execution region.
The simplest mechanism to form an execution region is to support only fixed-sized regions. For example, all execution regions in Figure 2(b) consist of two GLB-slices and one array-slice. Fixed-sized regions are not optimal. Since each task must fit within the fixed-sized execution region, the largest task with the highest resource usage determines the size. On the other hand, when there are several available execution regions, a task can be unrolled and mapped in parallel to achieve higher throughput (e.g., the next-running task is unrolled by three in Figure 2(b)). This method does not require much architectural change, and the implementation of a scheduling algorithm can be straightforward given the assumption that all target tasks fit within an execution region. However, although unrolling increases throughput, optimization across the unrolled dimension can be challenging to support.
Another method is to support variably sized execution regions by merging multiple fixed-sized regions. We define the unit size of a region as in the fixed-sized region case, but we can merge multiple unit regions to form a larger execution region. For example, in Figure 2(c), three unit-sized regions are merged to execute the next-running task (colored in red). The benefit of variably sized execution regions is to allow compilation optimization across the unrolled dimension. For example, a _camera pipeline_ task with 3 pixels/cycle throughput uses four array-slices (Table 1). Naively unrolling it by four achieves 12 pixels/cycle throughput using 16 array-slices. However, the compiler can optimize to time-multiplex PE tiles and achieve 12 pixels/cycle throughput with only six array-slices. Support for a variably sized region still allows for the pre-computation of bitstreams for multiple variants of tasks with different resource usage and throughput. However, this approach may still suffer from low resource utilization since the ratio of GLB-slices and array-slices within an execution region always remains the same.
Therefore, we propose _flexible-shape execution regions_ in which GLB-slices and array-slices are no longer coupled. Decoupling of GLB-slices and array-slices enables finer-grained resource allocation. For example, Figure 2(d) shows how an execution region can be allocated any number of GLB-slices and array-slices, forming a non-rectangular shape, with remaining array-slices and GLB-slices available to be used by other tasks. The support for flexible-shape execution regions improves resource utilization, especially for multi-tasked workloads where memory-intensive and compute-intensive tasks are mixed. However, it may require additional communication between the GLB-slices and the array-slices. In this work, we limit the placement of GLB-slices and array-slices within an execution region to be contiguous to simplify our study. Design space exploration on flexible placement support and the required network remains future work. Section 3.1 describes the benefits of these mechanisms in more detail with a cloud system example.
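As an illustration of how a run-time scheduler could reason over such regions, the sketch below tracks GLB-slices and array-slices as two independent free lists and reserves a contiguous run of each for a new execution region. All class and method names are hypothetical; the actual allocation logic in hardware and in our scheduler may differ.

```python
# Hypothetical run-time view of the CGRA: GLB-slices and array-slices are
# tracked independently, and an execution region reserves a contiguous run of
# each kind, so memory-heavy and compute-heavy tasks can be packed together.
class SliceAllocator:
    def __init__(self, num_glb_slices, num_array_slices):
        self.glb_free = [True] * num_glb_slices
        self.array_free = [True] * num_array_slices

    @staticmethod
    def _find_contiguous(free, count):
        run = 0
        for i, is_free in enumerate(free):
            run = run + 1 if is_free else 0
            if run == count:
                return i - count + 1          # start index of the free run
        return None

    def allocate(self, glb_slices, array_slices):
        g = self._find_contiguous(self.glb_free, glb_slices)
        a = self._find_contiguous(self.array_free, array_slices)
        if g is None or a is None:
            return None                        # task must wait for resources
        for i in range(g, g + glb_slices):
            self.glb_free[i] = False
        for i in range(a, a + array_slices):
            self.array_free[i] = False
        return {"glb": (g, glb_slices), "array": (a, array_slices)}

    def release(self, region):
        g, gn = region["glb"]
        a, an = region["array"]
        for i in range(g, g + gn):
            self.glb_free[i] = True
        for i in range(a, a + an):
            self.array_free[i] = True

allocator = SliceAllocator(num_glb_slices=16, num_array_slices=8)
region = allocator.allocate(glb_slices=7, array_slices=2)  # a flexible-shape region
print(region)
```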
**Dynamic Partial Reconfiguration.** Dynamic partial reconfiguration (DPR) is a mechanism to update the hardware configuration in reconfigurable architectures. We propose fast-DPR following the DPR mechanism proposed in Amber SoC [7], but with added features to exploit hardware abstractions. In Amber, every other GLB bank stores the configuration bitstreams and independently streams configuration into two columns of the tile array. Also, clocks and configuration signals are distributed down each column together, enabling reconfiguring the tile array at high clock frequency without pipeline stages. In our CGRA, we also reuse GLB blocks to store and stream bitstreams to the tile array and follow the same clock distribution network. Unlike Amber, however, one GLB bank streams configuration into one array-slice (in turn, four columns of the tile array) as an array-slice is the minimum unit of execution regions.
We added a feature to relocate bitstreams at run time to exploit hardware abstractions further. In Amber, the compiler generates region-aware bitstreams; the bitstreams for one region cannot be reused in different regions even though the two regions are homogeneous. This limitation comes from the fact that the address of each configuration register in different columns has a distinct column ID. On the other hand, our compiler generates region-agnostic bitstreams by assuming that the task is always mapped to the leftmost region. We also added a register indicating the destination region of DPR to GLB banks. When the host processor triggers DPR, GLB banks read the register and stream bitstreams to the target region via the network between the GLB and the tile array. With this bitstream relocation feature, a user can pre-load bitstreams of the next task to the GLB in advance and rapidly map it to any next available region just by writing to a single register.
## 3 Evaluation
We evaluate the benefits of multi-task execution support under two different workload scenarios. In a cloud system example scenario (Section 3.1), our CGRA with flexible-shape execution regions enables 1.05x-1.24x higher throughput and 23-28% lower normalized turn-around time (NTAT) over the baseline CGRA. In an autonomous system example scenario (Section 3.2), our CGRA enables 60.8% reduced total latency.
### Example 1: Cloud System
**Overview**. In this example, we construct a synthetic cloud computing scenario that models real-world examples in which the CGRA serves application requests from multiple users (Figure 3(a)). We construct the multi-tasked workload using kernels from machine learning (ML) and image processing domains, including ResNet-18 [19] and MobileNet [20] from the ML domain, and camera pipeline and Harris corner detector from the image processing domain. Table 1 summarizes the benchmark tasks and their resource requirements.
To generate the multi-tasked workload, we assume four tenants share the CGRA and are assigned one of the four target applications. Each tenant sends a request to the CGRA following a Poisson distribution. Whenever a new task arrives, or an existing task finishes, the scheduler is triggered and runs a greedy algorithm to schedule the next available task. The scheduler checks if dependencies are met before scheduling the task (e.g., in ResNet-18, _conv2_x_ depends on _conv1_x_). If there is more than one version of a task that can be mapped onto the available resources, the greedy scheduler always chooses the one with the highest throughput.
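The greedy policy can be summarized by the following standalone sketch (the task encoding and all names are hypothetical): on every scheduling event, each ready task is placed using its highest-throughput pre-compiled variant that still fits into the free slices.

```python
# Standalone sketch of the greedy policy (task encoding and names are
# hypothetical): place each ready task with its highest-throughput variant
# that fits into the currently free GLB-slices and array-slices.
def greedy_schedule(pending, free_glb, free_array, finished):
    placed = []
    for task in list(pending):
        if not all(dep in finished for dep in task["deps"]):
            continue                      # e.g. conv2_x waits for conv1_x
        for var in sorted(task["variants"], key=lambda v: -v["tput"]):
            if var["glb"] <= free_glb and var["array"] <= free_array:
                free_glb -= var["glb"]
                free_array -= var["array"]
                placed.append((task["name"], var))
                pending.remove(task)
                break
    return placed, free_glb, free_array

pending = [
    {"name": "conv2_x", "deps": ["conv1_x"],
     "variants": [{"glb": 7, "array": 2, "tput": 64},
                  {"glb": 7, "array": 6, "tput": 256}]},
]
print(greedy_schedule(pending, free_glb=16, free_array=8, finished={"conv1_x"}))
```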
**Metrics**. We measure _Normalized Turn-Around Time_ and _throughput_ to compare the baseline CGRA and the three partitioning mechanisms described in Section 2.3. _Turn-Around Time_ (TAT) is the interval from the time of request to submit a task to the time of task completion. _Normalized Turn-Around Time_ (NTAT) is the ratio of the TAT to the execution time, which represents the relative delay of a task (Equation (1) - (2)). We calculate NTAT for each request and the arithmetic average for each application. We also measure the average throughput for each application to demonstrate the performance benefit.
\[\mathit{TAT}\ =\ \mathit{wait\_time}\ +\ \mathit{execution\_time} \tag{1}\] \[\mathit{NTAT}\ =\ \mathit{TAT}\ /\ \mathit{execution\_time} \tag{2}\]
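The two metrics translate directly into code; the snippet below assumes each request is logged as (submit time, start time, finish time).

```python
# Each request is logged as (submit_time, start_time, finish_time).
def average_ntat(requests):
    ntats = []
    for submit, start, finish in requests:
        execution_time = finish - start
        tat = (start - submit) + execution_time    # wait_time + execution_time
        ntats.append(tat / execution_time)         # NTAT = TAT / execution_time
    return sum(ntats) / len(ntats)                 # arithmetic average per application

print(average_ntat([(0.0, 2.0, 10.0), (1.0, 1.5, 4.0)]))
```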
**Results**. Figure 4 illustrates the relative improvements in NTAT and throughput for flexible-shape execution regions compared to fixed- and variably-sized execution regions. Even with a simple greedy scheduling algorithm, we achieve 23-28% decreased NTAT and 1.05x-1.24x higher throughput. Note that we only pre-compile each task to two different variants in this case study (Table 1), and a scheduler greedily selects the one with higher throughput if resources are available. Co-optimizing compilation and scheduling policy may improve NTAT and throughput further, which remains future work.
### Example 2: Autonomous System
**Overview**. In this case study, we construct a synthetic edge system scenario modeling the real world in which multiple tasks from image processing and ML domains execute in parallel and can be dynamically triggered. Specifically, we develop an autonomous system scenario as described in Figure 3(b) following a methodology used in [30]. 2 The system takes a RAW image in Bayer encoding format (RGGB) from sensors at 30 fps and first runs a _camera
Figure 4: Evaluation in a cloud system example. (a) NTAT and (b) throughput for each task with fixed-sized, variably sized, and flexible-shape resource partitioning, normalized to the baseline CGRA. Flexible-shape partitioning decreases NTAT by 23-28% and increases throughput by 1.05x-1.24x.
Figure 3: (a) Cloud system example scenario with four tenants submitting requests to the CGRA. Each tenant is assigned with a task from _MobileNet_, _ResNet-18_, _camera pipeline_, and _Harris_, respectively. (b) Autonomous system example with tasks that may be triggered under conditions.
_pipeline_ task on the CGRA to convert to an RGB image. Once the CGRA generates an RGB image, the system runs object detection and dynamically decides on the next tasks. 3 When an event happens (e.g., detection of a specific background), it processes the event and executes the corresponding tasks (e.g., _depth estimation_). Except for a _camera pipeline_ that runs every frame, we set the period from one event to the next same event to follow a uniform random distribution between 3-7 frames.
Footnote 3: This work assumes that object detection is executed in another hardware in the system (e.g. GPU or ASIC).
**Results**. We evaluate the benefit of hardware resource partitioning and fast DPR by comparing our proposed CGRA to the baseline CGRA with AXI4-Lite-based DPR. Specifically, the baseline CGRA maps only one task at a time. When more than one event occurs, the baseline handles each task one by one and reconfigures using sequential AXI4-Lite configuration transactions. In the proposed CGRA with multi-task execution support, we exploit flexible-shape resource partitioning to concurrently run more than one task on the CGRA when possible. Also, we use the parallel and high-frequency DPR mechanisms in Section 2.3 to configure bitstreams. We compute the arithmetic average of the latency over all frames. As described in Figure 5, our techniques enable a 60.8% latency reduction compared to the baseline. With fast DPR, reconfiguration takes less than 5% of the total latency, an appreciable reduction from 14.4% in the baseline.
## 4 Related Work
As Deep Neural Networks (DNNs) are widely used in various domains, DNN accelerators [18, 17, 8, 9, 10, 25] have emerged and been deployed in the cloud system [21, 13]. To that end, many prior works have explored multi-tenancy support on DNN accelerators in cloud systems. Multi-task execution support is also studied in FPGAs targeting both cloud and edge computing. However, a non-negligible portion of FPGA resources is typically reserved for controlling multi-task execution, ultimately decreasing the available computing resources. ChordMap [27] explores the automated mapping of multi-tasked applications onto a CGRA, but it is limited to mapping multiple tasks within streaming applications with all tasks known a priori. Our work proposes hardware abstractions and mechanisms, which both compilers and schedulers can exploit and co-optimize to improve resource utilization in both cloud and edge systems.
**Multi-Task Execution on DNN Accelerators**. Some DNN accelerators service multi-DNN tasks at the software level. AI-MT [2] and Layerweaver [31] propose a scheduling policy to mix compute- and memory-intensive tasks to increase hardware utilization. PREMA [11] implements preemptible NPUs to support multi-tenancy via temporal multiplexing. Many works add flexibility to an accelerator to accommodate multiple DNN tasks. Planaria [14] introduces a flexible systolic array with dynamic architecture fission to map multiple DNN tasks. [26] suggests a multi-directional network to support up to four DNN tasks with different dataflow. Other works [24, 3] explore a computing system with multiple DNN accelerators with different hardware characteristics. While these works only support DNN workloads, our work can support any applications that can be mapped onto a CGRA.
**Multi-Task Execution on FPGAs**. In FPGAs, multi-task execution support has been explored in the context of virtualization. Some works divide an FPGA into a static region, a _shell_, which serves as glue logic between the host and the FPGA, and a dynamic region, a _role_, which handles the computation of tasks. [4, 5, 33] partition a physical FPGA into several fixed-size virtual blocks and share them across multiple tasks. AmorphOS [22] presents a hardware abstraction of an FPGA, _Morphlet_, which dynamically alters its size based on resource requirements. ViTAL [35] provides a full-stack framework to run multiple tasks with different sizes on homogeneous regions. [34] supports running multi-DNN tasks on an FPGA by dividing hardware resources into multiple PE cores and spatially multiplexing them, while [30] evaluates the benefits of temporal multiplexing of FPGAs using DPR for vision applications on embedded devices. While these works only target scenarios where underlying applications change infrequently because of long reconfiguration time of FPGAs, our work can support both cloud systems and real-time edge systems due to rapid DPR.
## 5 Conclusion
Multi-task execution support on accelerators is becoming increasingly relevant in both cloud and edge systems and
Figure 5: The average latency of an autonomous system example with different execution regions. The values are normalized to the result of the baseline. A red bar indicates the time spent for reconfiguration, and a blue bar indicates the sum of wait time and execution time. To show the benefit of fast-DPR (Section 2.3), we assume the baseline CGRA uses AXI4-Lite interface for DPR, while others use fast-DPR.
has the potential to improve performance through better hardware utilization. This work proposes abstracting hardware resources within a CGRA into coarser-grained units with which a workload scheduler can quickly make decisions. Based on the proposed abstraction, we develop hardware mechanisms to support multi-task execution through flexible-shape hardware partitioning and high-throughput dynamic partial reconfiguration. Our evaluations modeling both a cloud and an edge system scenario suggest that the abstraction and hardware mechanisms can enable automatic schedulers to achieve high performance in multi-tasked workloads on future CGRAs.
|
2310.07697 | ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation | Recent works have successfully extended large-scale text-to-image models to
the video domain, producing promising results but at a high computational cost
and requiring a large amount of video data. In this work, we introduce
ConditionVideo, a training-free approach to text-to-video generation based on
the provided condition, video, and input text, by leveraging the power of
off-the-shelf text-to-image generation methods (e.g., Stable Diffusion).
ConditionVideo generates realistic dynamic videos from random noise or given
scene videos. Our method explicitly disentangles the motion representation into
condition-guided and scenery motion components. To this end, the ConditionVideo
model is designed with a UNet branch and a control branch. To improve temporal
coherence, we introduce sparse bi-directional spatial-temporal attention
(sBiST-Attn). The 3D control network extends the conventional 2D controlnet
model, aiming to strengthen conditional generation accuracy by additionally
leveraging the bi-directional frames in the temporal domain. Our method
exhibits superior performance in terms of frame consistency, clip score, and
conditional accuracy, outperforming other compared methods. | Bo Peng, Xinyuan Chen, Yaohui Wang, Chaochao Lu, Yu Qiao | 2023-10-11T17:46:28Z | http://arxiv.org/abs/2310.07697v2 | # ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation
###### Abstract
Recent works have successfully extended large-scale text-to-image models to the video domain, producing promising results but at a high computational cost and requiring a large amount of video data. In this work, we introduce ConditionVideo, a training-free approach to text-to-video generation based on the provided condition, video, and input text, by leveraging the power of off-the-shelf text-to-image generation methods (_e.g._, Stable Diffusion). ConditionVideo generates realistic dynamic videos from random noise or given scene videos. Our method explicitly disentangles the motion representation into condition-guided and scenery motion components. To this end, the ConditionVideo model is designed with a UNet branch and a control branch. To improve temporal coherence, we introduce sparse bi-directional spatial-temporal attention (sBiST-Attn). The 3D control network extends the conventional 2D controlnet model, aiming to strengthen conditional generation accuracy by additionally leveraging the bi-directional frames in the temporal domain. Our method exhibits superior performance in terms of frame consistency, clip score, and conditional accuracy, outperforming compared methods. See the project website at [https://pengbo807.github.io/conditionvideo-website/](https://pengbo807.github.io/conditionvideo-website/).
## 1 Introduction
Diffusion-based models [22, 23, 24, 25] demonstrate impressive results in large-scale text-to-image (T2I) generation [26, 27, 28]. Much of the existing research proposes to utilize image generation models for video generation. Recent works [25, 26, 27] attempt to inflate the success of the image generation model to video generation by introducing temporal modules. While these methods reuse image generation models, they still require a massive amount of video data and training with significant amounts of computing power. Tune-A-Video [22] extends Stable Diffusion [22] with additional attention and a temporal module for video editing by tuning one given video. It significantly decreases the training workload, although an optimization process is still necessary. Text2Video [25] proposes training-free generation; however, the generated videos fail to simulate natural background dynamics. Consequently, the question arises: _How can we effectively utilize image generation models without any optimization process and embed controlling information as well as model dynamic backgrounds for video synthesis?_
In this work, we propose ConditionVideo, a training-free conditional-guided video generation method that utilizes off-the-shelf text-to-image generation models to generate realistic videos without any fine-tuning. Specifically, aiming at generating dynamic videos, our model disentangles the representation of motion in videos into two distinct components: conditional-guided motion and scenery motion, enabling the generation of realistic and temporally consistent frames. By leveraging this disentanglement, we propose a pipeline that consists of a UNet branch and a control branch, with two separate noise vectors utilized in the sampling process. Each noise vector represents conditional-guided motion and scenery motion, respectively. To further enforce temporal consistency, we introduce sparse bi-directional spatial-temporal attention (sBiST-Attn) and a 3D control branch that leverages bi-directional adjacent frames in the temporal dimension to enhance conditional accuracy. These components strengthen our model's ability to generate high-quality conditional-guided videos. Our ConditionVideo method outperforms the baseline methods in terms of frame consistency, conditional accuracy, and clip score.
Our key contributions are as follows. (1) We propose ConditionVideo, a training-free video generation method that leverages off-the-shelf text-to-image generation models to generate conditional-guided videos with realistic dynamic backgrounds. (2) Our method disentangles motion representation into conditional-guided and scenery motion components via a pipeline that includes a U-Net branch and a conditional-control branch. (3) We introduce sparse bi-directional spatial-temporal attention (sBiST-Attn) and a 3D conditional-control branch to improve conditional accuracy and temporal consistency, generating high-quality videos that outperform compared methods.
## 2 Related work
### Diffusion Models
Image diffusion models have achieved significant success in the field of generation [13, 22, 23, 24], surpassing numerous generative models that were once considered state-of-the-art [10, 25]. With the assistance of large language models [26, 27], current research can generate videos from text, contributing to the prosperity of image generation [26, 27].
Recent works in video generation [25, 13, 26, 28, 29, 30, 31, 32, 33, 34] aim to emulate the success of image diffusion models. Video Diffusion Models [13] extends the UNet [27] to 3D and incorporates factorized spacetime attention [1]. Imagen Video [26] scales this process up and achieves superior resolution. However, both approaches involve training from scratch, which is both costly and time-consuming.
Alternative methods explore leveraging pre-trained text-to-image models. Make-A-Video [25] facilitates text-to-video generation through an expanded unCLIP framework. Tune-A-Video [26] employs a one-shot tuning pipeline to generate edited videos from input guided by text. However, these techniques still necessitate an optimization process.
Compared to these video generation methods, our training-free method can yield high-quality results more efficiently and effectively.
### Conditioning Generation
While GAN-based methods have made considerable progress in conditional generation [28, 29, 26, 27, 28, 29, 28, 29, 30, 31, 32], research on the conditional generation of diffusion models is limited. For the diffusion model-based methods, T2I Adapter [26] and ControlNet [25] aim to enhance controllability through the use of extra annotations. T2Iadapter [27] proposes aligning internal knowledge with external control signals to facilitate image generation. On the other hand, ControlNet duplicates and fixes the original weight of the large pre-trained T2I model. Utilizing the cloned weight, ControlNet trains a conditional branch for task-specific image control.
For diffusion-based conditional video generation, recent works have centered on text-driven video editing [28, 29, 26, 29, 30, 31, 32]. These methods prioritize similarity with the input video rather than creating new content. In contrast, our method uses dynamic features from the input reference video for more creative generation and can add extra foreground movements. Concurrent works like Follow-Your-Pose [26] and Text2Video-Zero [27]
generate videos based on given conditions. However, these methods still require training or have difficulty in generating realistic background movements (_e.g.,_ the flow of the waves in Fig. 1 (a). The dynamic version on our website can better show the advantages of our method.) Moreover, we propose additional techniques to improve time and conditioning consistency and introduce dynamic scene referencing, a novel approach in this field.
## 3 Preliminaries
**Stable Diffusion.** Stable Diffusion employs an autoencoder [22] to preprocess images. An image \(x\) in RGB space is encoded into a latent form by encoder \(\mathcal{E}\) and then decoded back to RGB space by decoder \(\mathcal{D}\). The diffusion process operates with the encoded latent \(z=\mathcal{E}(x)\).
For the diffusion forward process, Gaussian noise is iteratively added to latent \(z_{0}\) over \(T\) iterations [10]:
\[\begin{split} q\left(z_{t}\mid z_{t-1}\right)&= \mathcal{N}\left(z_{t};\sqrt{1-\beta_{t}}z_{t-1},\beta_{t}I\right),\\ t&=1,2,\ldots,T,\end{split} \tag{1}\]
where \(q\left(z_{t}\mid z_{t-1}\right)\) denotes the conditional density function and \(\beta\) is given.
The backward process is accomplished by a well-trained Stable Diffusion model that incrementally recovers the latent variable \(\hat{z}_{0}\) from the noise \(z_{T}\). Typically, the T2I diffusion model leverages a UNet architecture, with text conditions being integrated as supplementary information. The trained diffusion model can also conduct a deterministic forward process, whose result can be restored back to the original \(z_{0}\). This deterministic forward process is referred to as DDIM inversion [11, 12]. We will refer to \(z_{T}\) as the noisy latent code and \(z_{0}\) as the original latent in the subsequent sections. Unless otherwise specified, the frames and videos discussed henceforth refer to those in latent space.
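For reference, a minimal sketch of one deterministic DDIM inversion step is given below, written in the standard DDIM formulation; the noise-prediction model, text embedding, and cumulative-alpha schedule are assumed inputs rather than code from any specific implementation.

```python
import torch

def ddim_inversion_step(z_t, t, t_next, eps_model, text_emb, alpha_bar):
    """One deterministic DDIM inversion step z_t -> z_{t_next} (t_next > t)."""
    a_t, a_next = alpha_bar[t], alpha_bar[t_next]
    eps = eps_model(z_t, t, text_emb)                       # predicted noise
    z0_pred = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean latent
    return a_next.sqrt() * z0_pred + (1 - a_next).sqrt() * eps
```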
**ControlNet.** ControlNet [13] enhances pre-trained large-scale diffusion models by introducing extra input conditions. These inputs are processed by a specially designed conditioning control branch, which originates from a clone of the encoding and middle blocks of the T2I diffusion model and is subsequently trained on task-specific datasets. The output from this control branch is added to the skip connections and the middle block of the T2I model's UNet architecture.
## 4 Methods
ConditionVideo leverages guided annotation, denoted as \(Condition\), and optional reference scenery, denoted as \(Video\), to generate realistic videos. We start by introducing our training-free pipeline in Sec. 4.1, followed by our method for modeling motion in Sec. 4.2. In Sec. 4.3, we present our sparse bi-directional spatial-temporal attention (sBiST-Attn) mechanism. Finally, a detailed explanation of our proposed 3D control branch is provided in Sec. 4.4.
### Training-Free Sampling Pipeline
Fig. 2 depicts our proposed training-free sampling pipeline. Inheriting the autoencoder \(\mathcal{D}(\mathcal{E}(\cdot))\) from the pre-trained image diffusion model (Sec. 3), we conduct video transformation between RGB space and latent space frame by frame. Our ConditionVideo model contains two branches: a UNet branch and a 3D control branch. A text description is fed
Figure 2: **Illustration of our proposed training-free pipeline.** (Left) Our framework consists of a UNet branch and a 3D control branch. The UNet branch receives either the inverted reference video \(z_{T}^{INV}\) or image-level noise \(\epsilon_{b}\) for background generation. The 3D control branch receives an encoded condition for foreground generation. The text description is fed into both branches. (Right) Illustration of our basic spatial-temporal block. We employ our proposed sBiST-Attn module into the basic block between the 3D convolution block and the cross-attention block. The detail of sBiST-Attn module is shown in Fig. 3
into both branches. Depending on the user's preference for customized or random background, the UNet branch accepts either the inverted code \(z_{T}^{INV}\) of the reference background video or the random noise \(\epsilon_{b}\). The condition is fed into the 3D control branch after being added with random noise \(\epsilon_{c}\). We will further describe this disentanglement input mechanism and random noise \(\epsilon_{b}\), \(\epsilon_{c}\) in Sec. 4.2.
Our control branch uses the original weights of ControlNet [11]. As illustrated on the right side of Fig. 2, we modify the basic spatial-temporal blocks of these two branches from the conditional T2I model by transforming 2D convolutions into 3D convolutions with a 1\(\times\)3\(\times\)3 kernel and replacing the self-attention module with our proposed sBiST-Attn module (Sec. 4.3). We keep the other input-output mechanisms the same as before.
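A minimal sketch of the 2D-to-3D convolution inflation is shown below; it follows the common practice of reusing the pre-trained spatial weights with a temporal kernel size of 1 (the helper name is ours, not from the released code).

```python
import torch
import torch.nn as nn

# Inflate a pre-trained 2D convolution into a 3D convolution with a 1x3x3
# kernel: the spatial weights are copied unchanged and the layer applies
# frame-wise over the temporal axis.
def inflate_conv2d_to_3d(conv2d: nn.Conv2d) -> nn.Conv3d:
    kh, kw = conv2d.kernel_size
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(1, kh, kw),
        stride=(1, *conv2d.stride),
        padding=(0, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        conv3d.weight.copy_(conv2d.weight.unsqueeze(2))  # add a temporal dim of 1
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

conv3d = inflate_conv2d_to_3d(nn.Conv2d(320, 320, kernel_size=3, padding=1))
video_latent = torch.randn(1, 320, 24, 64, 64)   # (batch, channels, frames, H, W)
print(conv3d(video_latent).shape)
```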
### Strategy for Motion Representation
**Disentanglement for Latent Motion Representation.** In conventional diffusion models for generation (_e.g._, ControlNet), the noise vector \(\epsilon\) is sampled from an i.i.d. Gaussian distribution \(\epsilon\sim\mathcal{N}(0,I)\) and then shared by both the control branch and the UNet branch. However, if we follow the original mechanism and let the inverted background video's latent code be shared by the two branches, we observe that the background generation results will be blurred (experiments are shown in Appx B). This is because using the same latent to generate both the foreground and the background presumes that the foreground character has a strong relationship with the background. Motivated by this observation, we explicitly disentangle the video motion representation into two components: the motion of the background and the motion of the foreground. The background motion is generated by the UNet branch, whose latent code is represented as background noise \(\epsilon_{b}\sim\mathcal{N}(0,I)\). The foreground motion is represented by the given conditional annotations, while the appearance representation of the foreground is generated from the noise \(\epsilon_{c}\sim\mathcal{N}(0,I)\).
**Strategy for Temporally Consistent Motion Representation.** To attain temporal consistency across consecutively generated frames, we investigate noise patterns that facilitate the creation of cohesive videos. Consistency in foreground generation can be established by ensuring that the control branch produces accurate conditional controls. Consequently, we propose utilizing our control branch input for this purpose: \(C_{cond}=\epsilon_{c}+\mathcal{E}_{c}(Condition)\), where \(\epsilon_{c_{i}}\in\epsilon_{c}\), \(\epsilon_{c_{i}}\sim\mathcal{N}(0,I)\subseteq\mathbb{R}^{H\times W\times C}\), and \(\epsilon_{c_{i}}=\epsilon_{c_{j}},\ \forall i,j=1,...,F\). Here \(H\), \(W\), and \(C\) denote the height, width, and channel of the latent \(z_{t}\), \(F\) represents the total frame number, \(C_{cond}\) denotes the encoded conditional vector that will be fed into the control branch, and \(\mathcal{E}_{c}\) denotes the conditional encoder. Additionally, it is important to observe that \(\epsilon_{c_{i}}\) corresponds to a single frame of noise derived from the video-level noise denoted as \(\epsilon_{c}\). The same relationship applies to \(\epsilon_{b_{i}}\) and \(\epsilon_{b}\) as well.
When generating backgrounds, there are two approaches we could take. The first is to create the background using background noise \(\epsilon_{b}\): \(\epsilon_{b_{i}}\in\epsilon_{b}\), \(\epsilon_{b_{i}}\sim\mathcal{N}(0,I)\subseteq\mathbb{R}^{H\times W\times C}\), and \(\epsilon_{b_{i}}=\epsilon_{b_{j}},\ \forall i,j=1,...,F\). The second approach is to generate the background from an inverted latent code, \(z_{T}^{INV}\), of the reference scenery video. Notably, we observe that the dynamic motion correlation present in the original video is retained when it undergoes DDIM inversion, so we utilize this latent motion correlation to generate background videos. Our ConditionVideo method is more user-friendly and cost-efficient compared to techniques that require motion training.
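The noise construction described above can be sketched as follows (tensor shapes and names are illustrative): a single frame-level noise is sampled once and repeated across frames, separately for the control branch and the UNet branch.

```python
import torch

F, C, H, W = 24, 4, 64, 64                       # frames, latent channels, size

# Shared per-frame noise: every frame of eps_c (and of eps_b) is identical.
eps_c = torch.randn(1, C, H, W).repeat(F, 1, 1, 1)
eps_b = torch.randn(1, C, H, W).repeat(F, 1, 1, 1)

encoded_condition = torch.randn(F, C, H, W)      # stand-in for E_c(Condition)
C_cond = eps_c + encoded_condition               # input to the 3D control branch

# UNet-branch starting latent: random-background noise, or alternatively the
# DDIM-inverted latent z_T^INV of a reference scenery video.
z_T = eps_b
print(C_cond.shape, z_T.shape)
```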
```
Require: Condition, Text, Video (optional); parameter T
Ensure: generated video X̂_0

if Video is not None then
    z_0^Video ← E(Video)                             // encode video
    z_T^INV ← DDIM_Inversion(z_0^Video, T, UNetBranch)
    z_T ← z_T^INV                                    // customized background
else
    z_T ← ε_b                                        // random background
end if
C_cond ← ε_c + E_c(Condition)                        // encode condition
C_text ← E_t(Text)                                   // encode input prompt
for t = T, ..., 1 do
    c_t ← ControlBranch(C_cond, t, C_text)
    ẑ_{t-1} ← DDIM_Backward(z_t, t, C_text, c_t, UNetBranch)
end for
X̂_0 ← D(ẑ_0)
return X̂_0
```
**Algorithm 1** Sampling Algorithm
During the sampling process, in the first step (\(t=T\)), we feed the background latent code \(z_{T}^{INV}\) or \(\epsilon_{b}\) into the UNet branch and the condition \(C_{cond}\) into our 3D control branch. Then, during the subsequent reverse steps \(t=T-1,\ldots,0\), we feed the denoised latent \(z_{t}\) into the UNet branch while still using \(C_{cond}\) as the 3D control branch input. The details of the sampling algorithm are shown in Alg. 1.
### Sparse Bi-directional Spatial-Temporal Attention (sBiST-Attn)
Taking into account both temporal coherence and computational complexity, we propose a sparse bi-directional spatial-temporal attention (sBiST-Attn) mechanism, as depicted in Fig. 3. For video latent \(z_{t}^{i},\ i=1,...,F\), the attention matrix is computed between frame \(z_{t}^{i}\) and its bi-directional frames, sampled with a gap of 3. This interval was chosen after weighing frame consistency and computational cost (see Appx C.1). For each \(z_{t}^{i}\) in \(z_{t}\), we derive the query feature from its frame \(z_{t}^{i}\). The key and value features are derived from the bi-directional frames \(z_{t}^{3j+1},\ j=0,...,\lfloor(F-1)/3\rfloor\). Mathematically, our sBiST-Attn can be expressed as:
\[\begin{cases}\text{Attention}(Q,K,V)=\text{Softmax}\left(\frac{QK^{T}}{\sqrt{d }}\right)\cdot V\\ Q=W^{Q}z_{t}^{i},K=W^{K}z_{t}^{[3j+1]},V=W^{V}z_{t}^{[3j+1]},\\ j=0,1,\ldots,\lfloor(F-1)/3\rfloor\end{cases} \tag{2}\]
where \([\cdot]\) denotes the concatenation operation, and \(W^{Q},W^{K},W^{V}\) are the weighted matrices that are identical to those used in the self-attention layers of the image generation model.
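A simplified sketch of Eq. (2) is given below; the projection matrices are stand-ins for the image model's self-attention weights, and batching, multi-head splitting, and the output projection are omitted.

```python
import torch

# Queries come from the current frame, keys/values from the bi-directional key
# frames sampled with a gap of 3 (frames 0, 3, 6, ... in 0-based indexing).
def sbist_attention(z, W_q, W_k, W_v):
    # z: (F, N, d) latent tokens for F frames, N spatial tokens, dim d
    F_frames, N, d = z.shape
    key_frames = z[torch.arange(0, F_frames, 3)]         # sparse key frames
    kv = key_frames.reshape(-1, d)                       # concatenate key frames
    K, V = kv @ W_k, kv @ W_v
    out = []
    for i in range(F_frames):
        Q = z[i] @ W_q                                   # (N, d)
        attn = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)
        out.append(attn @ V)
    return torch.stack(out)                              # (F, N, d)

d = 64
z = torch.randn(24, 16, d)
out = sbist_attention(z, torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
print(out.shape)
```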
### 3D Control Branch
Frame-wise conditional guidance is generally effective, but there may be instances when the network doesn't correctly interpret the guide, resulting in an inconsistent conditional output. Given the continuous nature of condition movements, ConditionVideo proposes enhancing conditional alignment by referencing neighboring frames. If a frame isn't properly aligned due to weak control, other correctly aligned frames can provide more substantial conditional alignment information. In light of this, we design our control branch to operate temporally, where we choose to replace the self-attention module with the sBiST-Attn module and inflate the 2D convolutions to 3D. The replacement attention module considers both previous and subsequent frames, thereby bolstering control effectiveness.
## 5 Experiments
### Implementation Details
We implement our model based on the pre-trained weights of ControlNet [13] and Stable Diffusion [12] 1.5. We generate 24 frames with a resolution of 512 x 512 pixels for each video. During inference, we use the same sampling setting as Tune-A-Video [23].
### Main results
In Fig. 1, we display the success of our training-free video generation technique. The generated results from ConditionVideo, depicted in Fig. 1 (a), imitate moving scenery videos and show realistic waves as well as generate the correct character movement based on posture. Notably, the style of the backgrounds is distinct from the original guiding videos, while the motion of the backgrounds remains constant. Furthermore, our model can generate consistent backgrounds when sampling \(\epsilon_{b}\) from Gaussian noise based on conditional information, as shown in Fig.1 (b),(c),(d). These videos showcase high temporal consistency and rich graphical content.
### Comparison
**Compared Methods.** We compare our method with Tune-A-Video [23], ControlNet [13], and Text2Video-Zero [1]. For Tune-A-Video, we first fine-tune the model on the video from which the condition was extracted, and then sample from the corresponding noise latent code of the condition video.
**Qualitative Comparison.** Our visual comparison conditioned on pose, canny, and depth information is presented in Figs. 4, 5, and 6. Tune-A-Video struggles to align well with the given condition and text description. ControlNet demonstrates improvement in condition-alignment accuracy but suffers from a lack of temporal consistency. Although Text2Video-Zero produces videos of high quality, there are still some minor imperfections, which we indicate with red circles in the figures. Our model surpasses all others, showcasing outstanding condition-alignment quality and frame consistency.
Figure 4: **Qualitative comparison conditioned on pose.**_"The Cowboy, on a rugged mountain range, Western painting style"_. Our results outperform in both temporal consistency and pose accuracy, while others have difficulty in maintaining either one or both of these qualities.
Figure 3: **Illustration of ConditionVideo's sparse bi-directional attention.** The purple blocks signify the frames we've selected for concatenation, from which key and value are computed. The pink block represents the current frame from which we'll calculate the query. The blue blocks correspond to the other frames within the video sequence. Latent features of frame \(z_{t}^{i}\) and bi-directional frames \(z_{t}^{3j+1},\ j=0,...,\lfloor(F-1)/3\rfloor\) are projected to query \(Q\), key \(K\), and value \(V\). Then the attention-weighted sum is computed based on key, query, and value. Notice that the parameters are the same as the ones in the self-attention module of the pre-trained image model.
**Quantitative Comparison.** We evaluate all the methods using three metrics: _frame consistency_ [14, 15, 16], _clip score_ [17, 18], and _pose accuracy_ [16]. As other conditions are hard to evaluate, we use pose accuracy for conditional consistency only. The results on different conditions are shown in Tabs. 1 and 2. We achieve the highest frame consistency and clip score under all conditions, indicating that our method exhibits the best text alignment. We also achieve the best pose-video alignment among the compared techniques when conditioning on pose.
The conditions are randomly generated from a group of 120 different videos. For more information please see Appx D.2.
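The two CLIP-based metrics are commonly computed as sketched below: frame consistency averages the cosine similarity of CLIP image embeddings of consecutive frames, and clip score averages the text-frame similarity (often reported multiplied by 100). The exact protocol of the cited works may differ; the embedding tensors stand in for a CLIP image/text encoder's outputs.

```python
import torch
import torch.nn.functional as nnf

def frame_consistency(frame_embs):               # (F, d) CLIP image embeddings
    e = nnf.normalize(frame_embs, dim=-1)
    return (e[:-1] * e[1:]).sum(-1).mean().item()

def clip_score(frame_embs, text_emb):            # (F, d) image embs, (d,) text emb
    e = nnf.normalize(frame_embs, dim=-1)
    t = nnf.normalize(text_emb, dim=-1)
    return (e @ t).mean().item()                 # often reported x100

embs, text = torch.randn(24, 512), torch.randn(512)
print(frame_consistency(embs), clip_score(embs, text))
```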
### Ablation Study
We conduct an ablation study on the pose condition, temporal module, and 3D control branch. Our qualitative results are visualized in Fig. 7. In this study, we alter each component for comparison while keeping all other settings the same.
**Ablation on Pose Condition.** We evaluate performance with and without using pose, as shown in Fig. 7. Without pose conditioning, the video is fixed as an image, while the use of pose control allows for the generation of videos with certain temporal semantic information.
**Ablation on Temporal Module.** Training-free video generation heavily relies on effective spatial-temporal modeling. In addition to comparing with a self-attention mechanism without temporal modeling, we conduct an ablation study on three different spatial-temporal mechanisms. First, we remove our sBiST-attention mechanism and replace it with Sparse-Causal attention [14]. Then, we compare our bi-directional attention mechanism with a dense attention mechanism [15], which attends to all frames for key and value.
The results are presented in Tab. 3. A comparison of temporal and non-temporal attention underlines the importance of temporal modeling for generating time-consistent videos. By comparing our method with Sparse Causal attention, we demonstrate the effectiveness of ConditionVideo's sBiST
\begin{table}
\begin{tabular}{l l l l} \hline \hline Method & FC(\%) & CS & PA (\%) \\ \hline Tune-A-Video & 95.84 & 30.74 & 26.13 \\ ControlNet & 94.22 & 32.97 & 79.51 \\ Text2Video-Zero & 98.82 & 32.84 & 78.50 \\ Ours & **99.02** & **33.03** & **83.12** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparisons conditioned on pose. FC, CS, and PA represent _frame consistency_, _clip score_, and _pose accuracy_, respectively.
Figure 5: **Qualitative comparison conditioned on canny.**_"A man is running"_. Tune-A-Video fails in canny alignment. ControlNet generates frames with low temporal consistency. Although Text2Video-Zero outperforms the first two methods, it generates leg parts that do not correspond to the real structure of the human body, and the shoes are not the same color.
Figure 6: **Qualitative comparison conditioned on depth.**_"ice coffee"_. All three compared methods change the appearance of the object when the viewpoint is switched, and only our method keeps the appearance consistent throughout.
attention module, proving that incorporating information from bi-directional frames improves performance compared to using only previous frames. Furthermore, we observe almost no difference in frame consistency between our method and dense attention, despite the latter requiring more than double our generation duration.
**Ablation on 3D Control Branch.** We compare our 3D control branch with a 2D version that processes conditions frame by frame. For the 2D branch, we utilize the original ControlNet conditional branch. Both control branches are evaluated in terms of frame consistency, clip score, and pose accuracy. Results in Tab. 4 show that our 3D control branch outperforms the 2D control branch in pose accuracy while maintaining similar frame consistency and clip scores. This proves that additional consideration of bi-directional frames enhances pose control.
## 6 Discussion and Conclusion
In this paper, we propose ConditionVideo, a training-free method for generating videos with reasonable motion. We introduce a method that generates motion representation conditioned on background video and conditional information. Our method additionally strengthens frame consistency and condition alignment through our sBiST-Attn mechanism and 3D control branch. Experimental results demonstrate that our method can generate high-quality videos, opening new avenues for research in video generation and AI-based content creation.
While the condition-based and enhanced temporal attention blocks contribute to enhancing the temporal coherence of the video, we have observed that using sparse conditions, such as pose information, can still lead to videos with noticeable flickering. To address this issue, a potential solution would involve incorporating more densely sampled control inputs and additional temporal-related structures.
|
2305.15025 | Dior-CVAE: Pre-trained Language Models and Diffusion Priors for
Variational Dialog Generation | Current variational dialog models have employed pre-trained language models
(PLMs) to parameterize the likelihood and posterior distributions. However, the
Gaussian assumption made on the prior distribution is incompatible with these
distributions, thus restricting the diversity of generated responses. These
models also suffer from posterior collapse, i.e., the decoder tends to ignore
latent variables and directly access information captured in the encoder
through the cross-attention mechanism. In this work, we propose Dior-CVAE, a
hierarchical conditional variational autoencoder (CVAE) with diffusion priors
to address these challenges. We employ a diffusion model to increase the
complexity of the prior distribution and its compatibility with the
distributions produced by a PLM. Also, we propose memory dropout to the
cross-attention mechanism, which actively encourages the use of latent
variables for response generation. Overall, experiments across two commonly
used open-domain dialog datasets show that our method can generate more diverse
responses without large-scale dialog pre-training. Code is available at
https://github.com/UKPLab/dior-cvae. | Tianyu Yang, Thy Thy Tran, Iryna Gurevych | 2023-05-24T11:06:52Z | http://arxiv.org/abs/2305.15025v2 | # Dior-CVAE: Diffusion Priors in Variational Dialog Generation
###### Abstract
Conditional variational autoencoders (CVAEs) have recently been used for diverse response generation by introducing latent variables to represent the relationship between a dialog context and its potential responses. However, the diversity of the responses generated by a CVAE model is limited due to the oversimplified assumption of an isotropic Gaussian prior. We propose Dior-CVAE, a hierarchical CVAE model with an informative prior produced by a diffusion model. Dior-CVAE derives a series of layer-wise latent variables using an attention mechanism and infuses them into the corresponding decoder layers. We propose memory dropout in the latent infusion to alleviate posterior collapse. The prior distribution of the latent variables is parameterized by a diffusion model to introduce a multimodal distribution. Overall, experiments on two popular open-domain dialog datasets indicate the advantages of our approach over previous Transformer-based variational dialog models in dialog response generation. We publicly release the code for reproducing Dior-CVAE and all baselines at [https://github.com/SkyFishMoon/Latent-Diffusion-Response-Generation](https://github.com/SkyFishMoon/Latent-Diffusion-Response-Generation).
## 1 Introduction
Dialog response generation in open domain typically refers to the task of generating _relevant and informative_ responses. Due to the open nature of these dialogs, i.e., their diverse topics and the lack of specific goals, a dialog context can be followed by multiple responses, presenting a _one-to-many_ relationship (Csaky et al., 2019). This relationship usually poses a major challenge to sequence-to-sequence dialog generation models, which are inherently deterministic, i.e., cannot produce different responses given the same dialog. Although different decoding strategies such as nucleus sampling (Holtzman et al., 2020) have been introduced to bring stochasticity, these strategies mostly operate at the token level and thus might harm the fluency of the generated responses.
To generate multiple responses given a dialog context, many existing studies try to introduce latent variables into the response generation model (Zhao et al., 2017; Shen et al., 2017; Serban et al., 2017; Chen et al., 2018). A typical take is the use of conditional variational autoencoders (CVAEs) (Sohn et al., 2015), a variation of VAEs. CVAEs draw a number of latent variables from an assumed prior distribution conditioned on the dialog context and use such variables to further guide the generative process. These latent variables often capture either potential topics, implicit intents or different styles of responses (Zhao et al., 2017). The prior and posterior distributions over these latent variables are usually assumed to be Gaussian, mainly constrained by a Kullback-Leibler (KL) divergence term that explicitly encourages the variational posterior to match the prior. However, these Gaussian assumptions greatly limit the flexibility and representation capability of the CVAE models.
With the advent of Transformer-based models, several attempts have been made to introduce Transformers to VAEs, either training Transformer-based VAEs from scratch (Li et al., 2020; Chen et al., 2022) or incorporating pre-trained Transformer language models (PLMs) with different latent infusion
Figure 1: Limitation of the isotropic Gaussian prior distribution in the multiple-response case vs. the multi-modal distribution offered by a diffusion model.
methods Park and Lee (2021); Hu et al. (2022); Tu et al. (2022). The former requires excessive training to create Transformer VAEs that work comparably well to large-scale PLMs. The latter allows us to use existing PLMs to derive latent representations and infuse them into the generative process. Latent variables have been incorporated into the decoder in either the embedding layer, the last decoder layer, or every self-attention layer Hu et al. (2022).
One major challenge of CVAEs is the well-known posterior collapse Bowman et al. (2016), especially when incorporating the recent pre-trained models based on a Transformer encoder and/or decoder Vaswani et al. (2017). The KL-divergence objective for latent variables can be easily achieved by the expressive pre-trained encoder, while the decoder can easily neglect the sampled latent variables. Many previous studies mitigate this problem by weakening the decoder Bowman et al. (2016); Semeniuta et al. (2017); Zhao et al. (2017) or controlling the weight of the KL-divergence term Fu et al. (2019); Li et al. (2019).
Another challenge occurs due to the simple prior distribution, which is assumed to be the uninformative Gaussian distribution and thus incompatible with the complexity of open-domain dialogs. This assumption restricts the generated responses to a relatively small region of the latent space Chen et al. (2019); Gu et al. (2019), i.e., the responses may be slightly different in textual forms but not in topics or intents (Figure 1). Several works introduce more complex distributions such as using a neural network (NN) to sample implicit latent representations Fang et al. (2019) or using normalizing flows Luo and Chien (2021). While diffusion models have been known for their strong sampling diversity and faithful mode coverage of the underlying data distribution Ho et al. (2020); Nichol and Dhariwal (2021); Dhariwal and Nichol (2021), surprisingly, they have not been used to parameterize the priors.
In this work, we propose **Dior-CVAE** which uses a **d**iffusion model to parameterize the **prior** distribution of a hierarchical **CVAE** model. In particular, we incorporate BART Lewis et al. (2020) into a hierarchical CVAE Hu et al. (2022) and improve the latent variable computation using attention mechanism. Adapted from previous work, we integrate the hierarchical latent variables into the decoder accordingly. We propose a memory dropout method to avoid the sampled latent variables being ignored by the expressive BART decoder. We propose to parameterize the prior using a diffusion model Ho et al. (2020) for a more expressive distribution than the typical isotropic Gaussian. We evaluate DiorcVAE on two commonly-used datasets. Overall, experiments demonstrate that our proposed model can generate better quality responses w.r.t. diversity and coherent metrics compared to current methods.
## 2 Problem Statement and Background
### Dialog Response Generation
Dialog response generation (DRG) aims at generating a response utterance \(r\coloneqq u_{T+1}\) as a continuation of a dialog context \(c\coloneqq u_{1}^{T}=[u_{1},u_{2},\cdots,u_{T}]\) consisting of \(T\) utterances that are usually made by one of the speakers. Each utterance \(u_{i}=[u_{i}]_{1}^{K}\) is a sequence of \(K\) tokens.
### Conditional Variational Autoencoders
Variational Autoencoders (VAEs) offer unsupervised sampling of data from a prior distribution that is assumed to be similar to the real distribution of the data. Conditional Variational Autoencoders (CVAEs) are an extension of VAEs, a class of generative models that has shown excellent results in several machine learning applications. CVAEs enhance the traditional VAE by conditioning on additional information, thus allowing us to guide the data generation towards specific attributes or features.
A CVAE is a conditional latent variable encoder-decoder model that parameterizes the prior and posterior distributions using neural networks (NNs). Given a dialog context \(c\) as condition and latent variables \(\mathbf{z}\), CVAE uses NNs (parameterized by \(\psi\)) to approximate the prior distribution of latent variables given the condition, denoted as \(p_{\mathbf{\psi}}(\mathbf{z}|c)\). The posterior distribution \(q_{\mathbf{\phi}}(\mathbf{z}|r,c)\), conditioned on both the response \(r\) and the condition \(c\), is responsible for approximating the true posterior distribution over these latent variables (also parameterized by NNs). By doing this, the posterior network can provide robust and accurate representations of complex and high-dimensional data in a lower-dimensional latent space. CVAE then generates new valid responses \(r\) using the latent variables and specific conditions based on the generative distribution \(p_{\mathbf{\theta}}(r|c,\mathbf{z})\), parameterized by \(\theta\).
During the training process, the CVAE is optimized to maximize the Evidence Lower Bound
(ELBO), which can be defined as:
\[\log p(r|c)\geq\mathbb{E}_{\mathbf{z}\sim q_{\mathbf{\phi}}(\mathbf{z}|r,c)}[\log p_{\mathbf{\theta}}(r|\mathbf{z},c)]-\text{KL}(q_{\mathbf{\phi}}(\mathbf{z}|r,c)\,||\,p_{\mathbf{\psi}}(\mathbf{z}|c)) \tag{1}\]
where the first term is a reconstruction term that ensures the generated data is as close as possible to the original input, and a KL divergence term aligns the learned posterior distribution with the prior.
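For the Gaussian case, the negative ELBO translates into the following sketch; the reduction and weighting choices here are illustrative and not the exact training code of any specific model.

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

def neg_elbo(token_logprobs, mu_q, logvar_q, mu_p, logvar_p):
    reconstruction = -token_logprobs.sum(dim=-1)        # -log p_theta(r | z, c)
    kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)    # KL(q(z|r,c) || p(z|c))
    return (reconstruction + kl).mean()

loss = neg_elbo(
    token_logprobs=-torch.rand(8, 20),                  # stand-in decoder log-probs
    mu_q=torch.randn(8, 32), logvar_q=torch.zeros(8, 32),
    mu_p=torch.zeros(8, 32), logvar_p=torch.zeros(8, 32),
)
print(loss.item())
```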
CVAEs have shown great potential to ensure the diversity and informativeness of the generated responses in DRG due to the introduction of the latent variables \(\mathbf{z}\) that can be used to represent underlying, hidden factors assumed to influence the observable data, e.g., topics, intents and styles corresponding to different responses (Zhao et al., 2017). However, they align the true posterior distribution over the latent variables to a simple prior distribution such as an isotropic Gaussian distribution, which is incompatible to the multi-modal1 nature of diverse responses (Gu et al., 2019; Chen et al., 2019).
Footnote 1: A multimodal distribution is a continuous probability distribution with two or more modes.
A promising solution to the problem is to parameterize these distributions with more expressive distributions using advanced generative models like normalizing flows (Luo and Chien, 2021) and generative adversarial networks (Khan et al., 2020). In this work, we propose to use expressive PLMs to estimate the posterior distribution (SS2.4), while parameterizing the prior distribution with a diffusion model (SS2.5).
### Pre-trained Transformer LMs
Previous work has utilized PLMs for dialog response generation (Park and Lee, 2021; Hu et al., 2022; Tu et al., 2022), as they have been shown to be capable of generating high-quality text (Lewis et al., 2020; Raffel et al., 2020). In this work, we also use an encoder-decoder PLM with \(L\) encoder (Enc) and \(L\) decoder (Dec) layers.
We form the input string by simply concatenating the context utterances in dialog order with a separator token, for example, "\(u_{1}\) </s> \(u_{2}\) </s> \(u_{3}\) </s> \(u_{4}\)". The input is then converted into a sequence of \(N\) tokens \(\mathbf{c}=y_{1}^{N}\). The encoder takes the input tokens and returns contextualized embeddings. \(\mathbf{H}_{c}^{\text{Enc}_{l}}\in\mathbb{R}^{N\times d}\) denotes the hidden states output by the \(l\)-th encoder layer \(\texttt{Enc}_{l}(\cdot)\).
Similar to the dialog context, the response will be converted into a sequence of tokens \(\mathbf{r}=x_{1}^{K}\) and will also be embedded into vectors \(\mathbf{H}^{\text{Dec}_{l}}\). Different from the encoder, each decoder layer \(\texttt{Dec}_{l}(\cdot)\) will additionally take the output of the final encoder layer \(\mathbf{H}_{c}^{\text{Enc}_{L}}\) as input to align the source and target hidden space:
\[\mathbf{H}^{\text{Dec}_{l}}=\texttt{Dec}_{l}(\mathbf{H}^{\text{Dec}_{l-1}}, \mathbf{H}_{c}^{\text{Enc}_{L}}). \tag{2}\]
Since the decoder of a PLM is usually very powerful, the latent variables can easily be ignored. Thus several methods have been proposed to incorporate these PLMs into VAEs (Park and Lee, 2021; Hu et al., 2022; Tu et al., 2022).
### PLMs to VAEs
The use of PLMs in VAEs or CVAEs mainly differs in (1) the derivation of latent variables from the encoder and (2) the latent infusion in the decoder. Inspired by DELLA (Hu et al., 2022), instead of getting only one latent variable \(\mathbf{z}\), we obtain a series of hierarchical latent variables \(\mathbf{z}=\{\mathbf{z}^{1},\cdots,\mathbf{z}^{L}\}\) to improve the flexibility of the aggregated posterior distribution. The latent variable \(\mathbf{z}^{l}\in\mathbb{R}^{d}\) is first computed from the representations of the corresponding encoder layer and the latent variables in lower layers. It is then infused into the corresponding decoder layer. Specifically, the posterior distribution of the latent variable \(\mathbf{z}\) can be factorized as:
\[q_{\mathbf{\phi}}(\mathbf{z}|r,c)=\prod_{l=1}^{L}q_{\mathbf{\phi}}(\mathbf{z}^{l}|\mathbf{z}^{<l},r,c). \tag{3}\]
### Diffusion Models
Diffusion models are another type of generative models based on latent variables (Ho et al., 2020). The data generation process relies on a Markov Chain \(\mathbf{z}_{M},\cdots,\mathbf{z}_{0}\) of diffusion steps by gradually adding random noise to data with a fixed procedure (forward),
\[\mathbf{z}_{0}\rightarrow\cdots\rightarrow\mathbf{z}_{t-1}\rightarrow\mathbf{z}_{t} \rightarrow\cdots\rightarrow\mathbf{z}_{M}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\]
and then learning the reverse process by reconstructing the data (reverse).
\[\mathbf{z}_{M}\rightarrow\cdots\rightarrow\mathbf{z}_{t}\rightarrow\mathbf{z}_{t-1} \rightarrow\cdots\rightarrow\mathbf{z}_{0}\]
Given a data point \(\mathbf{z}_{0}\) sampled from a target distribution, the forward process gradually corrupts the data with Gaussian noise until getting a pure Gaussian noise variable \(\mathbf{z}_{M}\sim\mathcal{N}(0,\mathbf{I})\). Each step \(t\in[1,\cdots,M]\) of the corruption is controlled by
\[q(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\mathcal{N}(\sqrt{1-\beta_{t}}\mathbf{z}_{t-1},\beta_{t }\mathbf{I}) \tag{4}\]
with a predefined \(\beta_{t}\in(0,1)\) indicating different levels of the introduced noise.
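The corruption in Eq. (4) can be sketched in a few lines of NumPy; the linear \(\beta_{t}\) schedule below is an illustrative assumption, not the schedule used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 50, 16
betas = np.linspace(1e-4, 0.2, M)   # assumed noise schedule beta_1..beta_M
z = rng.normal(size=d)              # z_0: a latent drawn from the target distribution
for beta_t in betas:                # z_t ~ N(sqrt(1 - beta_t) z_{t-1}, beta_t I)
    z = np.sqrt(1.0 - beta_t) * z + np.sqrt(beta_t) * rng.normal(size=d)
# after M steps z is close to a standard Gaussian sample
```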
In the reverse process, the diffusion model is trained to gradually denoise samples from the standard Gaussian distribution until it fully reconstructs data samples from the target distribution. Specifically, the reverse process is defined by:
\[p_{\mathbf{\varphi}}(\mathbf{z}_{0:M})=p(\mathbf{z}_{M})\prod_{t=1}^{M}p_{\mathbf{ \varphi}}(\mathbf{z}_{t-1}|\mathbf{z}_{t}) \tag{5}\]
where \(\mathbf{\varphi}\) denotes the learnable parameters and:
\[\begin{split}& p(\mathbf{z}_{M})=\mathcal{N}(\mathbf{0},\mathbf{ I})\\ & p_{\mathbf{\varphi}}(\mathbf{z}_{t-1}|\mathbf{z}_{t})=\mathcal{N}( \mathbf{\mu}_{\mathbf{\varphi}}(\mathbf{z}_{t},t),\sigma_{t}^{2}\mathbf{I})\end{split} \tag{6}\]
where \(\mathbf{\mu}_{\mathbf{\varphi}}(\mathbf{z}_{t},t)\) is parameterized by an NN with \(\sigma_{t}\) predefined. Since the NN here is used to recover the less noisy data \(\mathbf{z}_{t-1}\) given a noisy datapoint \(\mathbf{z}_{t}\) at timestep \(t\), it is referred to as a _denoising network_.
According to the forward process, given access to the original data \(\mathbf{z}_{0}\), the reverse transition distribution can be derived analytically:
\[q(\mathbf{z}_{t-1}|\mathbf{z}_{t},\mathbf{z}_{0})=\mathcal{N}(\mathbf{\mu}_{t}( \mathbf{z}_{t},\mathbf{z}_{0}),\sigma_{t}^{2}\mathbf{I}) \tag{7}\]
Note that the original data \(\mathbf{z}_{0}\) is not available in the actual generation process, i.e., the response is not given for computing \(\mathbf{z}_{0}\). The training objective of the diffusion model is to approximate \(\mathbf{\mu}_{t}(\mathbf{z}_{t},\mathbf{z}_{0})\) using the denoising network \(\mathbf{\mu}_{\mathbf{\varphi}}(\mathbf{z}_{t},t)\):
\[\mathbb{E}_{t,\mathbf{z}_{0},\mathbf{z}_{t}}\left[\frac{1}{2\sigma_{t}^{2}}|| \mathbf{\mu}_{t}(\mathbf{z}_{t},\mathbf{z}_{0})-\mathbf{\mu}_{\mathbf{\varphi}}(\mathbf{ z}_{t},t)||\right] \tag{8}\]
After the training is done, we can utilize the trained denoising network \(\mathbf{\mu}_{\mathbf{\varphi}}(\mathbf{z}_{t},t)\) to build the reverse Markov Chain. Through sampling from the reverse chain, we can get new high quality data samples.
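A minimal sketch (ours) of ancestral sampling along the reverse chain in Eqs. (5)-(6), assuming a trained denoising network `mu_phi(z_t, t)` and predefined standard deviations `sigmas`:

```python
import numpy as np

def sample_reverse(mu_phi, sigmas, d, M, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=d)                            # z_M ~ N(0, I)
    for t in range(M, 0, -1):
        mean = mu_phi(z, t)                           # mean of p(z_{t-1} | z_t)
        noise = rng.normal(size=d) if t > 1 else 0.0  # no noise on the final step
        z = mean + sigmas[t - 1] * noise
    return z                                          # approximate sample of z_0
```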
In the case of conditional generation, the target distribution becomes a conditional distribution \(p(\mathbf{z}|\mathbf{a})\). The condition information can be easily introduced as an additional input for the denoising network, as in \(\mathbf{\mu}_{\mathbf{\varphi}}(\mathbf{z}_{t},t,\mathbf{a})\). In this paper, we follow the classifier-free guidance (Ho and Salimans, 2022) to train a conditional and an unconditional diffusion model simultaneously and use the interpolation of the outputs of these two models as the final prediction during sampling.
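At sampling time, the conditional and unconditional predictions can be combined as sketched below (ours; the guidance weight `w` and this particular linear combination are assumptions, since the exact weighting is not specified here):

```python
def guided_mean(mu_cond, mu_uncond, w=1.0):
    """Classifier-free guidance: extrapolate towards the conditional prediction."""
    return (1.0 + w) * mu_cond - w * mu_uncond
```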
## 3 Dior-CVAE
We present Dior-CVAE, a hierarchical CVAE model based on an encoder-decoder Transformer, with several proposed improvements (Figure 2). First, we enhance the layer-wise latent variable computation with an attention mechanism (§3.1) and infuse the latent variables into the decoder accordingly. Second, we introduce memory dropout to alleviate posterior collapse, a well-known problem in CVAEs. Last, we parameterize the prior distribution of the latent variables using a diffusion model for a more flexible representation compared to an isotropic Gaussian distribution.
### Hierarchical Latent Variables
Different from Hu et al. (2022), we construct the sequence representation \(\mathbf{e}_{c}^{\mathsf{Enc}_{i}}\) of each encoder layer by attending over all hidden states of that layer (Yang et al., 2016):
\[\mathbf{e}_{c}^{\mathsf{Enc}_{l}}=\sum_{i=1}^{N}\alpha_{i}\mathbf{h}_{c_{i}}^ {\mathsf{Enc}_{l}} \tag{9}\]
where \(\alpha_{i}\) is the attention weights calculated by:
\[\alpha_{i}=\frac{\exp(\text{tanh}(\mathbf{v}^{T}\mathbf{h}_{c_{i}}^{\mathsf{ Enc}_{l}}))}{\sum_{i}\exp(\text{tanh}(\mathbf{v}^{T}\mathbf{h}_{c_{i}}^{\mathsf{ Enc}_{l}}))} \tag{10}\]
\(\mathbf{v}\in\mathbb{R}^{d}\) is a trainable parameter vector.
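A minimal NumPy sketch (ours) of the attention pooling in Eqs. (9)-(10), mapping the \(N\times d\) hidden states of one encoder layer to a single \(d\)-dimensional representation:

```python
import numpy as np

def attention_pool(H, v):
    scores = np.tanh(H @ v)                    # tanh(v^T h_i), shape (N,)
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ H                           # weighted sum of hidden states, shape (d,)

rng = np.random.default_rng(0)
H = rng.normal(size=(12, 8))                   # hidden states of the l-th encoder layer
v = rng.normal(size=8)                         # trainable vector (random here for illustration)
e_c = attention_pool(H, v)
```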
In addition to the dialog context, we also need the information from the reference response \(\mathbf{r}\) during training to infer parameters of the posterior distribution in Eq. (3). Subsequently, we utilize the same Transformer encoder to encode the response simultaneously and get the representation \(\mathbf{e}_{r}^{\mathsf{Enc}_{l}}\) from hidden states output by \(l\)-th encoder layer following the above attention method.
To summarize the information from latent variables of the lower layers, we calculate the representation of these latent variables through a _summary network_ defined as:
\[\mathbf{z}^{<l}=\texttt{MLP}(\mathbf{W}_{\text{sum}_{1}}^{l}\mathbf{z}^{<l-1} +\mathbf{W}_{\text{sum}_{2}}^{l}\mathbf{z}^{l-1}) \tag{11}\]
where \(\mathbf{W}_{\text{sum}_{1}}^{l},\mathbf{W}_{\text{sum}_{2}}^{l}\in\mathbb{R}^ {d\times d}\) are the matrices of trainable parameters specific to each encoder layer, \(\texttt{MLP}\) denotes the fully-connected neural network with one hidden layer.
Based on the information from dialog context, corresponding response and latent variables of the lower layer, we can then calculate the parameters of
the latent variable \(\mathbf{z}^{l}\) through a _recognition network_, which can be described as follows:
\[\begin{bmatrix}\mathbf{\mu}_{l}\\ \log(\mathbf{\sigma}_{l})\end{bmatrix}=\texttt{MLP}(\mathbf{W}^{l}_{\text{rec}} \begin{bmatrix}\mathbf{z}^{<l}\\ \mathbf{e}^{enc_{l}}_{\mathbf{c}}\\ \mathbf{e}^{enc_{l}}_{\mathbf{r}}\end{bmatrix}) \tag{12}\]
where \(\mathbf{W}^{l}_{\text{rec}}\in\mathbb{R}^{2d\times 3d}\) and \([\cdot]\) denotes the concatenation of the representations. Then we utilize the re-parameterization trick Kingma et al. (2021) to draw samples from the inferred posterior distribution of \(\mathbf{z}^{l}\).
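A minimal sketch (ours) of the recognition step in Eq. (12) together with the re-parameterization trick; the tanh activation and random weights stand in for the trained one-hidden-layer MLP:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
z_lower = rng.normal(size=d)                       # summary of latents from lower layers
e_c, e_r = rng.normal(size=d), rng.normal(size=d)  # context / response representations

W_rec = 0.1 * rng.normal(size=(2 * d, 3 * d))      # trainable in practice
W_hid = 0.1 * rng.normal(size=(2 * d, 2 * d))      # hidden layer of the MLP
out = W_hid @ np.tanh(W_rec @ np.concatenate([z_lower, e_c, e_r]))
mu, log_sigma = out[:d], out[d:]

z_l = mu + np.exp(log_sigma) * rng.normal(size=d)  # re-parameterized sample of z^l
```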
### Latent Infusion in Decoder
Previous work has shown that hierarchical latent memory infuses knowledge into PLMs better than other methods such as using the latent variables as input token embeddings or infusing them into the self-attention keys and values (Hu et al., 2022). We elaborate on the use of _hierarchical latent memory_ in the remainder of this section. For dialog response generation, there is an additional attention mechanism that communicates between the generated response and the dialog context, which allows the decoder to easily bypass and eventually ignore the latent variables, causing posterior collapse (Bahuleyan et al., 2018). To mitigate this, different dropouts on the decoder input have been proposed and adapted to text VAEs, such as standard uniform dropout (Srivastava et al., 2014) or word dropout (Iyyer et al., 2015; Miladinovic et al., 2022). For CVAEs, the conditional attribute \(\mathbf{a}\) may in theory make the latent variables redundant, since the attribute information should be partially captured by the latent. We thus propose memory dropout to address this issue, described below.
Hierarchical Latent Memory.As the name implies, we infuse the layer-wise latent variables into the corresponding decoder layer through adding them to the memory bank of the attention mechanism inside the decoder layer. Effects of the latent variable can then be propagated to the following generated text through the self-attention mechanism and cross-attention mechanism of every decoder layer. To be more specific, for the \(l\)-th decoder layer, we use the concatenation of the latent variable \(\mathbf{z}^{l}\) and the hidden states of the \(l-1\)-th layers as input to the \(l\)-th layer. We also concatenate the latent variables to the final output of the encoder layer. The latent variables can serve as an additional memory vector for the self-attention and cross-attention module of every decoder layer:
\[\mathbf{H}^{\texttt{Dec}_{l}}=\texttt{Dec}_{l}(\mathbf{W}^{l}_{p}\mathbf{z} ^{l}\oplus\mathbf{H}^{\texttt{Dec}_{l-1}},\mathbf{W}^{l}_{d}\mathbf{z}^{l} \oplus\mathbf{H}^{\texttt{Enc}_{L}}) \tag{13}\]
where \(\mathbf{W}^{l}_{p}\in\mathbb{R}^{d\times d},\mathbf{W}^{l}_{d}\in\mathbb{R}^{d \times d}\) are the trainable parameters. Based on the hierarchical latent memory method, information in the latent variables can be dispersed to the text generated in the following steps.
Memory dropout. To encourage the decoder to utilize the latent variable, we apply random dropout to the hidden state \(\mathbf{h}^{\texttt{Enc}_{L}}_{c_{i}}\) where \(i\in[1,N]\). The decoder-layer computation then becomes:
\[\mathbf{H}^{\texttt{Dec}_{l}} =\texttt{Dec}_{l}(\mathbf{H}^{\texttt{Dec}_{l-1}},m_{\beta}( \mathbf{H}^{\texttt{Enc}_{L}}_{c})) \tag{14}\] \[=\texttt{Dec}_{l}(\mathbf{H}^{\texttt{Dec}_{l-1}},m_{\beta}( \mathbf{h}^{\texttt{Enc}_{L}}_{c_{1}},\cdots,\mathbf{h}^{\texttt{Enc}_{L}}_{c_ {N}}))\]
where \(m_{\beta}(\cdot)\) denotes the dropout operation with a certain probability \(\beta\). In comparison with previous methods (Miladinovic et al., 2022), our proposed memory dropout does not introduce additional trainable parameters.
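A minimal sketch (ours) of memory dropout, zeroing whole encoder positions with probability \(\beta\); whether the kept positions are rescaled is not specified, so no rescaling is applied here:

```python
import numpy as np

def memory_dropout(H_enc, beta, seed=0):
    """Randomly drop encoder hidden states h_{c_i} before they serve as memory."""
    rng = np.random.default_rng(seed)
    keep = rng.random(H_enc.shape[0]) >= beta   # keep each position with prob 1 - beta
    return H_enc * keep[:, None]

H_dropped = memory_dropout(np.ones((6, 4)), beta=0.3)
```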
Figure 2: Our Dior-CVAE model architecture.
### Diffusion Priors
To improve the flexibility and mode coverage of the prior distribution, we propose to model the prior distribution of the latent variables conditioned on the dialog context using a diffusion model (§2.5). Formally, for the latent variable in the \(l\)-th layer \(\mathbf{z}^{l}\), we use the Markov Chain \(\mathbf{z}_{M}^{l},\cdots,\mathbf{z}_{1}^{l},\mathbf{z}_{0}^{l}\) to model the generative process, where \(\mathbf{z}_{0}^{l}=\mathbf{z}^{l}\). Then we can formulate the generative process as:
\[\mathbf{z}_{M}^{l} \sim\mathcal{N}(\mathbf{0},\mathbf{I}) \tag{15}\] \[\mathbf{z}_{t-1|t}^{l} \sim p_{\boldsymbol{\varphi}^{l}}(\mathbf{z}_{t-1}^{l}|\mathbf{z }_{t}^{l},\mathbf{c}),\forall t\in[M,\dots,1]\]
where \(p_{\boldsymbol{\varphi}^{l}}(\mathbf{z}_{t-1}^{l}|\mathbf{z}_{t}^{l},\mathbf{c })=\mathcal{N}(\boldsymbol{\mu}_{\boldsymbol{\varphi}^{l}}(\mathbf{z}_{t}^{l},t,\mathbf{c}),(\sigma_{t}^{l})^{2}\mathbf{I})\). \(\boldsymbol{\mu}_{\boldsymbol{\varphi}^{l}}(\mathbf{z}_{t}^{l},t,\mathbf{c})\) is defined in Eq. (18).
Hierarchical latent variables assume a dependent relationship between the latent variable of the \(l\)-th layer and latent variables in the lower layers (§3.1). To represent this in the prior distribution, we further use the noisy latent variables and conditional representations from all other layers as input for the denoising network in the \(l\)-th layer. Specifically, we concatenate the noisy latent variables and conditional representations from all layers as input to the denoising network:
\[\mathbf{e}_{z_{t}}^{l}=\begin{bmatrix}\mathbf{z}_{t}^{1}\\ \vdots\\ \mathbf{z}_{t}^{L}\end{bmatrix},\mathbf{e}_{c_{t}}^{l}=\begin{bmatrix} \mathbf{e}_{c}^{\mathsf{Enc}_{1}}\\ \vdots\\ \mathbf{e}_{c}^{\mathsf{Enc}_{L}}\end{bmatrix} \tag{16}\]
To condition on the time step scalar \(t\), we first map it to a sinusoidal positional encoding \(\mathsf{pe}(t)\in\mathbb{R}^{d}\)(Vaswani et al., 2017). The positional encoding is then passed through a feed-forward neural network to get the time embedding:
\[\mathbf{e}_{p}^{l}=\texttt{MLP}(\mathsf{pe}(t)) \tag{17}\]
The conditional information contained in the layer-wise context representation \(\mathbf{e}_{c}^{\mathsf{Enc}_{l}}\) can be combined with the time embedding using element-wise addition.
The denoising network can then be defined as:
\[\boldsymbol{\mu}_{\boldsymbol{\varphi}^{l}}(\mathbf{z}_{t}^{l},t,\mathbf{c}) =\texttt{MLP}(\mathbf{W}_{\mu}^{l}\begin{bmatrix}\mathbf{e}_{p}^{l}+ \mathbf{e}_{c_{t}}^{l}\\ \mathbf{e}_{z_{t}}^{l}\end{bmatrix}) \tag{18}\]
where \(\mathbf{W}_{\mu}^{l}\in\mathbb{R}^{d\times 2Ld}\) is the matrix of trainable parameters.
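A minimal sketch (ours) of the input construction for the denoising network in Eqs. (16)-(18). The per-layer broadcast of the time embedding, the tanh non-linearity, and the output dimension \(d\) are illustrative assumptions:

```python
import numpy as np

def sinusoidal_embedding(t, d):
    half = d // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

rng = np.random.default_rng(0)
L_layers, d, t = 4, 8, 10
e_z = rng.normal(size=L_layers * d)                    # stacked noisy latents z_t^{1..L}
e_c = rng.normal(size=L_layers * d)                    # stacked context representations
e_p = np.tile(sinusoidal_embedding(t, d), L_layers)    # time embedding, repeated per layer
W_mu = 0.01 * rng.normal(size=(d, 2 * L_layers * d))   # trainable in practice
mu_pred = np.tanh(W_mu @ np.concatenate([e_p + e_c, e_z]))  # predicted mean for layer l
```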
### End-to-end Training
As mentioned in SS2.2, the objective of CVAEs is to maximize the ELBO consisting of a reconstruction and a KL divergence term. To learn also the latent diffusion prior simultaneously, we follow Vahdat et al. (2021) to decompose the KL term into its negative entropy \(\mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{r},\mathbf{c})}[\log q_ {\boldsymbol{\phi}}(\mathbf{z}|\mathbf{r},\mathbf{c})]\) (Eq. 3) and cross-entropy \(\mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{r},\mathbf{c})}[-\log p_ {\boldsymbol{\psi}}(\mathbf{z}|\mathbf{c})]\). The ELBO in Eq. (1) can then be rewritten as:
\[\log p(\mathbf{r}|\mathbf{c})\geq\mathbb{E}_{\mathbf{z}\sim q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{r},\mathbf{c})}[\log p_{\boldsymbol{\theta}}(\mathbf{r}|\mathbf{z},\mathbf{c})]-\mathbb{E}_{\mathbf{z}\sim q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{r},\mathbf{c})}[\log q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{r},\mathbf{c})]+\mathbb{E}_{\mathbf{z}\sim q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{r},\mathbf{c})}[\log p_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{c})] \tag{19}\]
where the reconstruction term and the negative entropy term can be easily calculated by utilizing the re-parameterization trick (Kingma and Welling, 2014). The cross entropy term can be further expressed with the denoising score matching objective (Vahdat et al., 2021).
## 4 Experiments
All experiments are implemented using the OpenNMT-py library (Klein et al., 2017). We evaluate all models after 5,000 updating steps on the validation set and pick the model with the best validation results. We generate responses by sampling from the top-\(k\) tokens with the top-\(p\) predicted probabilities at each decoding step (Fan et al., 2018; Holtzman et al., 2020). We refer the reader to Appendix A for more detailed hyperparameter settings.
### Datasets & Metrics
We train and evaluate our proposed model on two common benchmarking datasets, DailyDialog (Li et al., 2017) and Persona-Chat (Zhang et al., 2018). The DailyDialog dataset is a collection of English dialogs whose topics include work, babies, and other aspects of daily life. Persona-Chat includes personas of the speakers in every dialog, in order to encourage models to generate more engaging and personalized responses.
We measure the lexical similarity of the generated responses and the references using BLEU-\(1/2\)(Papineni et al., 2002). To measure lexical diversity, we use Distinct-\(1/2\)(Li et al., 2016) to compute the ratio of distinct \(n\)-gram in responses.
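For reference, a minimal implementation (ours) of the Distinct-\(n\) metric with whitespace tokenization:

```python
def distinct_n(responses, n):
    """Ratio of distinct n-grams over all n-grams in the generated responses."""
    ngrams = []
    for r in responses:
        tokens = r.split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

print(distinct_n(["i am fine thank you", "i am not sure"], 2))
```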
### Baselines
We compare Dior-CVAE with state-of-the-art models for dialog response generation.
**iVAE\({}_{\mathbf{MI}}\)**(Fang et al., 2019): an implicit VAE model based on LSTMs that uses a NN to produce the posterior distribution.
**LIC**Golovanov et al. (2019): a PLM fine-tuned on the open-domain dialog datasets.
**Optimus**Li et al. (2020): a pre-trained Transformer VAE for text generation.
**ProphetNet**Qi et al. (2020): a PLM pre-trained on predicting more than one future token.
**MVP+S**Tang et al. (2022): a multi-task supervised pre-trained model for text generation.
**DRESS**Han et al. (2022): a PLM fine-tuned to produce a balanced semantic distribution over the generated responses.
**PLATO**Bao et al. (2020): a large-scale pre-trained DRG model that uses a discrete latent variable to address the one-to-many problem.
**DialogVED**Chen et al. (2022): a Transformer VAE pre-trained on large-scale dialog data in order to improve DRG.
## 5 Results
### Main Results on DRG
Table 1 presents the automatic evaluation results on the test sets of DailyDialog and Persona-chat. Our Dior-CVAE model generates higher-quality responses compared to the baselines, as demonstrated by higher results on most of the evaluation metrics on the two datasets.
On the DailyDialog dataset, Dior-CVAE achieves a higher score not only on the BLEU score, which measures the text similarity between the generated sentence and the reference sentence, but also on the Distinct score, which measures the diversity of the generated responses. While solely fine-tuned on the target dialog dataset, our model outperforms all baselines by a large margin. In addition, Dior-CVAE also performs better than models pre-trained on large-scale dialog data such as PLATO and DialogVED. This implies that the diffusion model can help to build the prior distribution more precisely.
On the Persona-chat dataset, even with fewer parameters, Dior-CVAE mostly achieves better performance than SOTA models with or without dialog pre-training. This demonstrates the expressive representation capability of the diffusion model in modelling the prior distribution, in combination with the PLMs, to generate high-quality responses.
### Ablation Study
To verify the effectiveness of each component in Dior-CVAE, we conduct an ablation study measuring their individual contributions to DRG.
Table 2 presents the results of the ablation study on the test set of the DailyDialog dataset. We find that the diffusion priors (DP) benefit both the BLEU-1/2 scores, which measure coherence, and the Distinct-1/2 scores, which evaluate the diversity of the generated responses. To further verify the performance gains originating from the memory dropout (MD) and hierarchical latent memory (HLM), we ablate the effects of these two components respectively. The hierarchical latent variables also contribute to both the BLEU-1/2 and Distinct-1/2 scores, similar to DP. This improvement indicates the importance of the latent computation and infusion in the encoder and decoder. Different from the two above, the memory dropout method mainly contributes to the Distinct-1/2 scores while having negative effects on the BLEU-1/2 scores. This behavior is expected, as the dropout aims at encouraging the decoder to guide the generation using latent variables. The diversity introduced by the latent variables might bring the generated responses far from the references.
### Human Evaluation
Since automatic metrics for open-domain text generation may not be consistent with human perceptions (Liu et al., 2016), we also conduct a human evaluation on the DailyDialog dataset2 with the help of three expert annotators. All annotators have an NLP background. We sample 100 dialogs in the intersection of DailyDialog and DailyDialog++ (Sai et al., 2020) to have multiple references for each dialog. For each dialog, we generate five responses using Dior-CVAE and DialogVED. For quality, each annotator is asked to judge the responses with regard to the following four criteria: Coherence (COH), Informativeness (INF), Safety (SAF) and Engagement (ENG) on a 3-point Likert scale. We describe the details of the criteria and the indication of each point in Appendix B. Furthermore, we automatically mark responses that do not violate any criteria as _valid_ responses, i.e., at most five generated responses per dialog can be valid. For the evaluation of diversity, annotators are asked to annotate the number of distinct meanings among the _valid_ responses.
Footnote 2: Only the DailyDialog dataset has an extended version annotated with multiple responses per dialog, namely DailyDialog++.
Table 3 reports the results of the human evaluation. Dior-CVAE achieves better performance than DialogVED not only on the quality metrics but also on the diversity metric. We further present some case studies in Table 4 to help illustrate the effectiveness of our model.
## 6 Related Work
Variational dialog generation. Dialog generation in the open domain is still a challenging task. Many methods suffer from the _safe and commonplace response_ problem, e.g., "I don't know." (Li et al., 2016). This is due to the one-to-many relationship between a dialog context and its potential responses, which cannot be captured by a vanilla seq2seq model. Conditional variational autoencoders (CVAE) (Kingma and Welling, 2014; Sohn et al., 2015) can effectively increase the diversity of generated responses by enriching the dialog context representations with latent variables (Zhao et al., 2017; Shen et al., 2017; Serban et al., 2017; Chen et al., 2018). However, in these works, the prior and posterior distributions of the latent variables are assumed to be isotropic Gaussians, which are too simple to match the real data distribution.
To address this, some studies (Serban et al., 2017; Zhao et al., 2018; Gao et al., 2019; Cai and Cai, 2022) introduce discrete latent variables to capture non-smooth regions of probability mass and multiple modes in the latent space. Another line of solutions builds another VAE or CVAE in the latent space to improve the representation ability of the dialog generation model and capture more complex relationships (Shen and Su, 2018; Shen and Su, 2018). There are also studies that utilize more advanced generative models like Generative Adversarial Networks (Goodfellow et al., 2020; Gu et al., 2019; Khan et al., 2020) or Normalizing Flows (Rezende and Mohamed, 2015; Luo and Chien, 2021).
| Model | BLEU-1 (DD) | BLEU-2 (DD) | Distinct-1 (DD) | Distinct-2 (DD) | BLEU-1 (PC) | BLEU-2 (PC) | Distinct-1 (PC) | Distinct-2 (PC) | Param (mil.) |
|---|---|---|---|---|---|---|---|---|---|
| _without dialog pre-training_ | | | | | | | | | |
| iVAE\({}_{\text{MI}}^{\dagger}\) | 30.9 | 24.9 | 2.9 | 25.0 | 38.2 | 27.7 | 0.9 | 8.2 | 3.9 |
| LIC | - | - | - | - | 40.5 | 32.0 | 1.9 | 11.3 | 117 |
| Optimus\({}^{\dagger}\) | 41.2 | 38.5 | 4.1 | 29.7 | 42.7 | 34.3 | 1.9 | 11.7 | 227 |
| ProphetNet | 44.3 | 39.2 | 3.9 | 21.1 | **46.6** | **39.1** | 1.3 | 7.5 | 332 |
| MVP+S\({}^{\dagger}\) | 45.7 | 42.9 | 5.1 | 27.1 | 43.4 | 35.8 | 2.0 | 11.1 | 406 |
| DRESS | - | - | 5.4 | 29.1 | - | - | - | - | 406 |
| Dior-CVAE (ours) | **50.3** | **46.7** | **7.0** | 35.1 | 42.6 | 36.1 | 2.8 | 26.5 | 237 |
| _with large-scale dialog pre-training_ | | | | | | | | | |
| PLATO | 39.7 | 31.1 | 5.4 | 29.1 | 40.6 | 31.5 | 2.1 | 12.1 | 115 |
| DialogVED | 43.1 | 37.0 | 5.8 | **37.2** | 42.8 | 35.7 | **3.2** | **27.3** | 392 |

Table 1: Main results on the test sets of the DailyDialog (DD) and Persona-Chat (PC) datasets. \(\dagger\) indicates that the results of these models are fine-tuned by this work. The highest results are highlighted in **bold**.
Table 2: Ablation study results over the test set of DailyDialog.
| Model | COH | INF | SAF | ENG | Diversity |
|---|---|---|---|---|---|
| Human | 1.908 | 1.862 | 2.000 | 1.947 | 4.831 |
| Dior-CVAE | 1.534 | 1.601 | 1.993 | 1.693 | 1.845 |
| DialogVED | 1.215 | 1.378 | 1.983 | 1.433 | 1.500 |

Table 3: Human evaluation results over the test set of DailyDialog. We sample 100 dialogs from the DailyDialog test set.
Diffusion models for text generation.Diffusion models (Sohl-Dickstein et al., 2015) have achieved great success in generating continuous data like image and audio (Kong et al., 2021; Mittal et al., 2021). Even so, extending the denoising diffusion models to natural language remains an open challenge due to the inherently discrete nature of texts.
To adapt this class of models to discrete data, some past studies (Austin et al., 2021; Hoogeboom et al., 2021, 2022) focused on defining a forward diffusion process directly on discrete data. However, the use of a diffusion model in the discrete space can be computationally expensive. A sentence of just ten words can have massive possible combinations of word sequences, making it difficult to model the probability distribution over the entire discrete space. As a result, the number of model parameters required to accurately represent the distribution can be extremely large, leading to slow training and high computational requirements. Other studies apply the continuous denoising diffusion model to text data. Based on non-autoregressive decoding, Diffusion-LM (Li et al., 2022) and DiffuSeq (Gong et al., 2022; Strudel et al., 2022) proposed a new kind of language model. However, language generation is currently dominated by large pre-trained auto-regressive transformers (Brown et al., 2020; Chowdhery et al., 2022). To better utilize the existing pre-trained language models, some other studies (Liu et al., 2022; Lovelace et al., 2022; Yu et al., 2022) apply the diffusion model in the latent space of text.
## 7 Conclusion & Future Work
In this paper, we propose Dior-CVAE, a novel variational dialog generation model. Our model derives and infuses hierarchical latent variables using attention mechanisms. We propose memory dropout in the latent infusion to address posterior collapse. Our model parameterizes the prior distribution using a diffusion model to improve the diversity of the generated responses. Our experiments on two popular dialog datasets verify the effectiveness of our methods.
## Limitations
As with any generative model, there are drawbacks to the Dior-CVAE model we constructed. The first is the instability of the training process. The problem stems from the sampling operation during the optimization of the diffusion model. The integration of random latent variables can also hurt the relevance of the generated responses to some extent. We believe that with further follow-up work and optimization, these issues can be addressed, and that this approach will serve as compelling preliminary work for diverse response generation using diffusion priors.
## Acknowledgements
This work has been funded by the European Union under the Horizon Europe grant No 101070351 (SERMAS). We thank our internal reviewers for their constructive comments on this work.
|
2304.02670 | Reconstructing Network Dynamics of Coupled Discrete Chaotic Units from
Data | Reconstructing network dynamics from data is crucial for predicting the
changes in the dynamics of complex systems such as neuron networks; however,
previous research has shown that the reconstruction is possible under strong
constraints such as the need for lengthy data or small system size. Here, we
present a recovery scheme blending theoretical model reduction and sparse
recovery to identify the governing equations and the interactions of weakly
coupled chaotic maps on complex networks, easing unrealistic constraints for
real-world applications. Learning dynamics and connectivity lead to detecting
critical transitions for parameter changes. We apply our technique to realistic
neuronal systems with and without noise on a real mouse neocortex and
artificial networks. | Irem Topal, Deniz Eroglu | 2023-04-05T18:02:55Z | http://arxiv.org/abs/2304.02670v1 | # Reconstructing Network Dynamics of Coupled Discrete Chaotic Units from Data
###### Abstract
Reconstructing network dynamics from data is crucial for predicting the changes in the dynamics of complex systems such as neuron networks; however, previous research has shown that the reconstruction is possible under strong constraints such as the need for lengthy data or small system size. Here, we present a recovery scheme blending theoretical model reduction and sparse recovery to identify the governing equations and the interactions of weakly coupled chaotic maps on complex networks, easing unrealistic constraints for real-world applications. Learning dynamics and connectivity lead to detecting critical transitions for parameter changes. We apply our technique to realistic neuronal systems with and without noise on a real mouse neocortex and artificial networks.
_Dynamical networks_, including power grids, food webs, climate networks, and neuron networks, described by dynamical units oscillating on complex networks, are fundamental components of our everyday lives. The ability to regulate network dynamics is crucial for predicting, and thus controlling, these systems' behavior to acquire the desired functionality. Neuron networks are an important class of dynamical networks for human wellness, since changes in the interactions can lead to undesired pathological situations. For instance, epileptic seizures are associated with emergent neural network synchronization when the dynamical network parameters change [1]. Consequently, it is vital to anticipate critical transitions to neuronal synchronization and to develop predictive technologies that detect early warning signals and prevent potential tragedies [2]. In the case of neuron network dynamics, consisting of intrinsic neuron function and the coupling scheme between neurons, the critical transitions to synchronization are not directly determinable. Therefore, the governing equation must be recovered from observations of the nodes to forecast the critical transitions due to parameter changes.
Reconstructing network dynamics from data is a very active research field [3; 4; 5; 6; 7; 8]. Various methods have been proposed to infer the connectivity matrix under some constraints, such as the need for the system to be at steady state [9] or requiring prior knowledge about the dynamics [10; 11; 12] or the coupling strength [13]. In addition to the studies that reveal the connectivity matrix by control signals or analytical solutions, statistical learning approaches such as compressed sensing were also introduced to learn entire unknown dynamics [14; 15], which also infers the connectivity structure. However, statistical learning techniques do not extend to large networks or require long time series measurements. A natural question is then whether revealing the network dynamics of weakly interacting chaotic oscillators is possible using relatively short data, without requiring knowledge of the system's nodal behavior and coupling scheme. This question is especially relevant in weak coupling regimes, in which the synchronization regime is unstable and the decay of correlations is exponential for chaotic oscillators, meaning that similarity measures cannot capture the interaction topology.
This Letter reports a dynamical network reconstruction approach from time series observations that integrates mean-field approaches from dynamical systems theory with statistical learning tools. Neural networks are described by chaotic isolated dynamics [16], weakly interacting nodes [17] and interactions through scale-free-type networks [18]. Our reconstruction approach assumes that we have the mentioned neuroscientific setting and access to all nodes' data, while the local dynamics of the nodes, the coupling function between them, and the interaction structure are unknown. Our methodology accurately identifies them using rather short time series and is independent of the network size, which is important since it is generally impossible to have long real-world observations, and real networks are large. Finally, as the reconstruction methodology includes mean-field approximations, the inferred model may not estimate the exact future states of the system due to the chaotic nature of the dynamical units. However, the reconstructed model allows us to predict the emergent collective behavior of dynamical networks under parameter change, which is crucial to avoid undesired behaviors in real-world applications such as epileptic seizures.
_Model.--_ The network dynamics of weakly coupled and identical \(n\) oscillators with interaction akin to diffusion is described by
\[\mathbf{x}_{i}(t+1)=\mathbf{f}(\mathbf{x}_{i}(t))+\sum_{j=1}^{n}w_{ij}\mathbf{H}(\mathbf{x}_{i}(t),\mathbf{x}_{j}(t))+\mathbf{\eta}_{i}(t) \tag{1}\]
where \(\mathbf{x}_{i}\in\mathbb{R}^{m}\), \(\mathbf{f}\colon\mathbb{R}^{m}\to\mathbb{R}^{m}\) represents the isolated dynamics of nodes and we assume it is chaotic [19].
\(\mathbf{H}\colon\mathbb{R}^{m}\times\mathbb{R}^{m}\to\mathbb{R}^{m}\) is a diffusive coupling function (\(\mathbf{H}(\mathbf{x},\mathbf{x})=\mathbf{H}(0)=0\text{ and }\mathbf{H}(\mathbf{x},\mathbf{y})=-\mathbf{H}(\mathbf{y},\mathbf{x})\)). \(\mathbf{W}=[w_{ij}]\in\mathbb{R}^{n\times n}\) is the adjacency matrix of the weighted and directed network, where \(w_{ij}\geq 0\) is the interaction strength from node-\(j\) to node-\(i\). The noise term, \(\mathbf{\eta}_{i}(t)\), is uniformly distributed with \(\|\mathbf{\eta}_{i}(t)\|\leq\eta_{0}\) for all nodes, where \(\eta_{0}\) is the noise intensity. This network dynamics, Eq. (1), is used to model numerous real-world applications including brain networks [20], power grids [21; 22], superconductors [23], and cardiac pacemaker cells [24].
_Reduction theorem.--_ A low-dimensional reduction of Eq. (5) is key for our network dynamics reconstruction approach. The reduction theorem applies a mean-field approach that relies on two main statements: (i) the statistical behavior of nodes' dynamics (frequency distribution of states) must be preserved and (ii) a large portion of nodes must be interacting with at least a few nodes in the network. These statements are satisfied with the given assumptions for the reduction theorem: _chaotic local dynamics_ of expanding maps and _weak coupling_ (to preserve the nodes' state distribution against fluctuations due to the interactions or external noise) and _scale-free networks_ (most of the nodes have small degrees \(k\sim n^{\epsilon}\), and some nodes are hubs with degrees \(k\sim n^{\frac{1}{2}+\epsilon}\) where \(\epsilon\) is an arbitrarily small number), which also mimic brain network dynamics. Using the theorem, the coupling term of Eq. (1) can be reduced as follows:
\[\sum_{j=1}^{n}w_{ij}\mathbf{H}(\mathbf{x}_{i},\mathbf{x}_{j})\approx k_{i}\int\alpha\tilde{\mathbf{H}}(\mathbf{x}_{i},\mathbf{x}_{j})\,d\mu(\mathbf{x}_{j})=k_{i}\alpha(\tilde{\mathbf{V}}(\mathbf{x}_{i})+\tilde{C})=k_{i}\mathbf{V}(\mathbf{x}_{i})+C\]
where \(\mathbf{V}\) is the effective coupling function, \(k_{i}=\sum_{j}w_{ij}\) is the incoming degree of node \(i\), \(\mathbf{H}=\alpha\tilde{\mathbf{H}}\), \(\alpha\) is a multiplier for the coupling function, \(\mu\) is a physical measure of the isolated dynamics, \(C\) is the integration constant, and the integral takes into account the cumulative effect of interactions on node-\(i\). As the coupling term is reduced to a function of an invariant measure \(\mu\), the reduction theorem works for a system in a steady state. Furthermore, to apply this mean-field-based reduction theorem, the statistical properties of the individual dynamical systems must be preserved, which is satisfied by chaotic oscillators and weak coupling (see Supp. Mat. II and Ref. [25]). Then Eq. (1) can be written as
\[\mathbf{x}_{i}(t+1)=\mathbf{f}(\mathbf{x}_{i}(t))+k_{i}\mathbf{V}(\mathbf{x}_{i}(t))+C+\mathbf{\kappa} _{i}(t)+\mathbf{\eta}_{i}(t) \tag{2}\]
where \(\mathbf{\kappa}_{i}(t)\) is a small fluctuation for an interval of time that is exponentially large and depends on the state of neighbors of the \(i\)th node.
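As a simple illustration (ours, not from the paper), for a purely diffusive coupling \(\tilde{\mathbf{H}}(\mathbf{x}_{i},\mathbf{x}_{j})=\mathbf{x}_{j}-\mathbf{x}_{i}\) the integral reduces to
\[\int(\mathbf{x}_{j}-\mathbf{x}_{i})\,d\mu(\mathbf{x}_{j})=\mathbf{m}_{\mu}-\mathbf{x}_{i},\qquad\mathbf{m}_{\mu}=\int\mathbf{x}\,d\mu(\mathbf{x}),\]
so that the effective coupling is the affine function \(\mathbf{V}(\mathbf{x}_{i})=-\alpha\,\mathbf{x}_{i}\), with the constant term \(k_{i}\alpha\,\mathbf{m}_{\mu}\) absorbed into \(C\).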
_Reconstruction scheme.--_ To learn the isolated dynamics \(\mathbf{f}\) and the coupling function \(\mathbf{H}\), we first need to classify nodes with respect to their degrees. According to the reduction theorem, nodes with a similar in-degree must have a similar governing equation. To identify the governing equation of each node independently of the other nodes, we use sparse regression, particularly the Sparse Identification of Nonlinear Dynamical Systems (SINDy) technique [26]. We denote the data collection of node-\(i\) generated by Eq. (1) as \(\mathscr{X}_{i}=[\mathbf{x}_{i}(1),\ldots,\mathbf{x}_{i}(T-1)]^{T}\) and \(\mathscr{X}_{i}^{\prime}=[\mathbf{x}_{i}(2),\ldots,\mathbf{x}_{i}(T)]^{T}\). SINDy performs a sparse regression for the linear equation \(\mathscr{X}_{i}^{\prime}=\mathbf{\Psi}(\mathscr{X}_{i})\mathbf{\Xi}_{i}\) to solve for \(\mathbf{\Xi}_{i}=[\xi_{i}^{1},\ldots,\xi_{i}^{p}]^{T}\), which is a vector of coefficients that defines the dynamics, where \(\mathbf{\Psi}=[\psi_{1},\ldots,\psi_{p}]\) represents a library of basis functions and is applied to \(\mathbf{x}_{i}\) as
\[\mathbf{\Psi}(\mathscr{X}_{i})=\begin{bmatrix}\psi_{1}(\mathbf{x}_{i}(1))&\psi_{2}(\mathbf{x}_{i}(1))&\ldots&\psi_{p}(\mathbf{x}_{i}(1))\\ \psi_{1}(\mathbf{x}_{i}(2))&\psi_{2}(\mathbf{x}_{i}(2))&\ldots&\psi_{p}(\mathbf{x}_{i}(2))\\ \vdots&\vdots&\ddots&\vdots\\ \psi_{1}(\mathbf{x}_{i}(T-1))&\psi_{2}(\mathbf{x}_{i}(T-1))&\ldots&\psi_{p}(\mathbf{x}_{i}(T-1))\end{bmatrix}\]
where \(p\) is the number of candidate functions in the library \(\mathbf{\Psi}\). (For a detailed description of the basis library, see Supp. Mat. Sec. VIII.) The goal of the sparse regression is to determine the dynamics with a small number of functions in \(\mathbf{\Psi}\) by finding the active coefficients in \(\mathbf{\Xi}_{i}\) (see Supp. Mat. Sec. VII). Consequently, we obtain a predicted model for each node using only the associated node's own data, and we expect to learn similar models for nodes with similar in-degree \(k_{i}\). A distance matrix is obtained from the normalized Euclidean distance to classify the predicted models, \(d_{ij}=(\sum_{k=1}^{p}\frac{1}{V_{k}}|\xi_{i}^{k}-\xi_{j}^{k}|^{2})^{1/2}\), where \(|\cdot|\) is the absolute value and \(V_{k}\) is the variance of the predicted coefficients of the \(k\)th function in \(\mathbf{\Psi}\). Assume \(\mathbf{\Xi}_{i}\) and \(\mathbf{\Xi}_{j}\) are two predicted models of nodes \(i\) and \(j\), each presented as a linear combination of some functions within the library \(\mathbf{\Psi}\). We get a smaller \(d_{ij}\) for a similar pair of nodes \(i\) and \(j\), while \(d_{ij}\) will be large for distinct nodes, such as a low-degree node and a hub. An example computation of \(d_{ij}\) can be found in Supp. Mat. Sec. VIII. The histogram \(P(D)\) is obtained from the row-sum of the distance matrix, \(D_{i}=\sum_{j}d_{ij}\), which provides an excellent classification of model similarities in terms of their degrees (Fig. 1 (a)). The low-degree nodes are expected to be located in the highest bin of the histogram since the network has many low-degree nodes.
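A minimal NumPy sketch (ours) of this per-node identification step: a polynomial library, sequentially thresholded least squares as a stand-in for SINDy, and the variance-normalized model distances \(d_{ij}\):

```python
import numpy as np

def library(x):                        # candidate functions psi_1..psi_p of a scalar state
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(Psi, y, threshold=0.05, n_iter=10):
    xi = np.linalg.lstsq(Psi, y, rcond=None)[0]
    for _ in range(n_iter):
        xi[np.abs(xi) < threshold] = 0.0
        active = np.abs(xi) >= threshold
        if active.any():
            xi[active] = np.linalg.lstsq(Psi[:, active], y, rcond=None)[0]
    return xi

def model_distances(Xi):               # Xi: (n_nodes, p) predicted coefficient vectors
    V = Xi.var(axis=0) + 1e-12         # variance of each coefficient across nodes
    diff = Xi[:, None, :] - Xi[None, :, :]
    return np.sqrt((diff**2 / V).sum(axis=-1))   # d_ij as defined above
```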
The models recovered for low-degree nodes are taken as our \(\mathbf{f}\), up to a negligible fluctuation \(\mathbf{\kappa}_{i}\). Contrarily, the distance \(D_{i}\) is expected to be large for the hub nodes as they are the rarest. Therefore, hubs are located in the lowest bin of the histogram \(P(D)\) (Fig. 1 (a)). Note that the success of the reconstruction depends on the separability of the low-degree nodes and hubs with respect to their degrees; the network topology therefore plays an important role here [2], as further illustrated in Supp. Mat. Sec. II.
We obtain the cumulative coupling effect on the hub by discarding the learned isolated dynamics contribution from the identified hub node's data as \(\mathscr{X}_{h}^{\prime}-\mathbf{f}(\mathscr{X}_{h})\), where \(h\) denotes the hub node. We fit a function to the cumulative coupling effect on the hub and learn the coupling function \(\mathbf{H}\) with a possible linear shift due to the integration constant \(C\) (Supp. Mat. Sec. II). The size of the linear shift can be easily estimated using \(\mathbf{H}(\mathbf{0})=\mathbf{0}\). Inferring the interaction function is vital to reveal the network [27], and learning \(\mathbf{H}\) from such reduced dynamics increases the feasibility of our approach. Introducing the Laplacian matrix \(\mathbf{L}\) with \(L_{ij}=\delta_{ij}k_{i}-w_{ij}\), where \(\delta_{ij}\) is the Kronecker delta (\(\delta_{ii}=1\) and \(\delta_{ij}=0\) if \(i\neq j\)), and assuming that \(\mathbf{H}\) is a linear function, we can rewrite Eq. (1) in a compact form as
\[\mathbf{X}(t+1)=\mathbf{F}(\mathbf{X}(t))-(\mathbf{L}\otimes\mathbf{H})(\mathbf{X}(t)), \tag{3}\]
where \(\mathbf{X}=[\mathbf{x}_{1},\cdots,\mathbf{x}_{n}]^{T}\), \(\mathbf{F}(\mathbf{X})=[\mathbf{f}(\mathbf{x}_{1}),\cdots,\mathbf{f}(\mathbf{x}_{n})]^{T}\) and \(\otimes\) is the Kronecker product (see Supp. Mat. Sec. I for the derivation in terms of the Laplacian matrix and [28]). Defining \(\mathbf{Y}(t)=\mathbf{X}(t+1)-\mathbf{F}(\mathbf{X}(t))\), Eq. (3) can be written as \(\mathbf{Y}=\mathbf{G}\mathbf{X}\) where \(\mathbf{G}=-(\mathbf{L}\otimes\mathbf{H})\). Finally, we complete the reconstruction by learning the sparse matrix \(\mathbf{G}\in\mathbb{R}^{mn\times mn}\), solving the linear equation \(\mathbf{Y}^{T}=\mathbf{X}^{T}\mathbf{G}^{T}\) with sparse regression, namely the Least Absolute Shrinkage and Selection Operator (LASSO) [29], as suggested in Ref. [30]. LASSO adds the \(\ell_{1}\) regularization penalty to the least-squares loss function to find the sparse coefficients (the links), and can be viewed as a compressed sensing approach [31]. Note that the linear equation can also be solved with the \(\ell_{2}\)-norm for long time series; however, since we are interested in short data (when the length of the time series \(T<mn\)), the compressed sensing approach must be employed [31]. Consequently, we learn the connectivity matrices \(\mathbf{G}\) and \(\mathbf{L}\), as seen in Fig. 1 (b). It is also important to note that learning the equations of all nodes by a single sparse regression without the reduction theorem is only possible for relatively small networks. The library extension due to the network size produces a statistically correlated data matrix on which the reconstruction quickly fails [32; 33]. A detailed discussion can be found in Supp. Mat. Sec. XII.
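A minimal sketch (ours) of this last step using scikit-learn's LASSO; with data stored row-wise, \(\mathbf{Y}=\mathbf{G}\mathbf{X}\) becomes \(\mathscr{Y}=\mathscr{X}\mathbf{G}^{T}\), so each row of \(\mathbf{G}\) is obtained from one \(\ell_{1}\)-penalized regression (the penalty `alpha` is illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct_G(X, Y, alpha=1e-3):
    """X, Y: arrays of shape (T, mn) holding X(t) and X(t+1) - F(X(t)) row-wise."""
    G = np.zeros((Y.shape[1], X.shape[1]))
    for i in range(Y.shape[1]):
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(X, Y[:, i])
        G[i] = model.coef_
    return G
```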
_Mouse neocortex reconstruction.--_ A weighted and directed neural network (987 nodes and 1536 edges), representing a mouse neocortex [34; 35], is considered (Supp. Mat. Sec. IV). To mimic neurons, we used electrically coupled Rulkov maps akin to diffusion as
\[u_{i}(t+1) = \frac{\beta}{1+u_{i}(t)^{2}}+v_{i}(t)-\sum_{j}L_{ij}u_{j}+\eta_{i}\] \[v_{i}(t+1) = v_{i}(t)-\nu u_{i}(t)-\sigma\]
Figure 1: Illustration of the reconstruction scheme. (a) Performing sparse regression on each observation gives predicted models for each node. Nodes with the same in-degree are reconstructed with the same predicted models, which allow us to classify nodes concerning their in-degrees. As low-degree nodes are abundant and represent the isolated dynamics with a negligible noise, learning \(\mathbf{f}\) is possible. Discarding the local dynamics, \(\mathbf{f}\), from the hubβs data gives the dominant coupling effect on the hub. Therefore, the coupling function \(\mathbf{H}\) can be learned. (b) After learning \(\mathbf{f}\) and \(\mathbf{H}\), the problem is defined as a linear problem for each node by subtracting the local dynamics, which obtains the remaining interaction effect for each node. Sparse regression on the remaining interaction dynamics of node-\(i\) where \(i=1,\ldots,n\) entirely reconstructs the dynamical networks. Nonzero \(\mathbf{G}_{i}^{T}\) elements are the incoming connections for \(i\)th node (see Supp. Mat. Sec. II for a step by step scheme).
where the fast variable \(u_{i}\) is the membrane potential and the slow variable \(v_{i}\) is the ion concentration variation [36]. The constant parameters \(\beta=4.1\) and \(\nu=\sigma=0.001\) are fixed for chaotic bursting dynamics [37].
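A minimal sketch (ours) of simulating the coupled Rulkov maps above on a given Laplacian \(\mathbf{L}\); the initial conditions are illustrative:

```python
import numpy as np

def simulate_rulkov(L, T, beta=4.1, nu=1e-3, sigma=1e-3, eta0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    u = rng.uniform(-1.0, 1.0, size=n)      # membrane potentials (fast variables)
    v = rng.uniform(-3.0, -2.0, size=n)     # ion concentration variations (slow variables)
    traj = np.empty((T, n))
    for t in range(T):
        traj[t] = u
        eta = rng.uniform(-eta0, eta0, size=n) if eta0 > 0 else 0.0
        u, v = beta / (1.0 + u**2) + v - L @ u + eta, v - nu * u - sigma
    return traj
```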
_Noise-free case.--_ Following the reconstruction scheme, the nodes are classified using the similarity histogram Fig. 2(a) for \(\eta_{0}=0\), while the pairwise Pearson correlations do not show any information about the degrees (inset in Fig. 2(a)). The difference between the return maps of a low-degree node and the hub is illustrated in Fig. 2(b), and the comparison against an isolated node dynamics is given in Supp. Mat. Sec. V-A. The effective coupling \(\mathbf{V}(\mathbf{x})\) is found to be approximately \([0.1(u+1),0]\), meaning that \(\alpha\) is \(0.1\) and the linear shift is \(1\) in the \(u\)-variable due to \(\mathbf{H}(\mathbf{0})=\mathbf{0}\) (Fig. 2(c)). Finally, we learn the network topology by solving the linear equation \(\mathbf{Y}=\mathbf{G}\mathbf{X}\) using the learned \(\mathbf{f}\) and \(\mathbf{\tilde{H}}\). We measure the reconstruction error using the fraction of the false negatives (positives) out of the positives (negatives), \(FNR\) (\(FPR\)). The \(FNR\) (\(FPR\)) equals \(0\) for perfect reconstruction (Supp. Mat. Sec. III). Here, we use the ground truth Laplacian matrix to assess the accuracy of the reconstruction. When the ground truth is not available, the learned model can be evaluated by cross-validation techniques (Supp. Mat. Sec. XI). The reconstruction error is found to be almost zero for data lengths \(T>200\) (Fig. 2(d)). Furthermore, a systematic evaluation of our approach with respect to penalty terms and time series lengths is performed. When the number of nodes \(n\) times the dimension of the local dynamics \(m\) exceeds the time series length, \(nm>T\), it corresponds to an underdetermined linear problem. Even for short data (\(T\approx 200\)), a successful reconstruction is possible with a small penalty term when \(mn=1974\) (Fig. 3(a)). Note that if the problem is overdetermined, \(nm<T\), then \(\ell_{2}\)-norm regression can be used for faster computations (Supp. Mat. Sec. IX). Furthermore, we provide reconstruction analyses for the Hénon map
Figure 2: Reconstruction procedure for weakly electrically coupled Rulkov maps on a real mouse neocortex network. (a) Node-similarity histogram determines the low-degree nodes (orange bar) and the hub (red bar). Inset: Histogram \(P(S)\) presents the correlations between the original time series. It is impossible to infer the connectivity structure from the correlations due to the chaotic nature of the Rulkov maps. (b) The return maps of a low-degree node and the hub are slightly different due to weak coupling effect. (c) The effective coupling, \(V(u)\), shifts through the horizontal direction due to the integral constant \(1\). (d) \(FNR\) for different lengths of time series. \(FPR\) are zero for all time series lengths.
and the Tinkerbell map in Supp. Mat. Sec. V. We also perform our procedure on a macaque monkey visual cortex network [35; 38]. This network is not scale-free; however, assuming the local dynamics and the hub node are known, we reconstructed the network dynamics (see Supp. Mat. Sec. VI).
_Noise effect on reconstruction performance.--_ To measure the robustness of our methodology against noise, we systematically perform the reconstruction procedure for various \(\eta_{0}\) values (Eq. (1)) on synthetic as well as real mouse neocortex networks. The reconstruction approach is robust to small noise intensities for the real-world example (Fig. 3(b)). The average robustness of the reconstruction procedure over 50 different directed and weighted random scale-free networks is given in Fig. 4. The scale-free networks are generated using the algorithm in Ref. [39], and weights are assigned uniformly from the interval \([0.8,1.2]\). The algorithm generates undesired self-loops and multiple edges, so we first remove them. As the system size grows, the reconstruction performance, measured by \(FNR\), degrades for increasing noise intensity \(\eta_{0}\) (Fig. 4(a)), since the noise becomes more dominant than the weak coupling, which prevents learning \(\mathbf{H}\) from the coupling effect. Similar to the real-world application, \(FPR\) results are also negligibly small in the noisy reconstruction case (Fig. 4(b)). We also performed experiments with denser networks, where the reconstruction technique fails; therefore, network sparsity is crucial for the reconstruction (see Supp. Mat. Sec. X).
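For completeness, a minimal sketch (ours) of the error measures; the threshold and the treatment of the diagonal are simplifications:

```python
import numpy as np

def fnr_fpr(L_true, L_est, tol=1e-6):
    true_links = np.abs(L_true) > tol
    est_links = np.abs(L_est) > tol
    fn = (true_links & ~est_links).sum()       # missed links
    fp = (~true_links & est_links).sum()       # spurious links
    return fn / max(true_links.sum(), 1), fp / max((~true_links).sum(), 1)
```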
_Prediction of emergent behavior.--_ Although we recover the true network structure, the reduction theory approximates the isolated dynamics \(\mathbf{f}\) and coupling function \(\mathbf{H}\) with small bounded fluctuations. As the local dynamics is chaotic and we consider possible noise effects, time-evolution forecasts of the given system can diverge from the original system.
Figure 4: Average noise effect on reconstruction performance illustrated using (a) \(FNR\) and (b) \(FPR\) for 50 realizations of simulations using random scale-free networks of system sizes \(n=200,400,600\), 800 and 1000. Time series length is fixed as 500 during all the simulations. The shaded regions represent the corresponding standard deviations.
Figure 3: (a) The reconstruction performance for different time series lengths and a series of penalty terms as \(FNR\). \(FPR\) always equals 0 for this case. (b) Noise effect on reconstruction performance on real network as \(FNR\) and \(FPR\).
However, since we recovered the network dynamics with high accuracy for the given data, the reconstructed model reads
\[\mathbf{X}(t+1)=\mathbf{F}(\mathbf{X}(t))+\gamma\mathbf{G}\mathbf{X}(t), \tag{4}\]
where \(\gamma\) is the coupling control parameter, with \(\gamma=1\) for the reconstructed model. It is then possible to detect the critical coupling strength factor \(\gamma_{c}\) and predict the emergent behavior of the dynamical network by fully analytical techniques if the reconstructed coupling function is the identity matrix [40]. For general coupling functions, the master stability function [41] or the connection graph method [42] can be employed for the detection (see Supp. Mat. Sec. XIII).
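A numerical alternative (our sketch, not the analytical treatment of Refs. [40; 41; 42]) is to iterate the reconstructed model, Eq. (4), over a range of \(\gamma\) and monitor a synchronization error; `step_model` applies the learned local dynamics to the stacked state vector:

```python
import numpy as np

def sync_error(traj):
    return np.mean(np.std(traj, axis=1))        # average spread across nodes

def scan_gamma(step_model, G, x0, gammas, T=2000, discard=1000):
    errors = []
    for g in gammas:
        x, traj = x0.copy(), []
        for t in range(T):
            x = step_model(x) + g * (G @ x)     # reconstructed network dynamics, Eq. (4)
            if t >= discard:
                traj.append(x.copy())
        errors.append(sync_error(np.array(traj)))
    return np.array(errors)                     # a sharp drop signals the critical gamma_c
```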
_Conclusions.--_ The network is fully recovered by our approach in the setting where \(\mathbf{f}\) is chaotic, the network is scale-free and the coupling is weak. Because of the weak coupling between nodes and the chaotic nature of the local dynamics, the correlation between measured time series decays exponentially; therefore, it is impossible to reconstruct such complex systems by conventional methods. Cutting-edge autonomous statistical learning techniques also fail when the network size is large. The key idea in our procedure is splitting the model equation into parts by the reduction theorem and inferring each unknown (\(\mathbf{f}\), \(\mathbf{H}\) and \(\mathbf{W}\)) one by one using sparse recovery. Although the reduction theorem is not established for general chaotic discrete maps, we showed its validity on various maps. Our approach guarantees the full reconstruction for the noise-free case and small noise intensities, even for relatively short time series, with no limitations on the network size. However, the quality of the reconstruction decreases with increasing noise, and the destructive effect of the noise also grows with increasing system size. Finally, obtaining the network dynamics allows one to predict the emergent behavior under parameter changes. There are regression-based approaches that can learn the network topology using short time series [8]; however, it is impossible to detect the critical transitions with only the connectivity. The ability to detect such transitions is crucial for applications such as a transition to collective behavior in the brain network, which can lead to undesired implications. Thus, it is desirable to put forward precautionary norms to avert potential disasters.
_Data and code availability.--_The data we used in this study can be regenerated by running the code which is publicly available on GitHub [43].
We are indebted to Tiago Pereira, Matteo Tanzi, Sajjad Bakrani, Arash Rezaeinazhad, Thomas Peron and Jeroen Lamb for enlightening discussions. This work is supported by The Scientific and Technological Research Council of Turkey (TUBITAK) under Grant No. 118C236. D.E. acknowledges support from the BAGEP Award of the Science Academy.
|
2310.06589 | Hidden symmetry of Bogoliubov de Gennes quasi-particle eigenstates and
universal relations in flat band superconducting bipartite lattices | Unconventional flat band (FB) superconductivity, as observed in van der Waals
heterostructures, could open promising avenues towards high-T$_c$ materials. In
FBs, pairings and superfluid weight scale linearly with the interaction
parameter, such an unusual behaviour justifies and encourages strategies to
promote FB engineering. Bipartite lattices (BLs) which naturally host FBs could
be particularly interesting candidates. Within Bogoliubov de Gennes theory and
in the framework of the attractive Hubbard model in BLs, a hidden symmetry of
the quasi-particle eigenstates is revealed. As a consequence, we demonstrate
universal relations for the pairings and the superfluid weight that are
independent of the characteristics of the hopping term. Remarkably, it is shown
that these general properties are insensitive to disorder as long as the
bipartite character is protected. | G. Bouzerar, M. Thumin | 2023-10-10T12:54:59Z | http://arxiv.org/abs/2310.06589v3 | # Universal relations in flat band superconducting bipartite lattices
###### Abstract
Unconventional flat band (FB) superconductivity, as observed in van der Waals heterostructures, could open promising avenues towards high-T\({}_{c}\) materials. Indeed, in FBs, pairings and superfluid weight scale linearly with the interaction parameter, such an unusual behaviour justifies strategies to promote FB engineering. Bipartite lattices (BLs) which naturally host FBs could be particularly interesting candidates. By revealing a hidden symmetry of the quasi-particle eigenstates, we demonstrate that pairings and superfluid weight obey universal relations in BLs. Remarkably, these general properties are insensitive to disorder as long as the bipartite character is protected.
Over the past decade, one witnesses a growing interest for a family of emerging materials: the flat band (FB) systems [1; 2; 3; 4; 5]. In FB compounds, the kinetic energy being quenched, the electron-electron interaction energy is the unique relevant energy scale giving access to strongly correlated physics. FBs are found at the origin of an unconventional form of superconductivity (SC) of interband nature. In FBs, superfluid weight (SFW) and critical temperature scale linearly with the effective interaction amplitude \(|U|\)[6; 7; 8] which contrasts with the dramatic \(e^{-1/(\rho(E_{F})|U|)}\) scaling in standard BCS theory. Recently, it has been shown that the Bogoliubov de Gennes (BdG) approach is astonishingly quantitatively accurate in describing SC in FBs. Surprisingly, the agreement revealed between exact methods and BdG concerns systems where quantum fluctuations are the strongest: one-dimensional systems such as the sawtooth chain, the Creutz ladder and other FB system. The SFW obtained with BdG and that calculated with density matrix renormalization group (DMRG) were found to agree impressively [9].
In the framework of the attractive Hubbard model (AHM) in bipartite lattices (BLs) and within BdG theory, we demonstrate universal sum-rules and other relations that pairings and SFW obey in half-filled and partially filled FB systems. Some of the relations proved here in a general context have been established for peculiar two dimensional BLs assuming uniform pairings in each sublattice [8; 10]. For the sake of concreteness, all the properties that will be proved are illustrated in the Supplemental Material [11] on a typical two dimensional BL. Finally, in this work, we restrict ourself to \(T=0\).
BLs consist in two sublattices \(\mathcal{A}\) and \(\mathcal{B}\), with respectively \(\Lambda_{A}\) and \(\Lambda_{B}\) orbitals per cell where \(\mathcal{A}=\{A_{1},A_{2},,...,A_{\Lambda_{A}}\}\) and \(\mathcal{B}=\{B_{1},B_{2},...,B_{\Lambda_{B}}\}\). In the absence of interaction, the spectrum consists exactly in \(N_{fb}=\Lambda_{B}-\Lambda_{A}\) FBs located at \(E=0\) and \(2\,\Lambda_{A}\) dispersives bands (DBs) being symmetric because of chiral symmetry. The total number of orbitals per cell is defined as \(\Lambda=\Lambda_{B}+\Lambda_{A}\). Here, \(\Lambda_{B}\) > \(\Lambda_{A}\) is assumed. The AHM reads,
\[\hat{H}=\sum_{Ii,Jj,\sigma}t^{IJ}_{A_{i}B_{j}}\hat{c}^{\dagger}_{IA_{i},\sigma}\hat{c}_{JB_{j},\sigma}-|U|\sum_{I,l,\lambda=A,B}\hat{n}_{I\lambda_{l},\uparrow}\hat{n}_{I\lambda_{l},\downarrow}-\mu\sum_{I,l,\lambda=A,B,\sigma}\hat{n}_{I\lambda_{l},\sigma}, \tag{1}\]

where \(I,J\) label the cells, \(t^{IJ}_{A_{i}B_{j}}\) are the hopping integrals connecting the two sublattices, \(|U|\) is the strength of the on-site attraction and \(\mu\) the chemical potential. Within BdG theory, the local pairings are \(\Delta^{\lambda}_{l}=-|U|\langle\hat{c}_{I\lambda_{l},\downarrow}\hat{c}_{I\lambda_{l},\uparrow}\rangle\) (\(\lambda=A,B\)); they are collected in the diagonal matrices \(\hat{\Delta}^{A}\) and \(\hat{\Delta}^{B}\) entering the Bogoliubov-de Gennes Hamiltonian \(\hat{H}_{BdG}\). The pairings are taken real, and those of the \(\mathcal{A}\) sublattice are chosen positive.
_Hidden symmetry in the BdG eigenstates.-_ Let us define positive (respectively negative) eigenstates those with positive (respectively negative) energy. Consider \(|\Psi\rangle=(|{\bf u}\rangle,|{\bf v}\rangle)^{t}\) an eigenstate of energy \(E\), where \(|{\bf u}\rangle=(|{\bf a}\rangle,|{\bf b}\rangle)^{t}\) and \(|{\bf v}\rangle=(|\bar{\bf a}\rangle,|\bar{\bf b}\rangle)^{t}\). \(|{\bf a}\rangle\) and \(|\bar{\bf a}\rangle\) (respectively \(|{\bf b}\rangle\) and \(|\bar{\bf b}\rangle\)) are column of length \(\Lambda_{A}\) (respectively \(\Lambda_{B}\)).
**Lemma 1**: _Positive (respectively negative) eigenstates can be split into two subsets \({\cal S}_{+}\) and \({\cal S}_{-}\), where \(|\Psi\rangle\in{\cal S}_{+}\Leftrightarrow|{\bf v}\rangle=(|{\bf a}\rangle,-|{\bf b}\rangle)^{t}\), and \(|\Psi\rangle\in{\cal S}_{-}\Leftrightarrow|{\bf v}\rangle=(-|{\bf a}\rangle,|{\bf b}\rangle)^{t}\)._
Proof: At half-filling \(\hat{H}_{BdG}\) is invariant under particle-hole (PH) transformation which reads,
\[\begin{bmatrix}\hat{\bf C}_{A|}^{\dagger}\\ \hat{\bf C}_{B|}^{\dagger}\end{bmatrix}\stackrel{{ PH}}{{ \Longrightarrow}}\begin{bmatrix}\hat{\bf C}_{A|}\\ -\hat{\bf C}_{B\dagger}\end{bmatrix}. \tag{5}\]
Hence, \(|\Psi\rangle=(|{\bf a}\rangle,|{\bf b}\rangle,|\bar{\bf a}\rangle,|\bar{\bf b }\rangle)^{t}\stackrel{{ PH}}{{\Rightarrow}}|\Psi_{1}\rangle=(| \bar{\bf a}\rangle,-|\bar{\bf b}\rangle,|{\bf a}\rangle,-|{\bf b}\rangle)^{t}\). PH symmetry implies \(|\Psi_{1}\rangle=e^{i\varphi}|\Psi\rangle\), leading to \(e^{i\varphi}=\pm 1\). Thus, we are left with two possibilities: (1) \(|\Psi\rangle\in{\cal S}_{+}\) or (2) \(|\Psi\rangle\in{\cal S}_{-}\) corresponding respectively to \(\varphi=0\) and \(\varphi=\pi\). Notice that, if \(|\Psi_{+}\rangle\in{\cal S}_{+}\) has energy \(E\), then the eigenstate \(\hat{U}|\Psi_{+}\rangle\in{\cal S}_{-}\) has energy \(-E\), since \(\hat{U}\hat{H}_{BdG}\hat{U}^{\dagger}=-\hat{H}_{BdG}\) where \(\hat{U}=\begin{bmatrix}0&\hat{\mathbb{1}}_{\Lambda}\\ -\hat{\mathbb{1}}_{\Lambda}&0\end{bmatrix}\).
We proceed further and demonstrate a second lemma that is crucial for what follows.
**Lemma 2:**_For any \(|U|\neq 0\), \({\cal S}_{-}\) (respectively \({\cal S}_{+}\)) consists of exactly \(\Lambda_{B}\) (respectively \(\Lambda_{A}\)) eigenstates of positive or zero energy and \(\Lambda_{A}\) (respectively \(\Lambda_{B}\)) eigenstates of strictly negative energy._
Proof: For what follows, for a given square matrix \(\hat{M}\), we define \(In(\hat{M})=(n_{m},n_{p})\) where \(n_{m}\) is the number of strictly negative eigenvalues and \(n_{p}\) that of the positive or zero eigenvalues. Now, consider \(|\phi_{n}^{s}\rangle=(|u_{n}^{s}\rangle,|v_{n}^{s}\rangle)^{t}\) a QP eigenstate of energy \(E_{n}^{s}\) in \({\cal S}_{s}\) (\(s=\pm\)), using Eq.(2) one finds,
\[\hat{\cal H}^{s}|u_{n}^{s}\rangle=E_{n}^{s}|u_{n}^{s}\rangle, \tag{6}\]
where the \(\Lambda\times\Lambda\) matrices are,
\[\hat{\cal H}^{+}=\begin{bmatrix}\hat{\Delta}^{A}&\hat{h}_{AB}\\ \hat{h}_{AB}^{\dagger}&-\hat{\Delta}^{B}\end{bmatrix},\,\hat{\cal H}^{-}= \begin{bmatrix}-\hat{\Delta}^{A}&\hat{h}_{AB}\\ \hat{h}_{AB}^{\dagger}&\hat{\Delta}^{B}\end{bmatrix}. \tag{7}\]
For infinitesimal \(|U|\), apply a degenerate perturbation theory to the \(N_{fb}\) FB eigenstates of \(\hat{\cal H}^{\pm}|_{|U|=0}\) which have weight on \({\cal B}\) orbitals only. The projection of \(\hat{\Delta}^{B}\) in the FB eigenspace being positive definite, it implies that the energy shift of each FB eigenstates of \(\hat{\cal H}^{+}\) is strictly negative, and strictly positive for those of \(\hat{\cal H}^{-}\). In other words, it means that \(In(\hat{\cal H}^{-})=(\Lambda_{A},\Lambda_{B})\) and \(In(\hat{\cal H}^{+})=(\Lambda_{B},\Lambda_{A})\).
Now, assume, there exist a peculiar value \(|U_{c}|\) such that for \(|U|<|U_{c}|\), \(In(\hat{\cal H}^{-})=(\Lambda_{A},\Lambda_{B})\) and \(In(\hat{\cal H}^{+})=(\Lambda_{B},\Lambda_{A})\), and for \(|U|>|U_{c}|\), \(In(\hat{\cal H}^{+})=(\Lambda_{B}-1,\Lambda_{A}+1)\) and \(In(\hat{\cal H}^{-})=(\Lambda_{A}+1,\Lambda_{B}-1)\). At \(|U_{c}|\), \(\hat{\cal H}^{-}\) and \(\hat{\cal H}^{+}\) have at least an eigenstate with zero energy, \(|u_{0}^{s}\rangle=(|{\bf a}_{0}^{s}\rangle,|{\bf b}_{0}^{s}\rangle)^{t}\), \(s=\pm\). From Eq.(7) and for \(s=+\),
\[(\hat{\Delta}^{B}+\hat{h}_{AB}^{\dagger}(\hat{\Delta}^{A})^{-1} \hat{h}_{AB})|{\bf b}_{0}^{+}\rangle=0,\] \[|{\bf a}_{0}^{+}\rangle=-(\hat{\Delta}^{A})^{-1}\hat{h}_{AB}|{\bf b }_{0}^{+}\rangle. \tag{8}\]
\(\Delta_{i}^{A}>0\) has been used. \(\hat{\Delta}^{B}+\hat{h}_{AB}^{\dagger}(\hat{\Delta}^{A})^{-1}\hat{h}_{AB}\) is the sum of a positive definite matrix and positive semi definite one, their sum is positive definite and hence zero cannot be an eigenvalue, \(|{\bf b}_{0}^{+}\rangle=[{\bf 0}]_{\Lambda_{B}}\) and \(|{\bf a}_{0}^{+}\rangle=[{\bf 0}]_{\Lambda_{A}}\) where \(|{\bf 0}]_{N}\) is the column vector with N zeros. The same proof applies for \(|u_{0}^{-}\rangle\). This proves the second lemma.
_Pairing sum rule in half-filled bipartite lattices.-_ We focus on the negative eigenstates of \(\hat{H}_{BdG}\). We define \(|\psi_{n+}^{<}\rangle\) where \(n=1,....,\Lambda_{B}\) the normalized eigenstates in \({\cal S}_{+}\) and similarly \(|\psi_{m-}^{<}\rangle\) where \(m=1,....,\Lambda_{A}\) those in \({\cal S}_{-}\). We write \(|\psi_{n+}^{<}\rangle=(|u_{n}^{+}\rangle,|v_{n}^{+}\rangle)^{t}\) and \(|\psi_{m-}^{<}\rangle=(|u_{m}^{-}\rangle,|v_{m}^{-}\rangle)^{t}\). At \(T=0\), pairings are given by,
\[\Delta_{l}^{\lambda}=-\frac{|U|}{N_{c}}\Big{(}{\sum_{{\bf k},s=n+} }\langle\psi_{s}^{<}|\hat{O}_{\lambda_{l}}|\psi_{s}^{<}\rangle+{\sum_{{\bf k},s=m- }}\langle\psi_{s}^{<}|\hat{O}_{\lambda_{l}}|\psi_{s}^{<}\rangle\Big{)}, \tag{9}\]
where \(\hat{O}_{\lambda_{l}}=\hat{c}_{{\bf k}\lambda_{l},\downarrow}\hat{c}_{{\bf k} \lambda_{l},\uparrow}\), \(\lambda=A,B\) and \(l=1,...,\Lambda_{\lambda}\), \(n\) runs over \(1,...,\Lambda_{B}\), and \(m\) over \(1,...,\Lambda_{A}\), \(N_{c}\) being the number of cells. Eq.(9) leads to,
\[\Delta_{i}^{A} = -\frac{|U|}{N_{c}}\Big{(}{\sum_{{\bf k},n=1}^{n=\Lambda_{B}}}|a_{ ni}^{+}|^{2}-{\sum_{{\bf k},m=1}^{m=\Lambda_{A}}}|a_{mi}^{-}|^{2}\Big{)},\] \[\Delta_{j}^{B} = \frac{|U|}{N_{c}}\Big{(}{\sum_{{\bf k},n=1}^{n=\Lambda_{B}}}|b_{nj} ^{+}|^{2}-{\sum_{{\bf k},m=1}^{m=\Lambda_{A}}}|b_{mj}^{-}|^{2}\Big{)}. \tag{10}\]
The eigenstates beeing normalized, one finally finds the sum-rule,
\[\sum_{j=1}^{\Lambda_{B}}\Delta_{j}^{B}-\sum_{i=1}^{\Lambda_{A}} \Delta_{i}^{A}=\frac{|U|}{2}(\Lambda_{B}-\Lambda_{A}). \tag{11}\]
A similar expression has been obtained recently in Ref.[1], where uniform pairings on each sublattice were assumed. In addition, Eq.(10) provides an upper bound for the pairings: for any \(j\), \(\Delta_{j}^{B}\leq\frac{|U|}{N_{c}}(\sum_{{\bf k},n=1}^{n=\Lambda_{B}}|b_{nj}^{+}|^{2}+\sum_{{\bf k},m=1}^{m=\Lambda_{A}}|b_{mj}^{-}|^{2})=|U|\langle\hat{n}_{B_{j},\uparrow}\rangle=\frac{|U|}{2}\). Similarly, for any \(i\), one finds \(\Delta_{i}^{A}\leq\frac{|U|}{2}\).
If \(\langle\Delta^{\lambda}\rangle\), \(\lambda=A,B\), denote the average of the pairings on each sublattice, then,
\[|U|(\langle\Delta^{B}\rangle-\langle\Delta^{A}\rangle)=\frac{1}{\Lambda_{B}}(F _{1}-F_{2}), \tag{12}\]
where \(F_{1}=\frac{1}{N_{c}}\sum_{{\bf k},j,n}|b_{nj}^{+}|^{2}+\frac{r}{N_{c}}\sum_{{ \bf k},i,n}|a_{ni}^{+}|^{2}\) and \(F_{2}=\frac{1}{N_{c}}\sum_{{\bf k},j,m}|b_{mj}^{-}|^{2}+\frac{r}{N_{c}}\sum_{{ \bf k},i,m}|a_{mi}^{-}|^{2}\), with \(r=\frac{\Lambda_{B}}{\Lambda_{A}}\geq 1\). Eigenstates being normalized, implies \(F_{1}\geq\frac{\Lambda_{B}}{2}\) and \(F_{2}\leq\frac{\Lambda_{B}}{2}\) which demonstrates,
\[\langle\Delta_{B}\rangle\geq\langle\Delta_{A}\rangle. \tag{13}\]
Combining this equation and Eq.(3) gives,
\[\frac{\langle\Delta_{B}\rangle}{|U|}\geq\frac{r-1}{2r}. \tag{14}\]
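For completeness, this bound can be read off from the sum rule Eq.(11) together with the positivity of the \(\mathcal{A}\)-sublattice pairings:

\[\Lambda_{B}\langle\Delta_{B}\rangle=\frac{|U|}{2}(\Lambda_{B}-\Lambda_{A})+\Lambda_{A}\langle\Delta_{A}\rangle\geq\frac{|U|}{2}(\Lambda_{B}-\Lambda_{A})\;\;\Rightarrow\;\;\frac{\langle\Delta_{B}\rangle}{|U|}\geq\frac{\Lambda_{B}-\Lambda_{A}}{2\Lambda_{B}}=\frac{r-1}{2r}.\]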
For instance, in the stub lattice (\(r=2\)), it has recently been found numerically that the lower bound of \(\frac{\langle\Delta_{B}\rangle}{|U|}\) is 0.25, which coincides exactly with \(\frac{r-1}{2r}\) [14].
_Pairings in partially filled flat bands.-_Partially filled FBs for which \(\mu=-|U|/2\), correspond to electron density \(\nu\) varying between \(\nu_{min}=2\Lambda_{A}\) and \(\nu_{max}=2\Lambda_{B}\). For the half-filled case we introduce \(\overline{\nu}=\Lambda_{A}+\Lambda_{B}\). To calculate the pairings for \(\nu_{min}\leq\nu\leq\nu_{max}\), we use the pseudo-spin SU(2) symmetry of the AHM in BLs [4; 5; 6], which is a form of rotation invariance in particle-hole space. The AHM is re-expressed,
\[\hat{H}= \sum_{Ii,J_{2},\sigma}t_{A_{i}B_{j}}^{IJ}\hat{c}_{IA_{i},\sigma} ^{\dagger}\hat{c}_{JB_{j},\sigma}-\frac{2}{3}|U|\sum_{I,\lambda=A,B}\hat{\bf T }_{I\lambda_{l}}\cdot\hat{\bf T}_{I\lambda_{l}} \tag{15}\] \[-(\mu+|U|/2)\sum_{I,l,\lambda=A,B}\hat{n}_{I\lambda_{l}}.\]
The components of the pseudo-spin operator read,
\[\hat{T}_{I\lambda_{l}}^{+} =\eta_{\lambda}\hat{c}_{I\lambda_{l},\uparrow}\hat{c}_{I\lambda_ {l},\downarrow}, \tag{16}\] \[\hat{T}_{I\lambda_{l}}^{-} =\eta_{\lambda}\hat{c}_{I\lambda_{l},\downarrow}^{\dagger}\hat{c} _{I\lambda_{l},\uparrow}^{\dagger},\] (17) \[\hat{T}_{\bar{I}\lambda_{l}} =\frac{1}{2}(1-\hat{n}_{I\lambda_{l}}), \tag{18}\]
\(\eta_{\lambda}=1\) (respectively \(-1\)) if \(\lambda=A\) (respectively \(B\)). These operators obey the usual commutation relations of spin operators. In partially filled FBs, the last term (right side) in Eq.(15) vanishes and,\([\widehat{H},\hat{T}^{\pm}]=[\widehat{H},\hat{T}^{z}]=0\), where \(\hat{\bf T}=\sum_{I,l,\lambda=A,B}\hat{\bf T}_{I\lambda_{l}}\) is the total pseudo-spin operator. The Hamiltonian has pseudospin SU(2) symmetry. \(\langle\hat{\bf T}_{I\lambda_{l}}\rangle_{0}\) is cell independent and,
\[\langle\hat{\bf T}_{\lambda_{l}}\rangle_{0}=\begin{bmatrix}\langle\hat{T}_{ \lambda_{l}}^{x}\rangle_{0}=\eta_{\lambda}\Re(\frac{\Delta_{l}^{\lambda}}{|U|} )\\ \langle\hat{T}_{\lambda_{l}}^{y}\rangle_{0}=\eta_{\lambda}\Im(\frac{\Delta_{l}^{ \lambda}}{|U|})\\ \langle\hat{T}_{\lambda_{l}}^{z}\rangle_{0}=\frac{1}{2}(1-n_{\lambda_{l}}) \end{bmatrix}, \tag{19}\]
\(\hat{H}_{BdG}\) is invariant under any identical rotation of the pseudo-spins. We consider \(\mathcal{R}_{y}(\theta)\) the rotation of angle \(\theta\) around the \(y\)-axis,
\[\begin{bmatrix}\hat{c}_{I\lambda_{l},\uparrow}\\ \hat{c}_{I\lambda_{l},\downarrow}\end{bmatrix}\stackrel{{\mathcal{R }_{y}(\theta)}}{{\Longrightarrow}}\begin{bmatrix}\cos(\theta/2)\hat{c}_{I \lambda_{l},\uparrow}-\eta_{\lambda}\sin(\theta/2)\hat{c}_{I\lambda_{l}, \downarrow}^{\dagger}\\ \cos(\theta/2)\hat{c}_{I\lambda_{l},\downarrow}+\eta_{\lambda}\sin(\theta/2) \hat{c}_{I\lambda_{l},\uparrow}^{\dagger}\end{bmatrix}. \tag{20}\]
Let us assume that the self-consistent solution for \(\nu=\bar{\nu}\) is known. The expectation value of the corresponding pseudo-spins reads,
\[\mathbf{\bar{T}}_{\lambda_{l}}=\begin{bmatrix}\bar{T}_{\lambda_{l}}^{x}=\eta_{\lambda}\frac{\bar{\Delta}_{l}^{\lambda}}{|U|}\\ \bar{T}_{\lambda_{l}}^{y}=0\\ \bar{T}_{\lambda_{l}}^{z}=0\end{bmatrix}. \tag{21}\]
\(\bar{T}_{\lambda_{l}}^{y}\) and \(\bar{T}_{\lambda_{l}}^{z}\) vanish since (i) the pairings are taken real and (ii) because of the uniform density theorem [12]. Applying \(\mathcal{R}_{y}(\theta)\) to the pseudo-spins leads to a BdG solution corresponding to a partial filling of the FBs,
\[\Delta_{l}^{\lambda}=\bar{\Delta}_{l}^{\lambda}\cos(\theta), \tag{22}\]
\[n_{\lambda_{l}}=1+2\eta_{\lambda}\frac{\bar{\Delta}_{l}^{\lambda}}{|U|}\sin( \theta). \tag{23}\]
The corresponding filling factor is,
\[\nu(\theta)=\overline{\nu}+\sin(\theta)(\Lambda_{B}-\Lambda_{A}). \tag{24}\]
We emphasize that Eq.(3) has been used. Hence, \(\theta=\pi/2\) corresponds to the fully filled FBs, i.e. \(\nu=\nu_{max}\) while \(\theta=-\pi/2\) to empty FBs or \(\nu=\nu_{min}\). Combining Eq.(22) and Eq.(24), one obtains,
\[\Delta_{l}^{\lambda} = \bar{\Delta}_{l}^{\lambda}f(\nu), \tag{25}\]
\[n_{\lambda_{l}} = 1+2\eta_{\lambda}\frac{\bar{\Delta}_{l}^{\lambda}}{|U|}\sqrt{1-f^{2} (\nu)}, \tag{26}\]
where,
\[f(\nu)=\frac{2}{\nu_{max}-\nu_{min}}\sqrt{(\nu-\nu_{min})(\nu_{max}-\nu)}. \tag{27}\]
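For clarity, the elimination of \(\theta\) follows from Eq.(24): for \(\theta\in[-\pi/2,\pi/2]\),

\[\sin\theta=\frac{\nu-\overline{\nu}}{\Lambda_{B}-\Lambda_{A}},\qquad\cos\theta=\sqrt{1-\sin^{2}\theta}=\frac{2\sqrt{(\nu-\nu_{min})(\nu_{max}-\nu)}}{\nu_{max}-\nu_{min}}=f(\nu),\]

since \((\nu-\nu_{min})(\nu_{max}-\nu)=(\Lambda_{B}-\Lambda_{A})^{2}-(\nu-\overline{\nu})^{2}\) and \(\nu_{max}-\nu_{min}=2(\Lambda_{B}-\Lambda_{A})\); inserting this into Eq.(22) and Eq.(23) gives Eq.(25) and Eq.(26).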
Similar expressions have been derived in Ref.[1], where a uniform pairing is forced on the orbitals on the dominant lattice. Our proof is general, without restriction on the pairings, and requires only that the sum-rule given in Eq.(3) has been proved.
_The superfluid weight in partially filled FBs.-_ Here, we
derive a general relationship between \(D^{s}\) in partially filled FBs and that of half-filled BL. The SFW is defined as [2; 3],
\[D^{s}_{\mu}=\frac{1}{N_{c}}\frac{\partial^{2}\Omega(\mathbf{q})}{ \partial q_{\mu}^{2}}\Big{|}_{\mathbf{q}=\mathbf{0}}, \tag{28}\]
\(\Omega(\mathbf{q})\) is the grand-potential and \(q\) mimics the effect of a vector potential, introduced by a standard Peierls substitution.
Recently, it has been argued that when the quantum metric (QM) [20; 21] associated to FBs is not minimal, corrections should be included in Eq.(6) [1]. Contrary to \(D^{s}_{\mu}\), the QM which measures the typical spreading of the FB eigenstates is a quantity which depends on the orbital positions. However, for any BL, one can always find the orbital positions which minimize the QM, therefore, for which Eq.(6) is correct. It generally corresponds to the most symmetrical positions of the orbitals within the cell. Following Refs.[22] and [23] leads to,
\[D^{s}_{\mu}=\frac{2}{N_{c}}\sum_{\mathbf{k},mn}\frac{J^{nm}_{\mu} }{E^{<}_{n}-E^{>}_{m}}, \tag{29}\]
where \(J^{nm}_{\mu}=|\langle\Psi^{<}_{n}|\hat{V}_{\mu}|\Psi^{>}_{m}\rangle|^{2}-|\langle\Psi^{<}_{n}|\hat{\Gamma}\hat{V}_{\mu}|\Psi^{>}_{m}\rangle|^{2}\), with \(\hat{\Gamma}=\text{diag}(\hat{\mathbb{1}}_{\Lambda\times\Lambda},-\hat{\mathbb{1}}_{\Lambda\times\Lambda})\) and \(\hat{V}=\text{diag}(\hat{v}^{0},\,\hat{v}^{0})\). The velocity operator along the \(\mu\)-direction is \(\hat{v}^{0}_{\mu}=\frac{\partial\hat{h}^{0}}{\partial k_{\mu}}\) where \(\hat{h}^{0}=\begin{bmatrix}0&\hat{h}_{AB}\\ \hat{h}^{\dagger}_{AB}&0\end{bmatrix}\).
To avoid confusion due to multiple indices, we introduce here the notation \(|\Psi^{>}_{m}\rangle=(|a^{>}_{m}\rangle,|b^{>}_{m}\rangle,|\bar{a}^{>}_{m} \rangle,|\bar{b}^{>}_{m}\rangle)^{t}\) for the eigenstates with positive energy \(E^{>}_{m}\), similarly \(|\Psi^{<}_{n}\rangle=(|a^{<}_{n}\rangle,|b^{<}_{n}\rangle,|\bar{a}^{<}_{n} \rangle,|\bar{b}^{<}_{n}\rangle)^{t}\) for those with negative energy \(E^{<}_{n}\). Thus, we ignore whether these states belong to \(\mathcal{S}^{\pm}\). The eigenstates for \(\nu=\bar{\nu}\) are specified by simply replacing \(n\to 0n\) and \(m\to 0m\).
Assuming the eigenstates known for \(\nu=\bar{\nu}\), \(D^{s}_{\mu}\) in partially filled FBs is obtained using the pseudospin SU(2) symmetry of the Hamiltonian. Recall that the quasi-particle eigenvalues are invariant under the pseudospin rotation. From Eq.(20) the rotated eigenstates are \(|\Psi^{<}_{n}\rangle=\hat{U}_{\theta}|\Psi^{<}_{0n}\rangle\) (similarly \(|\Psi^{>}_{m}\rangle=\hat{U}_{\theta}|\Psi^{>}_{0m}\rangle\)) where,
\[\hat{U}_{\theta}=\begin{bmatrix}c&0&s&0\\ 0&c&0&-s\\ -s&0&c&0\\ 0&s&0&c\end{bmatrix}, \tag{30}\]
with \(c=\cos(\theta/2)\) and \(s=\sin(\theta/2)\). The matrix elements in Eq.(29) are given by,
\[\langle\Psi^{<}_{n}|\hat{\Gamma}^{p}\hat{V}_{\mu}|\Psi^{>}_{m} \rangle=\langle\Psi^{<}_{0n}|\hat{U}_{-\theta}\hat{\Gamma}^{p}\hat{V}_{\mu} \hat{U}_{\theta}|\Psi^{>}_{0m}\rangle, \tag{31}\]
where \(p=0\) or \(1\). \(|\bar{a}^{<}_{0n}\rangle=\epsilon_{n}|a^{<}_{0n}\rangle\) and \(|\bar{b}^{<}_{0n}\rangle=-\epsilon_{n}|b^{<}_{0n}\rangle\) where \(\epsilon_{n}=1\) (respectively \(-1\)) if \(|\Psi^{<}_{0n}\rangle\in\mathcal{S}^{+}\) (respectively \(\in\mathcal{S}^{-}\)). We proceed similarly with \(|\Psi^{>}_{0m}\rangle\) and get,
\[J^{nm}_{\mu}=|C^{<,>}_{0nm}|^{2}g_{nm}, \tag{32}\]
where \(g_{nm}=((1-\epsilon_{n}\epsilon_{m})c+(\epsilon_{n}+\epsilon_{m})s)^{2}-(1+ \epsilon_{n}\epsilon_{m})^{2}\) and \(C^{<,>}_{0nm}=\langle a^{<}_{0n}|\partial_{\mu}\hat{h}_{AB}|b^{>}_{0m} \rangle+\langle b^{<}_{0n}|\partial_{\mu}\hat{h}_{AB}^{\dagger}|a^{>}_{0m}\rangle\). Eq.(32) can be simplified and gives,
\[J^{nm}_{\mu}=-4\epsilon_{n}\epsilon_{m}|C^{<,>}_{0nm}|^{2}\cos ^{2}(\theta). \tag{33}\]
Using Eq.(24), we then end up with,
\[D^{s}_{\mu}(\nu)=f^{2}(\nu)D^{s}_{\mu}(\bar{\nu}). \tag{34}\]
In partially filled FBs, \(D^{s}_{\mu}\) always has a universal parabolic shape and vanishes for \(\nu=\nu_{min}\) and \(\nu_{max}\). To derive Eq.(7), one needs Eq.(3). Note that Eq.(33) indicates that the contributions to \(D^{s}_{\mu}(\nu)\) originating from pairs of eigenstates in the same subspace \(\mathcal{S}^{+}\) or \(\mathcal{S}^{-}\) are positive, while they are negative in the other case.
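As a quick numerical illustration of the filling dependence in Eq.(25) and Eq.(34), the following minimal sketch uses the \(\Lambda_{A}=3\), \(\Lambda_{B}=5\) lattice of the Supplemental Material; the half-filling values \(\bar{\Delta}\) and \(D^{s}_{\mu}(\bar{\nu})\) are placeholders, not computed results.

```python
import numpy as np

# Filling dependence of pairings and superfluid weight, Eqs.(25) and (34),
# illustrated for a bipartite lattice with Lambda_A = 3 and Lambda_B = 5.
lam_A, lam_B = 3, 5
nu_min, nu_max = 2 * lam_A, 2 * lam_B            # empty / fully filled flat bands
nu = np.linspace(nu_min, nu_max, 9)
f = 2.0 / (nu_max - nu_min) * np.sqrt((nu - nu_min) * (nu_max - nu))

delta_bar, Ds_bar = 0.3, 0.5                      # placeholder half-filling values
delta_nu = delta_bar * f                          # Delta(nu)  = Delta_bar * f(nu)
Ds_nu = Ds_bar * f ** 2                           # D^s(nu)    = D^s(nu_bar) * f(nu)^2
# f vanishes at nu_min and nu_max and equals 1 at half filling nu_bar = 8
```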
_Effects of disorder.-_ We have previously considered clean systems. An interesting question is: What is the impact of disorder that preserves the bipartite character of the lattice such as random hoppings or introduction of vacancies? Translation invariance being broken, \(\hat{H}_{BdG}\) must be diagonalized in real space. The number of zero energy eigenstates is \(\mathcal{N}_{E=0}=|\mathcal{N}_{\mathcal{B}}-\mathcal{N}_{\mathcal{A}}|\) where \(\mathcal{N}_{\lambda}\) is the total number of orbitals \(\lambda=\mathcal{A},\mathcal{B}\). In the clean case, our proofs are based on Lemma 1 and Lemma 2, which remain valid in the single cell made up of \(\mathcal{N}_{\mathcal{A}}\) A-orbitals and \(\mathcal{N}_{\mathcal{B}}\) B-orbitals. Thus, in the disordered half-filled BL, Eq.(3) becomes,
\[\sum_{j=1}^{\mathcal{N}_{\mathcal{B}}}\Delta^{B}_{j}-\sum_{i=1}^{ \mathcal{N}_{\mathcal{A}}}\Delta^{A}_{i}=\frac{|U|}{2}|\mathcal{N}_{ \mathcal{B}}-\mathcal{N}_{\mathcal{A}}|, \tag{35}\]
where \(i\) (respectively \(j\)) runs now over the whole sublattice \(\mathcal{A}\) (respectively \(\mathcal{B}\)). In addition, Eq.(25) and Eq.(7) which give the filling dependence of the pairings and the SFW are valid as well.
Notice that the BCS wavefunction is the exact ground state in BLs hosting isolated FBs when \(|U|\) is smaller than the gap [24], which implies the exactness of our results in this limit as well. Thus, it would be of great interest to confirm this statement with exact methods such as DMRG, a reliable and well suited tool for quasi one-dimensional systems.
To conclude, using a hidden symmetry of the BdG eigenstates, we have rigorously demonstrated that in bipartite lattices the pairings and the SFW obey universal relations. Furthermore, these general properties are shown to hold in disordered systems as long as the bipartite character of the lattice is conserved. Our findings could have an important impact on the search for novel families of compounds exhibiting unconventional FB superconductivity.
## References
* (1) Y. Cao, V. Fatemi, S. Fang, K. Watanabe, et al., Nature (London) **556**, 43 (2018).
* (2) D. Leykam, A. Andreanov and S. Flach, Advances in Physics: X, 3:1 (2018).
* (3) L. Tang, D. Song, S. Xia, et al., Nanophotonics, **9**, no. 5, 1161 (2020).
* (4) N. Regnault, Y. Xu, MR. Li, et al., Nature **603**, 824 (2022).
* (5) I. Hase, Y. Higashi, H. Eisaki et al., Sci Rep **13**, 4743 (2023).
* (6) V. A. Khodel' and V. R. Shaginyan, Pis'ma Zh. Eksp. Teor. Fiz. **51**, 488 (1990), Sov. J. Exp. Theor. Phys. Lett. **51**, 553 (1990).
* (7) N. B. Kopnin, T. T. Heikkila, and G. E. Volovik, Phys. Rev. B **83**, 220503(R) (2011).
* (8) S. Peotta and P. Torma, Nat. Commun. **6**, 8944 (2015).
* (9) S. Chan, B. Gremaud, and G. Batrouni, Phys. Rev. B. **105**, 024502 (2022) ; ibid Phys. Rev. B. **106**, 104514 (2022).
* (10) Y.R. Wu, X.F. Zhang, C.F. Liu, et al. Sci. Rep. **11**, 13572 (2021).
* (11) S. Chan, B. Gremaud, and G. Batrouni, Phys. Rev. B. **105**, 024502 (2022) ; ibid Phys. Rev. B. **106**, 104514 (2022).
* (12) The Supplemental Material illustrates the sum-rules and other properties in a specific bipartite lattice.
* (13) E. Lieb, M. Loss, and R. McCann,J. of Math. Phys. **34**, 891 (1993).
* (14) K.-E. Huhtinen, J. Herzog-Arbeitman, A. Chew, B. A. Bernevig, and P. Torma, Phys. Rev. B **106**, 014518 (2022).
* (15) M. Thumin and G. Bouzerar, Phys. Rev. B, **107**, 214508 (2023).
* (16) C. N. Yang, Phys. Rev. Lett. **63**, 2144 (1989)
* (17) S. Zhang, Phys. Rev. Lett. **65**, 120 (1990).
* (18) M. Mermele, Phys. Rev. B **76**, 035125 (2007).
* (19) B. S. Shastry and B. Sutherland, Phys. Rev. Lett. **65**,243 (1990).
* (20) D. J. Scalapino, S. R. White, and S. C. Zhang, Phys. Rev. Lett. **68**, 2830 (1992); Phys. Rev. B **47**, 7995 (1993).
* (21) J. P. Provost and G. Vallee, Comm. Math. Phys. **76**, 289 (1980).
* (22) D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. **82**, 1959 (2010).
* (23) S. Peotta and P. Torma, Nature Communications,**6**, 8944, (2015).
* (24) Y.-R. Wu, X.-F. Zhang, C.-F. Liu, et al., Scientific Reports, **11**, 13572 (2021).
* (25) A. Julku, S. Peotta, T. Vanhala, D. Kim, and P. Torma, Phys. Rev. Lett. **117**, 045303 (2016).
## I Supplemental Material
The purpose of this supplemental material is to illustrate the sum-rules and other relations demonstrated in the general context of bipartite lattices (BLs) where flat bands (FBs) are either half-filled or partially filled. The prototype two-dimensional BL considered here, which we designate the \(\mathcal{L}\)-lattice, is shown in Fig.1. The \(\mathcal{L}\)-lattice consists of two sublattices \(\mathcal{A}\) and \(\mathcal{B}\), which contain respectively \(\Lambda_{A}=3\) and \(\Lambda_{B}=5\) orbitals per unit cell, where \(\mathcal{A}=\{A_{1},A_{2},A_{3}\}\) and \(\mathcal{B}=\{B_{1},B_{2},...,B_{5}\}\). In the absence of electron-electron interaction, the one-particle spectrum consists of exactly \(N_{fb}=\Lambda_{B}-\Lambda_{A}=2\) FBs with energy \(E_{fb}=0\) and \(2\Lambda_{A}=6\) symmetric dispersive bands (chirality).
## II Symmetry in the \(H_{BdG}\) eigenstates
It has been shown in the main text, that eigenstates of the Bogoliubov de Gennes Hamiltonian \(H_{BdG}\) as given in Eq.(2) in the article, can be divided into two subsets \(\mathcal{S}_{+}\) and \(\mathcal{S}_{-}\) which are defined in what follows. Consider a normalized eigenstate of \(H_{BdG}\), \(|\Psi\rangle=(|u\rangle,|v\rangle)^{t}\) of energy \(E\), where \(|u\rangle=(|{\bf a}\rangle,|{\bf b}\rangle)^{t}\) and \(|v\rangle=(|\bar{\bf a}\rangle,|\bar{\bf b}\rangle)^{t}\). The columns \(|{\bf a}\rangle\) and \(|\bar{\bf a}\rangle\) (respectively \(|{\bf b}\rangle\) and \(|\bar{\bf b}\rangle\)) are of length \(\Lambda_{A}\) (respectively \(\Lambda_{B}\)),
\[|\Psi\rangle\in\mathcal{S}_{+}\Leftrightarrow|\bar{\bf a}\rangle=|{ \bf a}\rangle,|\bar{\bf b}\rangle=-|{\bf b}\rangle, \tag{1}\] \[|\Psi\rangle\in\mathcal{S}_{-}\Leftrightarrow|\bar{\bf a}\rangle=-|{ \bf a}\rangle,|\bar{\bf b}\rangle=|{\bf b}\rangle. \tag{2}\]
For a given value of the electron-electron interaction \(|U|\) (here we have chosen \(|U|=3\)), Fig.2 depicts the QP dispersions with negative energy in the half-filled \(\mathcal{L}\)-lattice along the \(\Gamma M\) direction in the Brillouin zone. Unambiguously, for any value of the momentum \({\bf k}\), the spectrum consists of \(\Lambda_{A}=3\) eigenstates in \(\mathcal{S}_{-}\) and \(\Lambda_{B}=5\) eigenstates in \(\mathcal{S}_{+}\).

Figure 1: (Color online) Prototype of two-dimensional bipartite lattice (\(\mathcal{L}\)), with \(\Lambda_{A}=3\) atoms of type A and \(\Lambda_{B}=5\) atoms of type B per unit cell (shaded area). The hoppings are restricted to nearest neighbors only, they are all equal, set to 1. The single particle Hamiltonian has two degenerate flat bands.
### Sum rule for the pairings in half-filled bipartite lattices
In the main text of the article, it has been rigorously proved that in any half-filled bipartite lattice the pairings obey the following sum-rule,
\[\sum_{j=1}^{\Lambda_{B}}\Delta_{j}^{B}-\sum_{i=1}^{\Lambda_{A}}\Delta_{i}^{A}= \frac{|U|}{2}(\Lambda_{B}-\Lambda_{A}). \tag{3}\]
As an illustration, in Fig.3, the pairings for each orbital are plotted as a function of \(|U|\) in the half-filled \(\mathcal{L}\)-lattice. For obvious symmetry reasons (see Fig.1), one finds that \(\Delta_{1}^{B}=\Delta_{2}^{B}\) and \(\Delta_{3}^{B}=\Delta_{4}^{B}\). As can be clearly seen, for any \(|U|\), Eq.(3) is exactly fulfilled.
Let us define the average value of the pairing on each sublattice by \(\langle\Delta_{\lambda}\rangle\), where \(\lambda=A,B\). For any \(|U|\), we have shown in the main text that,
\[\langle\Delta_{B}\rangle\geq\langle\Delta_{A}\rangle, \tag{4}\]
and found as well a lower bound for the average value of the pairings on \(\mathcal{B}\)-sublattice,
\[\frac{\langle\Delta_{B}\rangle}{|U|}\geq\frac{r-1}{2r}, \tag{5}\]
where \(r=\Lambda_{B}/\Lambda_{A}\) has been introduced.
In the case of the half-filled \(\mathcal{L}\)-lattice, Fig.3 clearly shows that \(\langle\Delta_{B}\rangle\geq\langle\Delta_{A}\rangle\) for any \(|U|\). According to Eq.(5), in the present case, one expects that \(\langle\Delta_{B}\rangle\geq 0.2\,|U|\). This is indeed in perfect agreement with the results depicted in Fig.3. More precisely, the lower bound is found to coincide exactly with \(\frac{\partial\langle\Delta_{B}\rangle}{\partial|U|}\Big{|}_{U=0}\).
### The superfluid weight in partially filled FBs
In the main text we have proved a general relationship between the superfluid weight (SFW) \(D_{\mu}^{s}\) in partially filled FBs and that of the half-filled lattice. The SFW is defined as [2; 3],
\[D_{\mu}^{s}=\frac{1}{N_{c}}\frac{\partial^{2}\Omega(\mathbf{q})}{\partial q_{ \mu}^{2}}\Big{|}_{\mathbf{q}=\mathbf{0}}, \tag{6}\]
\(\Omega(\mathbf{q})\) is the grand-potential and \(q\) mimics the effect of a vector potential, introduced by a standard Peierls substitution in the hopping terms in the BdG Hamiltonian.
First, we have carefully checked in the case of the \(\mathcal{L}\)-lattice that (i) the corrections to Eq.(6) as discussed in Ref.[1] are vanishing and (ii) the quantum metric associated to the FBs is minimal for the geometry depicted in Fig.1. Using the pseudo-spin SU(2) symmetry of the Hamiltonian for \(\mu=-|U|/2\)[4; 5; 6], it has been shown in the main text that,
\[D_{\mu}^{s}(\nu)=f^{2}(\nu)D_{\mu}^{s}(\bar{\nu}), \tag{7}\]
where the filling dependent function is,
\[f(\nu)=\frac{2}{\nu_{max}-\nu_{min}}\sqrt{(\nu-\nu_{min})(\nu_{max}-\nu)}. \tag{8}\]
Figure 3: (Color online) Pairings (divided by \(|U|\)) as a function of \(|U|\) for the half-filled BL depicted in Fig.1. The open circles (respectively squares) are the average values of the pairings on sublattice \(\mathcal{A}\) (respectively \(\mathcal{B}\)). The horizontal black line corresponds to \(f_{AB}/|U|\) where \(f_{AB}=\frac{1}{2}(\sum_{j=1}^{\Lambda_{B}}\Delta_{j}^{B}-\sum_{i=1}^{\Lambda_ {A}}\Delta_{i}^{A})\).
Figure 2: (Color online) Negative part of the quasiparticle dispersions in the \((1,1)\)-direction for the half-filled BL \(\mathcal{L}\) as depicted in Fig.1. The green (respectively blue) line corresponds to QP eigenstates in \(\mathcal{S}_{+}\) (respectively \(\mathcal{S}_{-}\)). There are \(\Lambda_{A}=3\) bands in \(\mathcal{S}_{-}\) and \(\Lambda_{B}=5\) in \(\mathcal{S}_{+}\). Here, the on-site interaction parameter \(|U|=3\), the conclusion is the same for any \(|U|\).
Thus, the SFW for partially filled FBs always has a universal parabolic shape and \(D^{s}_{\mu}(\nu)\) vanishes for \(\nu=\nu_{min}\) and \(\nu=\nu_{max}\). These fillings correspond respectively to empty FBs for which \(\nu=\nu_{min}=2\Lambda_{A}=6\) and fully filled FBs where \(\nu=\nu_{max}=2\Lambda_{B}=10\). As an illustration, Fig.4 depicts \(D^{s}_{\varkappa}(\nu)\) as a function of \(\nu\) in the \(\mathcal{L}\)-lattice. As it is clearly seen, the agreement between the numerical data and the analytical expression given in Eq.(7) is excellent.
### The impact of disorder: the case of randomly distributed vacancies
In our manuscript, it has been shown that in the case of disorder that conserves the bipartite character of the lattice, the sum-rules and other relations established in the case of clean systems still hold. Here, our purpose is to illustrate this feature. We consider the impact of vacancies randomly distributed in the \(\mathcal{L}\)-lattice. In the case of a disordered half-filled system, it has been argued in the main text that Eq.(3) becomes,
\[\sum_{j=1}^{\mathcal{N}_{\mathcal{B}}}\Delta^{B}_{j}-\sum_{i=1}^{\mathcal{N}_ {\mathcal{A}}}\Delta^{A}_{i}=\frac{|U|}{2}|\mathcal{N}_{\mathcal{B}}-\mathcal{ N}_{\mathcal{A}}|, \tag{9}\]
where \(i\) (respectively \(j\)) runs now over the whole sublattice \(\mathcal{A}\) (respectively \(\mathcal{B}\)), and \(\mathcal{N}_{\mathcal{A}}\) (resp. \(\mathcal{N}_{\mathcal{B}}\)) are the total number of A-orbitals (respectively B-orbitals) in the disordered lattice.
Because of the loss of translation invariance, the calculations require multiple real space diagonalizations of the BdG Hamiltonian, until convergence in the self-consistent loop is reached. The size of the matrices is \(2\mathcal{N}\times 2\mathcal{N}\) where \(\mathcal{N}=\mathcal{N}_{\mathcal{A}}+\mathcal{N}_{\mathcal{B}}\). For our illustration, we have considered a system that contains about \(3200\) orbitals. In Fig.5, the pairing distribution in the disordered half-filled \(\mathcal{L}\)-lattice is depicted. The configuration of disorder corresponds to the introduction of \(5\%\) of vacancies randomly distributed. We have checked that Eq.(9) is exactly verified, as well as the relation \(\langle\Delta_{B}\rangle\geq\langle\Delta_{A}\rangle\), which could be anticipated from the plot of the pairing distributions. Additionally, we have checked that Eq.(5), \(\frac{\langle\Delta_{B}\rangle}{|U|}\geq\frac{1}{2}(1-\frac{1}{r})\), is fulfilled as well, where in the disordered lattice \(r=\frac{\mathcal{N}_{\mathcal{B}}}{\mathcal{N}_{\mathcal{A}}}\).
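The real-space self-consistent procedure described above can be sketched as follows. This is a minimal illustration, not the code used for the paper: the Hartree shift is assumed absorbed into the chemical potential, \(\mu=0\) stands for half filling, and the small bipartite cluster at the end is purely illustrative.

```python
import numpy as np

def bdg_pairings(h0, U, mu=0.0, n_iter=500, tol=1e-8):
    """T = 0 self-consistent BdG loop for the attractive Hubbard model.

    h0 : (N, N) real symmetric hopping matrix of the (possibly disordered) cluster
    U  : attraction strength |U| > 0 (Hartree shift assumed absorbed into mu)
    Returns the local pairings Delta_i, i = 1..N.
    """
    N = h0.shape[0]
    delta = 0.1 * np.ones(N)                                  # initial guess
    for _ in range(n_iter):
        hbdg = np.block([[h0 - mu * np.eye(N), np.diag(delta)],
                         [np.diag(delta), -(h0 - mu * np.eye(N))]])
        E, W = np.linalg.eigh(hbdg)
        u, v = W[:N, :], W[N:, :]                             # quasi-particle amplitudes
        pos = E > 1e-12                                       # only positive energies at T = 0
        new_delta = U * np.sum(u[:, pos] * v[:, pos], axis=1)
        if np.max(np.abs(new_delta - delta)) < tol:
            return new_delta
        delta = new_delta
    return delta

# Toy bipartite cluster: one A orbital (index 0) coupled to three B orbitals.
h0 = np.zeros((4, 4))
h0[0, 1:] = h0[1:, 0] = -1.0
delta = bdg_pairings(h0, U=3.0)
lhs = delta[1:].sum() - delta[0]          # sum_B Delta_B - sum_A Delta_A
# Eq.(9) predicts lhs = |U|/2 * (N_B - N_A) = 3.0 for this cluster
```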
|
2305.02559 | Madvex: Instrumentation-based Adversarial Attacks on Machine Learning
Malware Detection | WebAssembly (Wasm) is a low-level binary format for web applications, which
has found widespread adoption due to its improved performance and compatibility
with existing software. However, the popularity of Wasm has also led to its
exploitation for malicious purposes, such as cryptojacking, where malicious
actors use a victim's computing resources to mine cryptocurrencies without
their consent. To counteract this threat, machine learning-based detection
methods aiming to identify cryptojacking activities within Wasm code have
emerged. It is well-known that neural networks are susceptible to adversarial
attacks, where inputs to a classifier are perturbed with minimal changes that
result in a crass misclassification. While applying changes in image
classification is easy, manipulating binaries in an automated fashion to evade
malware classification without changing functionality is non-trivial. In this
work, we propose a new approach to include adversarial examples in the code
section of binaries via instrumentation. The introduced gadgets allow for the
inclusion of arbitrary bytes, enabling efficient adversarial attacks that
reliably bypass state-of-the-art machine learning classifiers such as the
CNN-based Minos recently proposed at NDSS 2021. We analyze the cost and
reliability of instrumentation-based adversarial example generation and show
that the approach works reliably at minimal size and performance overheads. | Nils Loose, Felix Mächtle, Claudius Pott, Volodymyr Bezsmertnyi, Thomas Eisenbarth | 2023-05-04T05:25:33Z | http://arxiv.org/abs/2305.02559v2 | # Madvex: Instrumentation-based Adversarial Attacks on Machine Learning Malware Detection
###### Abstract
WebAssembly (Wasm) is a low-level binary format for web applications, which has found widespread adoption due to its improved performance and compatibility with existing software. However, the popularity of Wasm has also led to its exploitation for malicious purposes, such as cryptojacking, where malicious actors use a victim's computing resources to mine cryptocurrencies without their consent. To counteract this threat, machine learning-based detection methods aiming to identify cryptojacking activities within Wasm code have emerged. It is well-known that neural networks are susceptible to adversarial attacks, where inputs to a classifier are perturbed with minimal changes that result in a crass misclassification. While applying changes in image classification is easy, manipulating binaries in an automated fashion to evade malware classification without changing functionality is non-trivial. In this work, we propose a new approach to include adversarial examples in the code section of binaries via instrumentation. The introduced gadgets allow for the inclusion of arbitrary bytes, enabling efficient adversarial attacks that reliably bypass state-of-the-art machine learning classifiers such as the CNN-based Minos recently proposed at NDSS 2021. We analyze the cost and reliability of instrumentation-based adversarial example generation and show that the approach works reliably at minimal size and performance overheads.
Keywords:Malware Detection Adversarial Attack Binary Instrumentation Minos Cryptojacking
## 1 Introduction
With the introduction of WebAssembly (Wasm) in 2017, web applications are able to utilize a system's CPUs with near-native efficiency [1]. Wasm allows developers to make computationally heavy applications available in-browser and has since been used for games, text processing, visualizations, and media players [14, 21]. On the downside, malicious parties have also utilized Wasm to distribute malicious binaries to victims that visit an infected website and thus gain access to the victim's resources without having to gain access to their system. In particular, the near-native performance of Wasm and the support provided by
all major browsers make WebAssembly a prime target for cryptojacking attacks [14, 21, 35]. In-browser cryptojacking or drive-by cryptocurrency mining allows an attacker to utilize their victim's computational resources for mining cryptocurrencies without their knowledge or consent, thus profiting from the returns without having to pay for the spent energy. To address this issue, various methods have been proposed to protect against cryptojacking attacks. However, while fast, traditional static approaches like blacklisting malicious hosts or matching signatures are easily bypassed [31]. Dynamic detection systems [17, 15, 30], on the other hand, rely on more sophisticated metrics that cause a runtime overhead and require the malicious binary to be executed. Minos, a lightweight machine learning-based detection system, provides a promising solution to this problem [23]. By transforming Wasm binaries to grey-scale images, Minos can utilize a convolutional neural network (CNN) for the classification of binaries. This provides a rapid and effective approach that can be applied prior to executing the binaries, thereby offering efficient protection against in-browser cryptojacking attacks. While promising, CNNs are known to be susceptible to adversarial attacks [39]. Malicious parties looking to distribute their malware have a high incentive to evaluate possible avenues for bypassing detection frameworks. In particular, the development of more sophisticated evasion techniques by attackers could render existing detection methods ineffective. Adversarial examples are usually crafted under the assumption that small changes to the input are neglectable. However, applying adversarial examples to binaries that follow strict syntactical and semantical rules requires specific placement of adversarial payloads without invalidating the binary or changing the semantics. Still, attacks leveraging adversarial examples to bypass visualization-based malware detectors have been proven to succeed on Windows Portable Executables [20, 16, 28].
In this paper, we evaluate the feasibility of utilizing adversarial examples against the Wasm-based classifier Minos[23] presented at NDSS 2021. We demonstrate the feasibility of inserting semantic-preserving gadgets using binary instrumentation into the code section of WebAssembly applications, allowing effective crafting of adversarial examples inside the gadget, thus enabling the evasion of the Minos detection system. In contrast to existing work, we add the adversarial payload directly into the application's control flow and introduce both size-efficient (SE) and optimization-resistant (OR) gadgets. Our findings shed light on the potential weaknesses of machine learning-based classifiers in detecting cryptojacking and highlight the need for ongoing efforts to improve their robustness and security, particularly when classifiers are applied in scenarios with incentives to evade classification. To summarize, our key contributions are:
* Comprehensive collection of malign Wasm samples from the _Cisco Umbrella 1 Million_ websites list.
* A novel approach for automatically crafting adversarial examples in code by introducing semantic-preserving instruction gadgets via instrumentation.
* Demonstrating a grey-box adversarial attack against the Minos classifier by training a substitute model and applying our gadgets.
* A comprehensive evaluation of the efficacy and costs of the attack.
## 2 Background
### WebAssembly
WebAssembly (Wasm) [1] is a binary instruction format for a stack-based virtual machine that enables high-performance applications that run seamlessly in web browsers. It is designed to provide near-native performance to web applications and allows developers to write applications in various programming languages, including C, C++, and Rust, while still being executed in the browser. Wasm is supported by all major web browsers and has gained significant traction in recent years, particularly in resource-intensive applications, where the performance benefits provided by Wasm are especially important. In most settings, Wasm is integrated into the JavaScript code of a website, from where the Wasm modules are loaded, and the respective functions are called. Its stack-based architecture, widespread support, and versatility make it an essential tool for modern web development.
### Cryptojacking Malware
_Cryptocurrency mining_ is the process of solving complex mathematical problems in order to validate transactions and add new blocks to a blockchain network [22]. The process requires a significant amount of computational power and energy. As compensation for the computation time, miners are rewarded with new units of the respective cryptocurrency. This reward mechanism is a key component of the decentralized nature of many cryptocurrencies, as it incentivizes individuals and organizations to participate in the network and maintain its security. However, as the difficulty of mining increases and the competition among miners grows, the margin between the resources spent on mining and the returned profits diminishes. If a malicious actor manages to utilize a victim's resources for mining, the computational cost is removed from the equation. In general, the unauthorized use of a device's computing power to mine cryptocurrencies, typically without the knowledge or consent of the device's owner, is referred to as _cryptojacking_. This type of attack can occur via host- or browser-based mining and can have significant impacts on both individual users and organizations. Host-based cryptojacking requires the installation of a cryptocurrency miner on the victim's machine through, i.e., malicious software installed by the victim [35]. Browser-based cryptojacking is a method of exploiting a victim's device through a malicious website. The attacker inserts a script into the website's code that runs in the victims' browser upon visiting the site and uses their device's processing power to mine cryptocurrency while profiting the owner of the operation. With the introduction of WebAssembly and its near-native speed, the efficiency of browser-based mining has significantly increased, making the attack lucrative. Unlike traditional malware, browser-based cryptojacking does not require the victims to download any files, making it subtle and difficult to prevent.
### Malware Detection
Identifying whether a binary contains malicious functionality is an active area of research across different types of binaries. Various approaches have been proposed for detecting _cryptojacking_, one of the primary malicious usages of Wasm binaries [21]. Due to the reliance of cryptojacking malware on network communication, network-based detection systems have been proposed, analysing the network traffic [32]. Host-based detection frameworks rely, in general, on either static or dynamic analysis to identify malware. Dynamic approaches observe the execution of a binary while monitoring key metrics such as memory consumption [25], the number of executed arithmetic operations [37], or through CPU profiling [17]. Prevention techniques that identify malware based on resource consumption can be circumvented through throttling [12]. Additionally, a number of machine learning classifiers have been proposed that require dynamic features such as API calls and resource information [30] or runtime information such as the number of web sockets or workers [15]. In order to generate dynamic features, the potentially malicious binary needs to be executed on the host's machine. Static approaches, on the other hand, do not require the evaluated code to be executed; instead, the binary is directly evaluated, for example, by matching known signatures or URL blacklisting [12]. However, these techniques can be circumvented using obfuscation [31]. MinerRay [31] relies on the static detection of hash semantics to make obfuscation-based prevention harder as the semantics of the functions are evaluated.
In general, efficiently detecting whether a WebAssembly binary utilizes the host's resources for mining cryptocurrencies without relying on dynamic features allows a detection framework to warn the user that a malicious binary is loaded before the execution of the binary. Naseem _et al._ developed Minos[23], a lightweight real-time detection system that aims to efficiently detect whether a WebAssembly binary utilizes the host's resources for cryptomining using a CNN. Minos is designed to be implemented as a browser plug-in which uses the detection framework to warn users about any detected cryptomining binaries before they are executed. Upon visiting a website that loads a Wasm binary, the detection framework transforms the bytes contained inside the binary into a two-dimensional grey-scale image which is then evaluated by a pre-trained CNN. This architecture allows the system to classify a binary, on average, in \(25.9\ ms\) while achieving an overall accuracy of \(98.97\%\) against an in-the-wild dataset [23].
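For illustration, the byte-to-image preprocessing described above can be sketched as follows; this is a minimal version, and the exact reshaping and interpolation used by Minos may differ.

```python
import numpy as np
from PIL import Image

def wasm_to_image(wasm_bytes: bytes, size: int = 100) -> np.ndarray:
    """Turn a Wasm binary into a size x size grey-scale image (one byte = one pixel before resizing)."""
    n = int(len(wasm_bytes) ** 0.5)                              # largest square that fits
    arr = np.frombuffer(wasm_bytes[: n * n], dtype=np.uint8).copy().reshape(n, n)
    img = Image.fromarray(arr, mode="L").resize((size, size))    # downscale to the CNN input size
    return np.asarray(img, dtype=np.float32) / 255.0             # normalised grey-scale input
```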
### Adversarial Attacks
Deep neural networks, along with other machine learning models, have been discovered to be susceptible to adversarial attacks on their input data [34, 4]. Given a target model \(\theta\), an input \(x\) and a target class \(t\neq\theta(x)\), an adversary's objective is to find a minimal perturbation \(\delta_{x}\) under a norm \(\mathcal{N}=||\ \cdot\ ||\) s.t.
\[\theta(x+\delta_{x})=t \tag{1}\]
Minimizing the perturbation vector \(\delta_{x}\) under a norm \(\mathcal{N}\) ensures that the original input \(x\) and the newly generated input, or _adversarial example_, \(x^{*}=x+\delta_{x}\) are close to each other under a given distance metric \(\mathcal{D}\). However, finding a perturbation \(\delta_{x}\) that satisfies Equation 1 is generally a hard problem due to the nonlinearity of the evaluated model \(\theta\)[34]. Existing methods for crafting an adversarial example, such as the L-BFGS, solve the problem using approximations [39]. Carlini and Wagner (C&W) proposed a different approach by transforming the constraint shown in Equation 1 into an optimization problem using an appropriately chosen objective function \(\mathcal{L}\), s.t. if \(\theta(x+\delta_{x})=t\) is satisfied, \(\mathcal{L}(x^{*})\leq 0\) holds [8]. By moving the constraint into the minimization term, the problem of finding an adversarial example is an optimization task that minimizes \(\mathcal{N}(\delta_{x})+\epsilon\cdot\mathcal{L}(x^{*})\) such that \(x^{*}\in[0,1]^{n}\) where \(\epsilon>0\) is a suitably chosen constant. The optimization problem is solved using gradient-based optimization methods [5]. The gradient of the objective function with respect to the input \(x\) is used to update the perturbation \(\delta_{x}\) in each iteration of the optimization process. The process is repeated until the minimum perturbation, which results in the adversarial example being classified as the target class t, is found. Without access to the gradients of the target model \(\theta\), the aforementioned attack cannot be utilized. However, given query access, the adversary can train a local substitute network [27] by querying the target classifier with synthesized or otherwise gathered data. Using the results obtained through inference against the target network as labels, the local model is trained. Due to the _transferability_ between models, it is possible to train a machine learning model that mimics the behaviour of a target model [13]. In a black-box scenario [27], a network with unknown architecture is attacked, requiring a custom architecture for the local substitute network. In the grey-box scenario, additional information about the target network, such as parameters or its architecture, is known, and hence the substitution network architecture can be chosen similarly to the target model. The local model can then be utilized to generate adversarial examples that are transferable to the target network [27].
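A minimal PyTorch-style sketch of such a targeted, optimization-based attack is given below. It is illustrative only: the hyper-parameters, the margin-based objective and the helper name are assumptions rather than the exact formulation of [8].

```python
import torch

def targeted_attack(model, x, target, steps=200, lr=1e-2, c=1.0):
    """Minimal C&W-style targeted attack sketch.

    model  : classifier returning logits, e.g. a local substitute network
    x      : input batch with values in [0, 1]
    target : index of the class the adversarial example should be assigned to
    """
    delta = torch.zeros_like(x, requires_grad=True)            # perturbation to optimise
    opt = torch.optim.Adam([delta], lr=lr)
    num_classes = model(x).shape[1]
    not_target = torch.arange(num_classes, device=x.device) != target
    for _ in range(steps):
        adv = torch.clamp(x + delta, 0.0, 1.0)                 # keep a valid image
        logits = model(adv)
        # objective L(x*): margin between the best non-target logit and the target logit
        margin = logits[:, not_target].max(dim=1).values - logits[:, target]
        loss = delta.pow(2).sum() + c * torch.clamp(margin, min=0.0).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x + delta.detach(), 0.0, 1.0)
```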
## 3 Madvex: Crafting Functional Adversarial Binaries
The Minos classifier [23] uses an image-based machine learning technique to quickly identify malicious WebAssembly binaries. However, such classifiers are shown to be vulnerable to adversarial attacks [34]. This section describes the attack methodology used to craft binaries that are misclassified by Minos. To illustrate the applicability of such an attack, we limit the adversary and assume a grey-box scenario where the attacker has query access to the model and knowledge of the network's architecture. Although the Minos classifier's architecture was published by Naseem _et al._, the training data and model were not made available. Therefore, we use a Minos classifier trained by Cabrera-Arteaga _et al._[7] as the target of our attack experiments.
### Data Acquisition
The performance of the attack correlates with the quality of the local substitute model trained by the adversary. Therefore a comprehensive dataset of malicious and benign WebAssembly binaries is required to train a suitable substitute network. The original Minos model was trained on a balanced dataset containing 300 samples [23]. The data preparation and training procedure for the substitute model is schematically visualized in Fig. 1 and described below in detail. To obtain benign samples, we used WasmBench1, a WebAssembly dataset containing more than 23.000 real-world binaries published by Hilbig _et al._ as part of an empirical study [14]. We obtained 34 malicious samples from a dataset2 published in the context of Minesweeper [17]. Additionally, we ran a crawler to increase the number of malware samples and gather up-to-date malware. By iterating over the Cisco Umbrella 1 Million list [11], we were able to download 187 WebAssembly binaries. Each domain on this list is visited by the crawler, which resides on any page for three seconds. By hooking a JavaScript function into each document load, we are able to dump any WebAssembly binary before it is executed. Considering that the malware may not reside on the homepage directly, the crawler additionally visits three randomly chosen internal links. Overall 40% of the crawled binaries resided on subdomains and were found either through accessing internal links or redirects. The Minesweeper [17] classifier categorized ten out of the 187 crawled binaries as being malicious. Even after combining the samples of public datasets with the results of our crawling campaign, the number of obtained malicious binaries is considerably lower than that of benign binaries. In order to compensate for this difference and additionally increase the number of samples, we utilize the Wasm-fuzzer wasm-mutate[6] as a diversifier. By utilizing wasm-mutate, one can generate a variety of different WebAssembly binaries that retain the original semantic. Mutation cores available in wasm-mutate enable semantic-preserving transformations. A sample function that performs the addition of two integers and two mutations of the function
are shown in Fig. 2. Each mutation is generated using a different seed, allowing us to generate a larger variety of syntactically different binaries with identical semantics. To generate appropriate adversarial examples, a shadow model that is as similar to the target model as possible must be utilized. To achieve this, the internal labels assigned to the samples are only used for balancing and not used for training. Instead, the pre-trained Minos network [7] is employed for label generation. After augmentation of the malicious samples, we obtain a dataset containing \(2.3\times 10^{4}\) malicious and \(2.3\times 10^{4}\) benign binaries that are used for training the substitute model.
Figure 1: Systematic overview of the training procedure for the substitute model. Malicious (M) samples are augmented to generate a balanced dataset. To generate labels, the target model is queried. The labelled benign (B) and malicious data is used to train the substitute model using 5-fold cross-validation.

### Substitute Network Training
We use the architecture employed by Minos for the substitute model because we assume a known architecture in the grey-box attack. The architecture of the CNN is shown in Fig. 3. Convolutional neural networks typically receive an image as the input for classification. The Minos classifier requires the input to be a grey-scale image of size \(100\times 100\). To allow binaries of varying sizes to be represented as a fixed-dimensional image, the bytes are reshaped into the largest possible two-dimensional array with the same width and height. The remaining bytes are discarded. Initially, each byte of the binary corresponds to one pixel. However, the image is downscaled to a \(100\times 100\) image. A detailed description of the downsampling process is given in Section 3.3. The original model was trained using an 80% training and 20% testing split. However, we use 5-fold cross-validation for training. Hence five models are trained each on 80% of the dataset described in Section 3.1, while 20% of samples are withheld for validation. For the evaluation, Minos was trained with one epoch (M-1) to prevent overfitting, followed by 50 epochs (M-50), the same number as the
target model. The area under the curve (AUC) and loss after the final epochs are reported in Table 1. Even after training the substitute network for only one epoch, the validation AUC reaches 99% with a validation loss of 0.14. After training for 50 epochs, the validation loss decreases to 0.04.
Figure 2: Wasm function performing the addition of two integers (a) and two semantic-preserving mutations (b),(c) of the original function using different seeds in wasm-mutate [6].

Table 1: Substitute network training evaluation after the last epoch for (a) one epoch (M-1) and (b) 50 epochs (M-50).

Figure 3: Architectural overview of the Minos classifier from Naseem _et al._[23]. The CNN contains three convolution layers, three pooling layers, and one fully connected layer. The input image shows a Wasm binary that is transformed into a grey-scale image.

### Attack Methodology
Performing an adversarial attack against an image-based classifier requires slight modifications of the original image to manipulate the generated response in the desired direction. The alterations are often transparent to the naked eye as they result in a small amount of noise added to the original image. However, in the case of binaries, slightly manipulating the value of a pixel, for example, changing a value from \(0x2A\) to \(0x2B\), changes the original instruction from f32.load to f64.load invalidating the binary. We require a procedure that allows us to manipulate certain areas of the binary without changing the behaviour. Using instrumentation, we can add, manipulate or remove instructions from the malware and provide areas inside the code section that can be utilized for the adversarial attack. While we are still unable to manipulate arbitrary pixels, adding specially crafted gadgets into the binary enables specific bytes to be utilized for the adversarial attack. Generating an adversarial example requires iterative manipulation of the target value in small increments. Hence, an area of bytes that are
arbitrarily manipulable is ideal. Each WebAssembly binary is split into several sections, each with a different purpose. As shown in Fig. 3(a), the code section represents, in most cases, the largest section inside both malicious and benign binaries that were analyzed. When separately evaluating the section distribution for malicious and benign binaries (cf. Fig. 3(b) and Fig. 3(c)), it is apparent that in both cases, the code section remains the largest section. The code section contains all functions with their instructions, whereas the data section represents a linear array of memory accessible through instructions in the code section. While an attack against the data section is also possible by extending the size of the linear memory and using this area for crafting the attack, we chose to target the code section as it represents the largest section of the binaries. An overview of our attack methodology is given in Fig. 5. Each step is described in detail below.
Figure 4: Histogram of relative section size (a) for code section, data section and all remaining sections for all binaries as described in Section 3.1. Cumulative density for relative section size for all malicious binaries (b) and benign binaries (c).

Figure 5: Schematic overview of the attack methodology. A malicious binary is instrumented to add the gadgets used for carrying the adversarial payload. After downsampling, the adversarial attack is performed against the substitute model. To recreate the original binary, we upsample the adversarial image and recreate the original binary.

#### Semantic-preserving Gadgets
To enable manipulation inside the code section, we require an instruction that has a number of bytes that are freely choosable. In particular, instructions that load constants onto the stack cause specific values to be present inside the code section. Hence, constructing a gadget that loads an arbitrary constant onto the stack and removes it allows a number of bytes to be arbitrarily chosen. Additionally, it can be inserted anywhere into the control flow because, after the gadget's execution, the stack will be in the same state as before. WebAssembly allows four number types to be pushed onto the stack as constants - 32 and 64 bit variants of integers and floats. We opt to use 64 bit constants, as the ratio between the number of bytes that are available for the adversarial attack and the number of bytes required for the overall gadget is higher. Generally, both integers and floats can work. However, WebAssembly encodes all integers using the _LEB128_ variable-length encoding in either the
signed or unsigned variant. Compared to the encoding utilized for floating point values, _IEEE-754_[2], the integer encoding enforces a number of restrictions on the bytes representing the integer. _IEEE-754_, on the other hand, allows all bytes to assume all possible values. Hence we use 64 bit floating point constants to craft the attack. The f64.const x:f64 instruction can be used to push the 64 bit floating point number x onto the stack. We initialize the constant to \(0x80808080\) to allow both positive and negative perturbations. To ensure that the functionality of the target binary is not modified, the value must be removed from the stack before normal execution resumes. We demonstrate two gadgets that can be inserted after arbitrary instructions, as the execution of the gadget only changes the contents of the stack temporarily. A _size-efficient gadget_ (SE) is shown in Fig. 5(a). After the constant is pushed onto the stack, it is immediately removed again using the drop instruction. Each inserted gadget of this type increases the size of the binary by ten bytes, out of which the adversarial attack can utilize eight bytes (compare Fig. 5(b)). Hence, only 20% of the size overhead is attributed to bytes that cannot be manipulated during the attack phase.
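For concreteness, the byte layout of the SE gadget can be sketched as follows. This is an illustration based on the standard WebAssembly opcode encoding (0x44 encodes f64.const and 0x1A encodes drop), not the authors' instrumentation tooling.

```python
def size_efficient_gadget(payload: bytes) -> bytes:
    """Build the 10-byte SE gadget: `f64.const <payload>` followed by `drop`.

    payload must be exactly 8 bytes; these are the freely choosable bytes used
    by the adversarial optimisation (opcodes: 0x44 = f64.const, 0x1A = drop).
    """
    assert len(payload) == 8
    return b"\x44" + payload + b"\x1a"

# Initial payload of 0x80 bytes, allowing both positive and negative perturbations.
gadget = size_efficient_gadget(bytes([0x80] * 8))
assert len(gadget) == 10          # ten bytes total, eight of them freely manipulable
```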
Due to the low complexity of the size-efficient gadget, it is easy to discern that the two instructions will retain the program's semantics. However, optimizers such as wasm-opt[38] can remove all gadgets of this type from the binary. Note that using an optimizer before classifying the binary is not part of the Minos framework [23] because it would counteract the high efficiency of the detection system. Nevertheless, we are able to craft a gadget that is not removed by wasm-opt, even when using its most aggressive optimization setting. This resilience, however, is only made possible by increasing the gadget's complexity. The composition of our _optimizer-resistant gadget_ (OR) is shown in Fig. 6(c) and the binary representation in Fig. 6(d). The basic idea remains unchanged; we still load a constant onto the stack, thus introducing a value that can be manipulated during the attack phase. However, instead of directly loading the value onto the stack and dropping it, we use it as the increment of a loop counter. Since the value can be an arbitrary float, positive or negative, we divide it by itself to obtain a known value, i.e. one. We then check whether
Figure 5: Schematic overview of the attack methodology. A malicious binary is instrumented to add the gadgets used for carrying the adversarial payload. After downsampling, the adversarial attack is performed against the substitute model. To obtain a functional adversarial binary, we upsample the adversarial image and map the perturbed bytes back into the original binary.
this new value is less than some constant, e.g. 42, which is always true, and then break out of the loop. While it is intuitively understandable that this loop will never be executed more than once, it is not easily determined by an algorithm since loops are difficult to analyze. While this gadget survives optimization passes, only eight out of 32 bytes can be utilized for the adversarial attack. Gadgets are inserted into the code section at randomly drawn insertion points with a predetermined frequency, as sketched below. The relation between the number of inserted gadgets and the success rate of the attack is evaluated in Section 4.1. In Section 4.2, we evaluate the execution speed of both gadgets in relation to the number of gadgets inserted into the binary. Insertion of either gadget into the target binary can be performed once per binary before distribution and requires linear time in the size of the binary, making the instrumentation efficient.
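A minimal sketch of this insertion step is given below. It assumes that the instrumentation tooling already provides the code section split into individual instructions; the function name is hypothetical, and the number of gadgets follows the density convention used in Section 4.1 (roughly \(d\cdot N\) gadgets for \(N\) instructions).

```python
import random

def instrument(instructions: list[bytes], gadget: bytes, density: float,
               seed: int | None = None) -> bytes:
    """Insert roughly density * len(instructions) gadgets at randomly drawn
    instruction boundaries and return the rebuilt code-section bytes."""
    rng = random.Random(seed)
    n_gadgets = round(density * len(instructions))
    points = sorted(rng.randrange(len(instructions) + 1) for _ in range(n_gadgets))
    out, cursor = [], 0
    for p in points:
        out.extend(instructions[cursor:p])  # copy instructions up to the insertion point
        out.append(gadget)
        cursor = p
    out.extend(instructions[cursor:])
    return b"".join(out)
```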
#### 3.2.2 Downsampling
A given binary can be of any size between a few kilobytes and many megabytes. Hence, the authors of Minos[23] downsample each binary into an image of fixed dimensionality, i.e. \(100\times 100\) pixels (Fig. 3). As our shadow model utilizes the same architecture, it also requires an input image of that size. However, as we need to keep track of the positions that allow for a change within the instrumented binary, i.e. the constants within our gadgets, we use a custom downsampling algorithm for crafting the attacks. Yet, at inference time, the original downsampling method is used. At first, we transform the sequence of bytes \(b\) from the binary into a square image with a side length of \(\lfloor\sqrt{|b|}\rfloor\). Hence, a few bytes at the end are discarded. From this square image, we combine as many pixels as needed in order to downsample the image to \(100\times 100\) pixels. For this
Figure 6: Size-efficient (a) and optimizer-resistant (c) gadget and their binary representation (b,d). Bytes that can be manipulated during the adversarial attack are highlighted in blue.
purpose, we calculate the mean of a group of pixels, which then becomes a single pixel. To keep track of which pixels contain a byte that is used for the adversarial attack, we maintain a mask \(M_{1}\). The mask has the same dimensionality as the image and marks all positions that contain editable values. To easily revert the downsampling when restoring the binary, we store the coordinates of the original group of pixels for each downsampled pixel.
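A possible implementation of this custom downsampling is sketched below. The tiling of the square image into groups is an assumption of the example (the description above only fixes the mean pooling), and the sketch assumes the binary holds at least \(100\times 100\) bytes.

```python
import numpy as np

def downsample(b: bytes, editable_offsets: set[int], target: int = 100):
    """Square the byte sequence (discarding trailing bytes), then mean-pool it
    to target x target pixels while recording which pooled pixels cover at
    least one editable (gadget-payload) byte."""
    side = int(np.sqrt(len(b)))
    img = np.frombuffer(b[: side * side], dtype=np.uint8).astype(float).reshape(side, side)
    edit = np.zeros(side * side, dtype=bool)
    edit[[i for i in editable_offsets if i < side * side]] = True
    edit = edit.reshape(side, side)

    bounds = np.linspace(0, side, target + 1).astype(int)  # tile boundaries in the big image
    x = np.zeros((target, target))      # downsampled image
    m1 = np.zeros((target, target))     # mask M1 of editable pooled pixels
    groups = {}                         # pooled pixel -> original tile bounds
    for r in range(target):
        for c in range(target):
            r0, r1, c0, c1 = bounds[r], bounds[r + 1], bounds[c], bounds[c + 1]
            x[r, c] = img[r0:r1, c0:c1].mean()
            m1[r, c] = float(edit[r0:r1, c0:c1].any())
            groups[(r, c)] = (r0, r1, c0, c1)
    return x, m1, groups
```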
#### 3.2.3 Adversarial Attack
After downsampling, the image \(x\) is perturbed iteratively until our shadow model misclassifies the image as benign using the method proposed by Carlini & Wagner [8]. However, instead of optimizing for a fixed number of iterations, we keep iterating until the shadow model prediction reaches a threshold \(\tau\). Experimentally, we determined \(\tau=10^{-13}\). However, we also terminate the optimization after \(1\times 10^{4}\) iterations. During our experiments, we found that the lower the threshold for the prediction score is, the higher the chance that the original model will share the classification of the shadow model. In order to only perturb pixels related to the gadgets, we multiply the perturbation \(\delta_{x}\) by the mask \(M_{1}\) that was saved during downsampling before adding it to the sample. Given the model \(\theta\), a normalization \(|\cdot|\) and the constant \(\epsilon\), the perturbation of the input under the objective function \(\mathcal{L}\) is given as:
\[x=x+M_{1}\cdot\epsilon\cdot\left|\frac{\mathrm{d}}{\mathrm{d}x}\mathcal{L}( \theta(x),0)\right|\]
In our experiments, we chose \(\epsilon=0.05\) and \(\mathcal{L}\) as binary cross-entropy [5]. We derive the change needed for the input \(x\) within the normalization term so that the prediction \(\theta(x)\) gets closer to zero, i.e. benign. However, instead of adding the whole perturbation to \(x\), only a small step scaled by \(\epsilon\) is added. This can be compared to the learning rate in classical machine learning. As we cannot perturb the whole input image but rather just the constants within the gadgets, our crafted mask is multiplied before the summation. As the mask has zeros on all non-editable pixels, i.e. the original code of the binary, and a one wherever there is at least a single gadget, the perturbation is only applied to pixels that relate to gadgets.
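The iteration can be sketched as follows. This is not our exact optimizer but a masked gradient-descent stand-in for the update rule above (the normalization \(|\cdot|\) of the gradient is replaced by the raw gradient for simplicity), and it assumes a differentiable substitute model that outputs the malicious-class probability for an input already shaped as the model expects.

```python
import torch
import torch.nn.functional as F

def craft(x: torch.Tensor, m1: torch.Tensor, model: torch.nn.Module,
          eps: float = 0.05, tau: float = 1e-13, max_iter: int = 10_000) -> torch.Tensor:
    """Perturb only the editable (gadget) pixels of x, marked by mask m1,
    until the substitute model's malicious score drops below tau."""
    x = x.clone().detach()
    benign = torch.zeros(1)  # target label: 0 == benign
    for _ in range(max_iter):
        x.requires_grad_(True)
        score = model(x[None]).squeeze()  # assumed to return P(malicious) in [0, 1]
        if score.item() < tau:
            break
        loss = F.binary_cross_entropy(score.unsqueeze(0), benign)
        grad, = torch.autograd.grad(loss, x)
        # Masked step towards the benign class; non-gadget pixels stay untouched.
        x = (x - eps * m1 * grad).detach()
    return x.detach()
```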
#### 3.2.4 Upsampling
The result of the adversarial attack is a perturbed image \(x^{*}\) where the perturbation is only applied to the pixels that initially belonged to at least a single gadget. Those changes must now be mapped back to the original binary. For the perturbed image, we look at every pixel that belonged to at least one gadget. If such a pixel is found, we retrieve the corresponding group of pixels \(\mathcal{G}\). To correctly update \(\mathcal{G}\), the bytes belonging to an adversarial payload need to be modified s.t. the mean value of \(\mathcal{G}\) equals the corresponding pixel value of \(x^{*}\). Given the sum of the pixel values \(\sum_{p\in\mathcal{G}}p\), the number of pixels \(|\mathcal{G}|\) and the target pixel \(p^{*}\) the update factor \(f_{adv}\) can be derived using the following equation:
\[f_{adv}=p^{*}\cdot|\mathcal{G}|-\sum_{p\in\mathcal{G}}p\]
To apply the factor \(f_{adv}\) to the adversarial payload, we create a mask \(M_{2}\) that has a one at every editable position within \(\mathcal{G}\). \(\overline{M_{2}}\) contains the same values as \(M_{2}\) but flipped, s.t., ones become zeros and vice versa. We can update the group of pixels using the following equation:
\[\mathcal{G}_{adv}=\begin{cases}M_{2}\dfrac{f_{adv}}{\sum M_{2}}+\overline{M_{2 }}\mathcal{G}&\text{if }\sum M_{2}\geq 1\\ \mathcal{G}&\text{otherwise}\end{cases}\]
The left term of the addition in the first case replaces all the editable pixels within the image with a shared factor. The second term adds the original values. This way, the new mean value of \(\mathcal{G}_{adv}\) equals the target value of the downsampled image. In case there are no gadgets in the particular group, i.e. \(\sum M_{2}=0\), \(\mathcal{G}\) is simply copied. After the termination of the adversarial attack, the image is flattened into a byte array \(b_{adv}\), and the bytes that were cropped during downsampling are appended again.
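A direct transcription of the two update equations into code is given below; rounding the resulting values to the valid byte range before re-emitting the binary, which a complete implementation would require, is omitted for brevity.

```python
import numpy as np

def upsample_group(group: np.ndarray, m2: np.ndarray, target_pixel: float) -> np.ndarray:
    """Map one perturbed pixel of x* back onto its original tile `group`,
    editing only the positions flagged in m2 (the gadget-payload bytes)."""
    n_editable = m2.sum()
    if n_editable < 1:
        return group  # no gadget bytes in this tile: copy it unchanged
    f_adv = target_pixel * group.size - group.sum()
    # Replace each editable byte by the shared value f_adv / sum(M2); keep the rest.
    return m2 * (f_adv / n_editable) + (1 - m2) * group
```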
### 3.3 Possible Countermeasures
In Section 4.1, we show that Minos[23] is susceptible to the presented adversarial attack. However, it is essential to also discuss possible improvements that could prevent such adversarial attacks and aid in hardening the detection framework. The option to remove semantic-preserving gadgets using an optimizer was already discussed in Section 3. While an additional optimization step prevents an adversary from relying on the size-efficient gadget, the more complex optimization-resistant gadget still allows effective adversarial attacks. Machine learning models can be directly hardened against adversarial attacks using, for example, defensive distillation [26], which is a technique where the class probability vectors of a trained DNN are used to train another DNN of the same dimensionality. As the name suggests, defensive distillation is derived from the concept of distillation [3], where one trained DNN is used to train a smaller DNN without losing accuracy. Another promising method for hardening models against adversarial attacks is presented by Goodfellow _et al._[13]. They create adversarial examples and use them as training data for their model. However, the presented countermeasures were shown not to be effective against a thoughtful attacker [36].
## 4 Evaluation
### 4.1 Gadget Effectiveness
Using our corpus of malicious samples (Section 3.1), we evaluate the effectiveness of our attack by creating adversarial examples for each binary. We consider the insertion density \(d\) as the relative frequency of occurrence of our gadget, s.t. for a given density \(d\in[0,1]\), for every 1000 instructions \(d\cdot 1000\) gadgets are added. Fig. 7 shows the misclassification rates of binaries with the size-efficient gadget (Fig. 7(a)) and the optimization-resistant gadget (Fig. 7(b)) against the Minos
classifier [23] trained by Cabrera-Arteaga _et al._[7]. To the best of our knowledge, Minos is the only WebAssembly malware classifier that utilizes machine learning to classify malware directly on a representation of the binary itself. To evaluate the effectiveness of our adversarial payloads at invoking misclassifications, we plot the misclassification rates for the original binary, the instrumented binary without adversarial payload and the adversarially crafted binaries. The original binaries are unaffected by the gadget density and never result in misclassification. For instrumented binaries without adversarial payloads, it becomes apparent that after a sufficiently large number of insertions, the classifier cannot detect the malicious binary even without the adversarial attack. Fig. 8(b) shows the size increase of the binary through the addition of our gadgets. For each gadget, the misclassification rates of the instrumented binaries start to increase significantly at a size of roughly \(1.5\times\) the original binary. Considering that the larger the binary gets, the higher the compression rates and information loss are during downsampling, an increase in misclassification rates that correlates with a size increase can occur. Due to the difference in the number of added bytes per gadget, the misclassification rate for the larger optimization-resistant gadget increases at lower densities. However, for both gadgets, one can observe that the adversarially crafted binaries consistently outperform the binaries that are only instrumented, causing higher misclassification rates at lower densities. Additionally, adversarial payloads generated using the substitute models trained for one epoch consistently cause higher misclassifications at lower densities than payloads generated using the models trained for 50 epochs. To further evaluate the misclassification caused by instrumenting the malicious binary, we additionally
Figure 7: Minos misclassification rate of binaries with size-efficient gadgets (a) and optimizer-resistant gadgets (b) against the pre-trained Minos[23] classifier by Cabrera-Arteaga _et al._[7]. Each plot depicts the misclassification rate of the original binary (Original), the instrumented binary _without_ adversarial payload (Instr.), and the misclassification rate of the binaries _with_ adversarial payload derived using Minos trained for one epoch (Adv. M-1) and for 50 epochs (Adv. M-50). The adversarial misclassification rates are averaged over all five folds. The error bars depict the standard deviation.
instrumented 50 randomly selected benign binaries with the optimizer-resistant gadget, i.e. the gadget that caused the higher misclassification rates. At densities of both 0.1 and 0.01, the classifier correctly identified all evaluated benign binaries as benign, suggesting a tendency of the classifier to classify samples as benign. To evaluate the effectiveness of our method in the opposite direction, we additionally generated adversarial payloads for the benign binaries, this time aiming to cause the substitute model to misclassify them as malicious. Using the substitute model trained for one epoch, we were able to successfully cause the target classifier to misclassify, on average, 77% of the binaries over all folds at a density of 0.1. Overall, at a density of 0.02, both gadgets are shown to be successful in evading the target classifier for at least 70% of evaluated malicious binaries, while the misclassification rates for the instrumented binary without the adversarial payload are at or below 20%, highlighting the effectiveness of our approach.
### 4.2 Performance Analysis
To quantify the gadget's impact on the runtime of instrumented binaries, we measured the execution time in relation to the gadget density. This correlation is illustrated in Fig. 8(a). We utilized a WebAssembly hashing library [24] and performed \(5\times 10^{5}\) rounds of SHA-256 hashing. A baseline was established by measuring the execution time without inserting the gadgets. The execution time of both gadgets is shown in relation to the baseline. The insertion of the size-efficient gadget only results in a small constant increase in execution time, suggesting that the inserted gadget is not executed. WebAssembly is compiled using an ahead-of-time compiler, which includes optimization of the code. As the size-efficient gadget neither changes the data flow nor the control flow, the compiler likely identifies and removes those instructions during compilation. However, similar to wasm-opt[38], this optimizer cannot detect the optimization-resistant
Figure 8: Correlation between the insertion density and the relative increase in execution time (a) and size (b). Both the size-efficient gadget (SE) and the optimization-resistant gadget (OR) are evaluated. The \(x\)-axis represents the density of the gadgets, while the \(y\)-axis represents the relative execution time compared to the baseline (no gadget insertion) (a) and the relative increase of the binary's size in bytes (b). The average over the evaluated binaries is plotted, and the error bars represent the standard deviation.
gadget. As a result, the execution time increases linearly in the number of inserted gadgets. However, considering that a density of \(0.02\) is enough to trick the target classifier, the increase in runtime is reasonable.
Additionally, we evaluated the requirements for generating an adversarial example, which heavily depends on the gadget density. The number of iterations required to achieve a confidence of less than \(1\times 10^{-13}\) within the shadow model was measured as a function of the chosen gadget density. The results are depicted in Fig. 9, which displays the average number of iterations required during the adversarial example generation over the applied gadget density. As both gadget types hold the same number of bytes utilized for the adversarial payload, they require a similar number of iterations to reach the confidence level. The adversarial training optimization loop was run for a maximum of \(1\times 10^{4}\) iterations. Overall, the lower the chosen density, the more iterations are required to reach the target confidence, as fewer bytes are available for adversarial crafting. While the adversarial examples crafted using the substitute model trained for one epoch outperform the adversarial examples crafted using the model trained for 50 epochs, the adversarial example reaches the target confidence with fewer iterations on the model trained for 50 epochs. The execution time of a single iteration is 9.84 ms on an AMD Ryzen 9 7950X 16-Core Processor, which renders the attack feasible. Note that this optimization needs to only be performed once per malware. However, an attacker could potentially exploit the low cost of generating new adversarial examples by regularly distributing new binaries to website visitors.
## 5 Related work
The use of machine learning-based classifiers for detecting malware has been shown to be fast and effective in identifying binaries as malicious or benign.
Figure 9: Average number of iterations (\(y\)-axis) required to achieve a confidence of \(1\times 10^{-13}\) for a given gadget density (\(x\)-axis). Both the size-efficient gadget (SE) and the optimizer-resistant gadget (OR) are evaluated on the substitute model trained for one epoch (M-1) and 50 epochs (M-50). The error bars show the standard deviation.
However, the robustness of these classifiers against adversarial inputs is often limited. As more machine learning-based classifiers are utilized for detecting malware, malicious actors who want to distribute their malware have a high incentive to utilize evasion techniques to prevent detection. Especially for Windows Portable Executables (PEs), a number of classifiers and evasions exist. Existing adversarial evasions on classifiers that utilize a gray-scale image representation of the target binary [16] rely on FGSM [13] or Carlini & Wagner [8] to generate a perturbation vector for the image [20, 16, 28]. However, in contrast to our attack, Liu _et al._[20] directly apply the perturbation to the image representation of the binary. While they show a successful attack against the classifier, the generated adversarial example is not a valid binary anymore, rendering their evasion ineffective. Khormali _et al._[16] generate the adversarial example and append the adversarial payload to the end of the file or at the end of a section. This ensures that the adversarial example is added into non-executable areas, and hence the original functionality remains. While this enables the addition of the adversarial payload into the malicious binary, a sophisticated defender can easily remove the payload by statically identifying unused bytes and masking them before classification, as they should have no impact on the classification performance. Using our attack methodology, the adversarial payload is placed inside the code section and directly baked into the control flow of the target binary, preventing a defender from easily removing the payload. Additionally, we have presented the optimization-resistant gadget that cannot be generally removed using an optimization pass. Evasions against other network architectures that directly consider the sequence of bytes from Windows PE files generally insert adversarial payloads in unused bytes between sections [18, 33, 29], in a new section [18] or at the end of the file [9, 29]. While these approaches generate executable binaries, they are rather easy to circumvent for a slightly more sophisticated detection model, e.g. one that first removes unused bytes or truncates sections or files. Either of our proposed gadgets is inserted directly into the instructions so that more sophisticated static analysis techniques, such as data flow and control flow analysis, are required to detect them fully. However, there are also numerous adversarial attacks against classifiers that classify a binary on more sophisticated features than just an image from its raw binary data, e.g. based on extracted features such as control flow, data flow, API calls, libraries, or dynamic features [10, 19]. While the general procedure for generating the perturbation vector is similar, the application to the binary relies on transforming the target in a way that the corresponding features change. The interested reader is referred to Ling _et al._[19], who provide an in-depth evaluation of different evasion techniques against Windows PE malware. Cabrera-Arteaga _et al._[7] proposed a malware evasion system against Wasm malware detectors and, in particular, Minos. However, their system relies on obfuscation to bypass detection frameworks, and they do not utilize adversarial attacks.
## 6 Conclusion
In this paper, we introduced a novel technique for placing adversarial payloads directly into the instruction stream using binary instrumentation to bypass machine learning-based malware detectors. We have demonstrated the effectiveness of our technique by crafting a grey-box adversarial attack against Minos[23], a lightweight cryptojacking detection framework for WebAssembly presented at NDSS 2021. To place payloads inside the code section of the binary, we have introduced two semantic-preserving gadgets for Wasm binaries with a focus on size-efficiency and optimization-resistance, respectively. We have collected an extensive dataset with both benign and malicious binaries by utilizing two existing benchmark datasets [17, 14] as well as results from a crawling campaign of one million websites from the Cisco Umbrella list [11]. To enlarge this dataset, we used wasm-mutate[6] to generate augmented binaries. Every sample was then assigned a label by querying the target model, i.e. Minos[23] provided by Cabrera-Arteaga _et al._[7]. All samples with their corresponding label were then used to train a substitute model of our targeted model. The challenge of creating a functional adversarial example inside a binary without altering the semantics was met by carefully inserting novel semantic-preserving gadgets. These gadgets can be injected freely into the code section of a Wasm binary without changing the semantics using binary instrumentation. Each gadget contains a number of bytes that carry the adversarial payload and can be manipulated freely during the attack phase. By attacking our substitute model, we successfully craft functional adversarial examples for cryptojacking binaries. Using an insertion density of 0.02 and the better-performing substitute network trained for one epoch (M-1), we are able to cause the target detector to misclassify all of the evaluated malicious binaries, demonstrating the effectiveness of our attack. Additionally, we show that our size-efficient gadget is removed during compilation, resulting in only a negligible runtime overhead. The optimizer-resistant gadget, by design, is not removed before execution and thus leads to a linear overhead in the density. However, as a small insertion density of 0.02 is sufficient in bypassing the classifier, the execution time is only increased by roughly 10%. To prevent such attacks, we addressed typical countermeasures; however, as discussed by Tramer _et al._[36], as long as the adversary is able to manipulate features used by a classifier, the threat of adversarial attacks cannot be fully mitigated. The success of our grey-box adversarial attack on Minos highlights the need for continued research and improvement of defences against adversarial attacks on machine learning-based malware detection frameworks.
#### Acknowledgements
We thank the reviewers and our shepherd for their helpful comments and suggestions. This work has been supported by ERDF through the EMSIK project and by BMBF through the PeT-HMR project.
|
2307.14542 | Symmetry of the emergent inductance tensor exhibited by magnetic
textures | Metals hosting gradually varying spatial magnetic textures are attracting
attention as a new class of inductor. Under the application of an alternating
current, the spin-transfer-torque effect induces oscillating dynamics of the
magnetic texture, which subsequently yields the spin-motive force as a back
action, resulting in an inductive voltage response. In general, a second-order
tensor representing a material's response can have an off-diagonal component.
However, it is unclear what symmetries the emergent inductance tensor has and
also which magnetic textures can exhibit a transverse inductance response. Here
we reveal both analytically and numerically that the emergent inductance tensor
should be a symmetric tensor in the so-called adiabatic limit. By considering
this symmetric tensor in terms of symmetry operations that a magnetic texture
has, we further characterize the magnetic textures in which the transverse
inductance response can appear. This finding provides a basis for exploring the
transverse response of emergent inductors, which has yet to be discovered. | Soju Furuta, Wataru Koshibae, Fumitaka Kagawa | 2023-07-26T23:18:08Z | http://arxiv.org/abs/2307.14542v2 | # Symmetry of the emergent inductance tensor exhibited by magnetic textures
###### Abstract
Metals hosting gradually varying spatial magnetic textures are attracting attention as a new class of inductor. Under the application of an alternating current, the spin-transfer-torque effect induces oscillating dynamics of the magnetic texture, which subsequently yields the spin-motive force as a back action, resulting in an inductive voltage response. In general, a second-order tensor representing a material's response can have an off-diagonal component. However, it is unclear what symmetries the emergent inductance tensor has and also which magnetic textures can exhibit a transverse inductance response. Here we reveal both analytically and numerically that the emergent inductance tensor should be a symmetric tensor in the so-called adiabatic limit. By considering this symmetric tensor in terms of symmetry operations that a magnetic texture has, we further characterize the magnetic textures in which the transverse inductance response can appear. This finding provides a basis for exploring the transverse response of emergent inductors, which has yet to be discovered.
## Introduction
An inductor is a component that exhibits an inductive counter-electromotive force, \(V\), under a time-varying electric current, \(I\), following
\[V=L\frac{\mathrm{d}I}{\mathrm{d}t}, \tag{1}\]
where \(L\) denotes the inductance. The electric work done by the external power supply, \(IV\), is hence
\[\int\mathrm{d}t\:IV=\int\mathrm{d}t\:IL\frac{\mathrm{d}I}{\mathrm{d}t}=\int \mathrm{d}\left(\frac{1}{2}LI(t)^{2}\right), \tag{2}\]
which shows that the inductor stores an energy of \(\frac{1}{2}LI^{2}\). Thus, it can also be said that an inductor is a component that can store an energy of \(\Delta E=\frac{1}{2}LI^{2}\), under the application of an electric current. A textbook example is a solenoidal inductor, which stores energy as a magnetic-field energy [1]. Other inductors possess similar energy-storing properties. An established example is the so-called kinetic inductor, in which the energy is stored as the kinetic energy of mobile charge carriers. When considering the Drude model of conduction electrons, one can immediately find that the inductance defined using the imaginary part of the angular-frequency (\(\omega\))-dependent resistivity, \(\rho(\omega)\), agrees with the inductance defined using the total kinetic energy of electrons [2].
Recently, a new class of inductor, now referred to as emergent inductors, has been proposed theoretically [3] and confirmed experimentally [4; 5; 6]. In these inductors, the flowing conduction electrons exert a spin-transfer torque (STT) [7; 8; 9; 10] on the underlying magnetic texture; as a result, the magnetic texture exhibits time-dependent elastic deformations under an alternating current (AC) in the linear-response regime. Such current-induced magnetic texture dynamics exert a back action on the flowing conduction electrons, yielding the so-called spin-motive force or emergent electric field (EEF) [11; 12; 13; 14; 15]. This phenomenon can be derived microscopically in terms of the so-called spin-Berry phase or the effective U(1) gauge field, and the resulting EEF can be described by
\[e_{i}(\mathbf{r},t)=\frac{\hbar}{2|e|}\mathbf{m}(\mathbf{r},t)\cdot[\partial_{i}\mathbf{m}( \mathbf{r},t)\times\partial_{t}\mathbf{m}(\mathbf{r},t)], \tag{3}\]
where \(e\) (\(>0\)) is the elementary charge, \(\mathbf{m}(\mathbf{r},t)\) is the unit vector of the local magnetic moment at position \(\mathbf{r}\) and time \(t\), and \(\partial_{i}\) (\(i=x,y,z\)) and \(\partial_{t}\) denote spatial and time derivatives, respectively (when the conduction-electron spins are not fully polarized, the
so-called spin-polarization factor \(P\) is further multiplied on the right-hand side of Eq. (3) [12; 15]). It has been numerically demonstrated that in the so-called adiabatic limit (i.e., \(\beta=0\); see the Methods section), the inductance value defined using the EEF under an AC quantitatively agrees with that defined using the current-induced magnetic-texture-deformation energy [16]. Thus, in the adiabatic limit, the emergent inductance is well defined, and both the electric and energetic responses are correctly captured by Eq. (1). On the other hand, when nonadiabaticity is concerned (i.e., \(\beta\neq 0\)), the inductance values derived independently from the two definitions do not match, implying that the system responses are beyond the framework of Eq. (1) and hence the inductance interpretation does not apply.
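For a discretized magnetization field, Eq. (3) can be evaluated directly by finite differences. The following is an illustrative sketch rather than the code used for the simulations in this work; the grid layout, time step and function name are assumptions of the example.

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
E_CHARGE = 1.602176634e-19  # C

def emergent_field(m_t: np.ndarray, m_next: np.ndarray, dt: float, dx: float,
                   P: float = 1.0):
    """Discretized Eq. (3): m_t and m_next are unit-vector fields m(r, t) and
    m(r, t + dt) on a regular grid, shaped (nx, ny, 3); dx is the cell size.
    Returns the emergent electric field components (e_x, e_y) at time t."""
    dm_dt = (m_next - m_t) / dt
    dm_dx = np.gradient(m_t, dx, axis=0)
    dm_dy = np.gradient(m_t, dx, axis=1)
    prefactor = P * HBAR / (2 * E_CHARGE)
    e_x = prefactor * np.einsum('...i,...i->...', m_t, np.cross(dm_dx, dm_dt))
    e_y = prefactor * np.einsum('...i,...i->...', m_t, np.cross(dm_dy, dm_dt))
    return e_x, e_y
```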
An interesting aspect of emergent inductors is that the inductive electric response is potentially not limited to the applied current direction but may also appear along the perpendicular directions, as inferred from Eq. (3). Thus, in general, the emergent inductance, when it is well defined, should be represented by a tensor: \(V_{i}=L_{ij}\frac{\mathrm{d}I_{j}}{\mathrm{d}t}(i,j=x,y)\) or
\[\begin{pmatrix}V_{x}\\ V_{y}\end{pmatrix}=\begin{pmatrix}L_{xx}&L_{xy}\\ L_{yx}&L_{yy}\end{pmatrix}\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix}I_{x} \\ I_{y}\end{pmatrix}. \tag{4}\]
In classical electrodynamics, such an inductance tensor with \(i,j=1,2\) may be introduced to describe two mutually coupled coils, \(1\) and \(2\). It therefore appears that an emergent inductor possesses a function similar to that of a coupled classical inductor system. However, such an intuitive analogy requires careful consideration because the microscopic mechanism is quite different between classical and emergent inductors. For instance, in a coupled classical inductor system, one can analytically express the mutual inductance and find \(L_{12}=L_{21}\equiv M\)[1]; moreover, the fact that the coupled system stores a positive energy in the quadratic form of \(\frac{1}{2}L_{11}I_{1}^{2}+\frac{1}{2}L_{22}I_{2}^{2}+MI_{1}I_{2}\) for arbitrary values of \(I_{1}\) and \(I_{2}\) leads to a constraint, \(L_{11}L_{22}\geq M^{2}\)[17], in addition to the obvious one, \(L_{11},L_{22}\geq 0\). Such classical electrodynamics considerations, however, are not helpful for emergent inductors consisting of an arbitrary spin texture including disorder, and thus, the relation between \(L_{xy}\) and \(L_{yx}\) appears to be nontrivial.
When considering the nature of \(L_{ij}\), it is instructive to review the resistivity tensor, \(\rho_{ij}\), as a textbook example. Note that any second-order tensor, \(K_{ij}\), can always be decomposed into a symmetric part, \(K_{ij}^{\mathrm{S}}\), and an antisymmetric part, \(K_{ij}^{\mathrm{A}}\); namely, \(K_{ij}=K_{ij}^{\mathrm{S}}+K_{ij}^{\mathrm{A}}\) with
\(K_{ij}^{\rm S}=(K_{ij}+K_{ji})/2\) and \(K_{ij}^{\rm A}=(K_{ij}-K_{ji})/2\). In the case of \(\rho_{ij}\), the symmetric part represents dissipative transport, whereas the antisymmetric part represents nondissipative transport, that is, the Hall resistivity. Thus, the symmetric and antisymmetric parts of \(\rho_{ij}\) have their own physical meanings with quite different characteristics. Therefore, the symmetry of the emergent inductance tensor is also an important issue in understanding the underlying physics.
In this paper, focusing on the adiabatic limit, in which the emergent inductance is well defined by Eq. (1) [16], we aim to reveal the symmetry of the emergent inductance tensor and discuss the physical implications of the revealed symmetry. Our approach is two-fold. First, we consider the tensor-expressed circuit equation [Eq. (4)] in detail and draw a conclusion regarding the symmetry of \(L_{ij}\): this also enables us to discuss how the inductor tensor should behave under the time-reversal operation. Second, we numerically investigate \(L_{ij}\) for various magnetic textures using micromagnetic simulations. These two approaches consistently show that \(L_{ij}\) is a symmetric tensor (that is, \(L_{xy}=L_{yx}\)) and \(L_{ij}\) is even under the time-reversal operation. By combining the numerical results and symmetry arguments, we also find what kinds of magnetic textures can or cannot exhibit a transverse emergent inductance, \(L_{yx}\). We note that the present conclusion is for the case where the emergent inductance is well defined (i.e., the adiabatic limit, \(\beta=0\)). The effect of nonadiabaticity (i.e., \(\beta\neq 0\)), which makes the emergent inductance ill-defined [16], is discussed in the Supplementary Information.
## Results
### Considerations for the circuit equation
We discuss the consequences that are prescribed in the tensor-expressed circuit equation, Eq. (4). Following the general arguments on a second-order tensor, we decompose \(L_{ij}\) into symmetric and antisymmetric parts: \(L_{ij}=L_{ij}^{\rm S}+L_{ij}^{\rm A}\), or explicitly,
\[\begin{pmatrix}L_{xx}&L_{xy}\\ L_{yx}&L_{yy}\end{pmatrix}=\begin{pmatrix}L_{xx}^{\rm S}&L_{xy}^{\rm S}\\ L_{xy}^{\rm S}&L_{yy}^{\rm S}\end{pmatrix}\,+\begin{pmatrix}0&L_{xy}^{\rm A} \\ -L_{xy}^{\rm A}&0\end{pmatrix}. \tag{5}\]
To gain insight into the physical meaning of the symmetric and antisymmetric tensors, we consider the work done by the power source along a closed loop in the \(I_{x}\)-\(I_{y}\) plane, which is
expressed as:
\[\begin{split}\oint\mathrm{d}t\left(I_{x}V_{x}+I_{y}V_{y}\right)=\oint \mathrm{d}\left(\frac{1}{2}L_{xx}^{\mathrm{S}}I_{x}^{2}+\frac{1}{2}L_{yy}^{ \mathrm{S}}I_{y}^{2}+L_{xy}^{\mathrm{S}}I_{x}I_{y}\right)\\ +\oint\mathrm{d}t\,L_{xy}^{\mathrm{A}}\left(I_{x}\frac{\mathrm{d}} {\mathrm{d}t}I_{y}-I_{y}\frac{\mathrm{d}}{\mathrm{d}t}I_{x}\right).\end{split} \tag{6}\]
Note that the first term in the right-hand side consists of only the symmetric tensor components and the integrand takes the form of a total derivative; hence, the contour integral results in zero. The expression, \(\frac{1}{2}L_{xx}^{\mathrm{S}}I_{x}^{2}+\frac{1}{2}L_{yy}^{\mathrm{S}}I_{y}^{2 }+L_{xy}^{\mathrm{S}}I_{x}I_{y}\), is essentially the same as that derived for mutually coupled classical inductors, representing an energy stored in the emergent inductor under a current. In contrast, the second term consists of only the antisymmetric-tensor components, and the integrand is not the form of a total derivative, indicating that the second term is nonzero and dependent on the path. These features imply that \(L_{xy}^{\mathrm{A}}\) is associated with a non-conserved quantity.
To see the consequences of the antisymmetric component \(L_{xy}^{\mathrm{A}}\) more clearly, it is helpful to consider a specific closed path for the integral(s) of Eq. (6). Suppose \(L_{xy}^{\mathrm{A}}>0\); we consider a specific cycle \(C\) that consists of three paths, \(C_{1}\), \(C_{2}\) and \(C_{3}\), as shown in Fig. 1: \((I_{x},I_{y})=(0,0)\xrightarrow{C_{1}}(I_{0},I_{0})\xrightarrow{C_{2}}(I_{0},0)\xrightarrow{C_{3}}(0,0)\) with constraints of \(I_{x}=I_{y}\) on \(C_{1}\), \(I_{x}=I_{0}\) on \(C_{2}\), and \(I_{y}=0\) on \(C_{3}\). Thus, taking the contour integral along the cycle in the clockwise direction results in:
\[\oint_{C}\mathrm{d}t\left(I_{x}V_{x}+I_{y}V_{y}\right)=-L_{xy}^{\mathrm{A}}I_{0 }^{2}<0. \tag{7}\]
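Writing the current on \(C_{1}\) as \(I_{x}=I_{y}=s\) with \(s\) running from \(0\) to \(I_{0}\), the antisymmetric term of Eq. (6) can be evaluated piecewise along the three paths,

\[\oint_{C}L_{xy}^{\mathrm{A}}\left(I_{x}\,\mathrm{d}I_{y}-I_{y}\,\mathrm{d}I_{x}\right)=L_{xy}^{\mathrm{A}}\Bigl[\underbrace{\int_{0}^{I_{0}}(s\,\mathrm{d}s-s\,\mathrm{d}s)}_{C_{1}}+\underbrace{\int_{I_{0}}^{0}I_{0}\,\mathrm{d}I_{y}}_{C_{2}}+\underbrace{\int_{I_{0}}^{0}0\,\mathrm{d}I_{x}}_{C_{3}}\Bigr]=L_{xy}^{\mathrm{A}}\left(0-I_{0}^{2}+0\right),\]

while the symmetric term, being a total derivative, vanishes on the closed loop; this reproduces Eq. (7).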
The result indicates that if a positive \(L_{xy}^{\mathrm{A}}\) were present, the power source could acquire energy by cycling the closed loop. Such behaviour is obviously not allowed for a passive element, such as a stable material. Similarly, one can consider the case of \(L_{xy}^{\mathrm{A}}<0\), and the same conclusion can be drawn by considering the same closed loop \(C\) but in the counterclockwise
Figure 1: **A specific closed loop used to prove the absence of the antisymmetric components of an inductance tensor.**
direction. Thus, Eq. (4) concludes that even for the case of an emergent inductor, the inductance tensor cannot have an antisymmetric component; that is, \(L_{xy}^{\rm A}=0\), and an emergent inductor tensor should be a symmetric tensor (below, we therefore omit the superscript, S),
\[L_{xy}=L_{yx}\equiv L_{tr}. \tag{8}\]
Hence, the energy stored in an emergent inductor under current is found to be expressed by \(\frac{1}{2}L_{xx}I_{x}^{2}+\frac{1}{2}L_{yy}I_{y}^{2}+L_{tr}I_{x}I_{y}\), and for this quadratic form to be nonnegative, \(L_{ij}\) should satisfy
\[L_{xx}L_{yy}\geq(L_{tr})^{2}, \tag{9}\]
in addition to \(L_{xx},L_{yy}\geq 0\)[16].
Thus, although the microscopic mechanism is quite different between classical and emergent inductors, it turns out that there is no difference in the constraints that inductance tensors should satisfy. These characteristics are implicitly prescribed by the relation between voltage and current, Eq. (4), not depending on the microscopic mechanism for inductors.
Having established the symmetry of \(L_{ij}\), we can discuss the behaviour of \(L_{ij}\) under the time-reversal operation. Since an emergent inductance arises from a magnetic texture \(\{\mathbf{m}(\mathbf{r})\}\), the behaviour of an emergent inductance under the time-reversal operation is an interesting issue. In fact, in experiments, the magnetic-field \((\mathbf{B})\)-dependence of an emergent inductance has been frequently investigated [4; 5; 6]. To incorporate a case where \(\{\mathbf{m}(\mathbf{r})\}\) shows hysteretic behaviour with respect to changes in \(\mathbf{B}\), \(L_{ij}\) may be expressed as a function with \(\{\mathbf{m}(\mathbf{r})\}\) and \(\mathbf{B}\) as variables. Note that regardless of the details of the variables, \(L_{ij}\) should be a symmetric tensor as discussed above, and hence, \(L_{ij}(\mathbf{B},\{\mathbf{m}(\mathbf{r})\})=L_{ji}(\mathbf{B},\{\mathbf{m}(\mathbf{r})\})\) should always be satisfied. Moreover, with respect to the complex resistivity, Onsager's reciprocal theorem concludes \(\operatorname{Im}\rho_{ij}(\omega,\mathbf{B},\{\mathbf{m}(\mathbf{r})\})=\operatorname{Im }\rho_{ji}(\omega,-\mathbf{B},\{-\mathbf{m}(\mathbf{r})\})\) (the real part also satisfies the same relation) [18]; hence, the inductance tensor should also satisfy \(L_{ji}(\mathbf{B},\{\mathbf{m}(\mathbf{r})\})=L_{ij}(-\mathbf{B},\{-\mathbf{m}(\mathbf{r})\})\). By combining the two relations regarding \(L_{ij}\), one can thus conclude
\[L_{ij}(\mathbf{B},\{\mathbf{m}(\mathbf{r})\})=L_{ij}(-\mathbf{B},\{-\mathbf{m}(\mathbf{r})\}). \tag{10}\]
This relation indicates that \(L_{ij}\) is even under time-reversal, or equivalently, \(L_{ij}\) is a polar symmetric tensor. In particular, we note \(L_{yx}(\mathbf{B},\{\mathbf{m}(\mathbf{r})\})=L_{yx}(-\mathbf{B},\{-\mathbf{m}(\mathbf{r})\})\), distinct from the Hall resistivity, which satisfies \(\rho_{yx}(\mathbf{B},\{\mathbf{m}(\mathbf{r})\})=-\rho_{yx}(-\mathbf{B},\{-\mathbf{m}(\mathbf{r})\})\). For this reason, we call \(L_{yx}\) the transverse inductance, not the Hall inductance.
### Micromagnetic simulations
To observe the symmetry of the inductance tensor of emergent inductors, we consider magnetic textures that slowly vary in space; for such magnetic textures, the EEF can be calculated according to Eq. (3). We further consider the pinned regime, in which a magnetic texture does not exhibit a steady flow under a direct current [19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. The procedure for calculating the emergent inductance arising from a slowly varying magnetic texture in the pinned regime is detailed in the literature [16] and also in the Methods section. We consider a spin Hamiltonian based on the continuum approximation that can exhibit helical and skyrmion-lattice (SkL) [29; 30; 31; 32] magnetic textures and calculate the current-induced dynamics of a magnetic texture by numerically solving the Landau-Lifshitz-Gilbert (LLG) equation [33] (see the Methods section). To be more specific, the magnetic texture dynamics under the application of an AC along the \(x\)-direction are calculated by micromagnetic simulation; then, by referring to Eq. (3), the time-dependent EEFs along both the \(x\) and \(y\) directions are further derived; and finally, by referring to Eq. (4), \(L_{xx}\) and \(L_{yx}\) are obtained. Similarly, we obtain \(L_{yy}\) and \(L_{xy}\) by simulating the case of an AC along the \(y\) direction. The emergent inductance \(L_{ij}\) depends on the system dimension in the form of \(L_{ij}=\tilde{L}_{ij}\frac{\ell}{S}\), where \(\tilde{L}_{ij},\ell\), and \(S\) represent the normalized inductance (we call it "inductivity"), system length, and sample cross-section area. Below, we therefore present \(\tilde{L}_{ij}\), rather than the system-size-dependent \(L_{ij}\). The inductivity tensor may be defined by
\[e_{i}=\tilde{L}_{ik}\frac{\mathrm{d}j_{k}}{\mathrm{d}t}, \tag{11}\]
where \(j\) represents the current density. The following simulation results are obtained for the case of \(\beta=0\) (i.e., the adiabatic limit).
Figure 2 summarizes the magnetic textures investigated in this study and the corresponding inductance tensors. We studied five examples of helical magnetic textures, for which the helical \(\mathbf{q}\)-vector forms approximately an angle \(\theta=0^{\circ},\pm 20^{\circ}\), and \(\pm 45^{\circ}\) with respect to the \(x\) direction (Fig. 2a-e, respectively); a maze-helix texture (Fig. 2f); and an SkL (Fig. 2g). The intensity and concentration of disorder were minimized as much as possible while confirming the linear response of the pinned dynamics. As a result, more disorder had to be included when examining the maze-helix and SkL, as summarized in Table 1: Selecting a much lower current density while keeping the disorder density as low as 0.3 % was not appropriate in terms of the required numerical accuracy. As shown in Fig. 2, we find that the relation \(\tilde{L}_{xy}=\tilde{L}_{yx}\)
Figure 2: **Various metastable magnetic textures and corresponding inductance tensors.****a-e** Helical magnetic textures with the helical \(\mathbf{q}\)-vector that forms approximately an angle \(\theta=0^{\circ}\) (**a**), \(20^{\circ}\) (**b**), \(-20^{\circ}\) (**c**), \(45^{\circ}\) (**d**), and \(-45^{\circ}\) (**e**) with respect to the \(x\) direction. **f** Maze helix. **g** Skyrmion lattice. The corresponding fast-Fourier-transform (FFT) images are also shown in each panel. Color wheels specify the \(x\)-\(y\) plane magnetization direction. The brightness of the color represents the \(z\) component of the magnetization, and white represents the local magnetizations pointing toward the \(z\) direction. The current-induced magnetic texture dynamics are calculated under the application of a weak AC. The parameters used for the simulation are tabulated in Table 1; they are chosen so that the resulting emergent voltage is in the linear-response and low-frequency regimes (see the Methods section). The simulations were done for \(\beta=0\).
invariably holds within the numerical error, consistent with the conclusion derived from the circuit equations.
The diagonal components of the inductance tensor are invariably positive, whereas the off-diagonal components can be either positive or negative. Nevertheless, we emphasize that the inductance tensor retains energetic interpretations; that is, the energy increase in the
\begin{table}
\begin{tabular}{c c c c c c} \hline Magnetic texture & Magnetic field & \(K_{\text{imp}}\) & Disorder density & Current density & Frequency \\ & (T) & \(\times 10^{7}\) (J m\({}^{-3}\)) & (\%) & \(\times 10^{10}\) (A m\({}^{-2}\)) & (MHz) \\ \hline \hline Helix (\(\theta=0,\pm 20^{\circ},\pm 45^{\circ}\)) & 0 & 0.1 & 0.3 & 5.0 & 50 \\ Maze helix & 0 & 2.0 & 3 & 2.0 & 50 \\ SkL & 0.3 & 1.0 & 3 & 1.0 & 10 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters used for the micromagnetic simulations displayed in Fig. 2. The disorder density was chosen as low as possible while confirming the linear response of the pinned magnetic textures under a given current density.
magnetic system, \(\Delta E(I_{x},I_{y})\), caused by the application of an electric current agrees with \(\frac{1}{2}L_{xx}I_{x}^{2}+\frac{1}{2}L_{yy}I_{y}^{2}+L_{tr}I_{x}I_{y}\). As an example, we discuss the results for the helical texture with \(\theta=-20^{\circ}\), in which the off-diagonal components of \(\tilde{L}_{ij}\) are negative. The \(\Delta E(I_{x},I_{y})\) is calculated for the following three cases independently: (i) (\(I_{x}\neq 0,I_{y}=0\)), (ii) (\(I_{x}=0,I_{y}\neq 0\)), and (iii) (\(I_{x}\neq 0,I_{y}\neq 0\)). Then, by solving the three simultaneous equations regarding \(\Delta E(I_{x},I_{y})=\frac{1}{2}L_{xx}I_{x}^{2}+\frac{1}{2}L_{yy}I_{y}^{2}+L_{tr}I_{x}I_{y}\), we can obtain: \((\tilde{L}_{xx},\tilde{L}_{yy},\tilde{L}_{tr})=(2.67,0.63,-1.00)\times 10^{-21}\) H m. These values are in quantitative agreement with \(\tilde{L}_{ij}\) calculated from the EEF (Fig. 2c), indicating that the emergent inductivity is well defined by Eq. (11). We also confirmed \(\tilde{L}_{xy}(\mathbf{B},\mathbf{m}(\mathbf{r}_{k}))=\tilde{L}_{xy}(-\mathbf{B},-\mathbf{m}(\mathbf{r}_{k}))\) numerically (Fig. 3), in which the disorder density and strength \(K_{\text{imp}}\) (see Methods) are fixed to 3 % and 1.0\(\times 10^{7}\) J m\({}^{-3}\), respectively, and the single-\(\mathbf{q}\) helix with \(\theta=45^{\circ}\) was considered. Thus, our numerical study confirms that \(\tilde{L}_{ij}\) for an emergent inductor is a polar symmetric tensor.
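The extraction of \((\tilde{L}_{xx},\tilde{L}_{yy},\tilde{L}_{tr})\) from the three energy calculations amounts to solving a small linear system, as sketched below; the current values in the usage comment are placeholders rather than the ones used in the simulations.

```python
import numpy as np

def tensor_from_energies(cases):
    """Recover (L_xx, L_yy, L_tr) from three current-induced energy increases.

    `cases` is a list of three tuples (I_x, I_y, dE) corresponding to the
    configurations (I_x, 0), (0, I_y) and (I_x, I_y)."""
    A = np.array([[0.5 * ix**2, 0.5 * iy**2, ix * iy] for ix, iy, _ in cases])
    b = np.array([dE for _, _, dE in cases])
    return np.linalg.solve(A, b)  # [L_xx, L_yy, L_tr]

# Hypothetical usage with currents I1, I2 and computed energy increases dE1..dE3:
# L_xx, L_yy, L_tr = tensor_from_energies([(I1, 0, dE1), (0, I2, dE2), (I1, I2, dE3)])
```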
When comparing the three helical textures quantitatively, one can find that as the \(\theta\) increases from \(0^{\circ}\) to \(45^{\circ}\), \(\tilde{L}_{xx}\) decreases, whereas \(\tilde{L}_{xy}\) increases. We also note that in the maze-helix and SkL textures, the transverse component, \(\tilde{L}_{xy}\), is more than one order of magnitude smaller than the longitudinal components, \(\tilde{L}_{xx}\) and \(\tilde{L}_{yy}\). As discussed below, these observations can be explained by considering an orthogonal transformation of \(\tilde{L}_{ij}\) and the rotational symmetry that each magnetic texture has.
## Discussion
In the following, we aim to categorize inductance tensors of a magnetic texture origin and consider how our numerical results obtained in the adiabatic limit can be explained in terms of the symmetry operations that each magnetic system has. Note that because an inductance tensor is real and symmetric, it can be diagonalized by performing an appropriate orthogonal transformation, \(R\), or equivalently by choosing appropriate Cartesian coordinates:
\[\begin{pmatrix}\tilde{L}_{xx}&\tilde{L}_{tr}\\ \tilde{L}_{tr}&\tilde{L}_{yy}\end{pmatrix}\xrightarrow{R}\begin{pmatrix} \lambda_{1}&0\\ 0&\lambda_{2}\end{pmatrix}, \tag{12}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) (\(\lambda_{1},\lambda_{2}\geq 0\)) represent the eigenvalues of the inductance tensor. Hence, to classify an emergent inductance tensor, it is sufficient to consider the diagonalized form. This approach does not lose generality because a representation in different Cartesian coordinates can be immediately obtained by performing the corresponding orthogonal transformation.
Following the group theory arguments for a polar symmetric tensor, one can conclude that: (i) when the system has three-fold or higher rotational symmetry with respect to the \(z\) axis (i.e., \(C_{3z},C_{4z},C_{6z}\) or \(C_{\infty z}\)), \(\lambda_{1}\) and \(\lambda_{2}\) should be equal, whereas (ii) when the system has only two-fold rotational symmetry with respect to the \(z\) axis (\(C_{2z}\)) or no rotational symmetry, \(\lambda_{1}\) and \(\lambda_{2}\) should be inequivalent; for the details of the derivation, see Supplementary Note 3. Thus, the diagonalized two-by-two tensor can be classified into one of two categories, characterized by \(\lambda_{1}=\lambda_{2}\) and \(\lambda_{1}\neq\lambda_{2}\), respectively.
The first category, \(\lambda_{1}=\lambda_{2}\), is represented by an isotropic tensor \(\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\), and thus, the off-diagonal components are always zero for arbitrarily chosen Cartesian coordinates; that is, the transverse inductance response does not appear. From group theory, a magnetic texture that has \(C_{3z},C_{4z}\), \(C_{6z}\) or \(C_{\infty z}\) symmetry should belong to this category. Note that our numerical calculations deal with finite-size systems including randomly distributed disorder, and therefore, the simulated magnetic textures do not have any rotational symmetry in a strict sense. Nevertheless, we numerically find that the inductance tensors of the maze-helix and SkL textures satisfy \(\tilde{L}_{xx}\approx\tilde{L}_{yy}\) and \(\tilde{L}_{xy},\tilde{L}_{yx}\ll\tilde{L}_{xx},\tilde{L}_{yy}\) (Fig. 2f and g, respectively), indicating that the obtained tensors are close to isotropic. These results appear reasonable, considering that in a macroscopic system, the maze-helix and SkL textures have global approximate \(C_{\infty z}\) and \(C_{6z}\) symmetries, respectively. The symmetry of the macroscopic systems can be imagined by looking at the corresponding fast-Fourier-transform (FFT) images. To be precise, the rotation symmetry of the SkL confined in the finite-size system is \(C_{2z}\), rather than \(C_{6z}\), as indicated by the FFT image (Fig. 2g): This perturbative symmetry lowering from \(C_{6z}\) to \(C_{2z}\) explains the small but finite symmetric off-diagonal component, which is originally prohibited under \(C_{6z}\) symmetry. When nonadiabaticity is not negligible, the effective inductivity tensor defined by \(\tilde{L}_{ij}^{\text{eff}}=\text{Im}[\rho_{ij}(\omega)-\rho_{ij}(0)]/\omega\) is discussed, but it should be noted that \(\tilde{L}_{ij}^{\text{eff}}\) is a different quantity from the inductivity tensor in Eq. (11). For instance, the \(\tilde{L}_{ij}^{\text{eff}}\) of the SkL has antisymmetric off-diagonal components when \(\beta\neq 0\), although the antisymmetric component in \(\tilde{L}_{ij}\) is energetically prohibited; for more details, see Supplementary Notes 1 and 3.
The second category consists of tensors that have two inequivalent components, \(\begin{pmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{pmatrix}\), and thus, off-diagonal components can appear if arbitrary Cartesian coordinates are chosen.
For instance, the matrix \(R(\theta)\) that rotates Cartesian coordinates clockwise by \(\theta\) transforms the diagonalized tensor into a nondiagonal form:
\[\begin{pmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{pmatrix}\xrightarrow{R(\theta)}\begin{pmatrix}\lambda_{1}\cos^ {2}\theta+\lambda_{2}\sin^{2}\theta&(\lambda_{1}-\lambda_{2})\sin\theta\cos \theta\\ (\lambda_{1}-\lambda_{2})\sin\theta\cos\theta&\lambda_{1}\sin^{2}\theta+ \lambda_{2}\cos^{2}\theta\end{pmatrix}. \tag{13}\]
Thus, \(\tilde{L}_{xy}=\tilde{L}_{yx}\) can be either positive or negative depending on the selection of Cartesian coordinates. An example of this category is a single-\(\mathbf{q}\) helix, in which the inductance tensor is diagonalized, for instance, when the \(x\)-axis is chosen parallel to the helical-\(\mathbf{q}\) vector. An important feature of an ideal single-\(\mathbf{q}\) helix is that the local magnetic moments show no modulation along the direction perpendicular to \(\mathbf{q}\). Hence, no STT effect is expected for the current along the \(y\)-axis, resulting in \(\lambda_{2}=0\). Thus, for the case of an ideal single-\(\mathbf{q}\) helix, the diagonalized form and its orthogonal transformation are given as:
\[\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\xrightarrow{R(\theta)}\begin{pmatrix}\cos^{2}\theta&\sin \theta\cos\theta\\ \sin\theta\cos\theta&\sin^{2}\theta\end{pmatrix}. \tag{14}\]
Figure 4 displays the comparison between the numerically obtained inductivity tensors of various \(\mathbf{q}\)-direction helices and the orthogonal transformation of \(\lambda_{1}=3.11\times 10^{-21}\) H m and \(\lambda_{2}=0\), which is an approximate inductivity tensor of the single-\(\mathbf{q}\) helix with \(\theta=0^{\circ}\)
Figure 4: **Numerically obtained inductivity tensors of helical magnetic textures for various \(\mathbf{q}\)-directions.** The \(\theta=0\) represents the \(\mathbf{q}\)-direction parallel to the \(x\) axis. The solid symbols are the data obtained by the micromagnetic simulations, and the solid curves represent the corresponding trigonometric functions multiplied by \(\lambda_{1}=3.11\times 10^{-21}\) H m. The simulations were done for \(\beta=0\).
(Fig. 2a). Although the simulated single-\(\mathbf{q}\) helices are more or less affected by random disorder and the open boundaries of the system, the overall tendency is well reproduced by orthogonal transformation. This observation demonstrates that the above arguments based on orthogonal transformation of the symmetric inductivity tensor are helpful when considering a single-\(\mathbf{q}\) helix with arbitrary \(\mathbf{q}\) direction.
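The comparison in Fig. 4 can be reproduced numerically from the single measured eigenvalue; the short sketch below rotates \(\mathrm{diag}(\lambda_{1},0)\) with \(\lambda_{1}=3.11\times 10^{-21}\) H m and prints the tensor components expected for each \(\mathbf{q}\) direction.

```python
import numpy as np

LAMBDA1 = 3.11e-21  # H m, inductivity of the theta = 0 helix (Fig. 2a)

def rotated_tensor(theta_deg: float, lam1: float = LAMBDA1, lam2: float = 0.0) -> np.ndarray:
    """Inductivity tensor of an ideal single-q helix whose q-vector makes an
    angle theta with the x axis: R(theta) diag(lam1, lam2) R(theta)^T."""
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return R @ np.diag([lam1, lam2]) @ R.T

for theta in (0, 20, -20, 45, -45):
    L = rotated_tensor(theta)
    print(theta, L[0, 0], L[1, 1], L[0, 1])  # compare with the solid curves in Fig. 4
```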
For a more complicated magnetic texture, positive \(\lambda_{1}\) and \(\lambda_{2}\) with \(\lambda_{1}\neq\lambda_{2}\) may be expected. For instance, a multidomain state of single-\(\mathbf{q}\) helices obviously belongs to this category. In contrast, it is less obvious which long-range ordered states belong to this category. A likely candidate is a magnetic texture that has multiple \(\mathbf{q}\) vectors with different wavenumbers, such as those observed in EuAg\({}_{4}\)As\({}_{2}\)[34] and EuAl\({}_{4}\)[35]; however, such an anisotropic magnetic texture is beyond the scope of our model Hamiltonian based on the continuum approximation [see Eq. (15) in the Methods section].
To conclude, we have revealed analytically and numerically the symmetry of the emergent inductance tensor exhibited by pinned magnetic textures. We focused on the adiabatic limit, where the inductance tensor is well defined by \(e_{i}=\tilde{L}_{ik}\frac{\mathrm{d}j_{k}}{\mathrm{d}t}\). We thus found that the inductance tensor is a real symmetric tensor, and hence, the presence and magnitude of the transverse component are determined by the degree to which the measurement axis is tilted from the principal axis that generates the diagonalized tensor. As a natural consequence of the real symmetric tensor, the transverse component does not change sign with respect to the magnetic field reversal. These fundamental aspects of the emergent inductance tensor will be useful when exploring the transverse inductive response in a magnetic texture. However, it must also be noted that when nonadiabaticity is not negligible, the electric response produced by magnetic textures may not be described by an inductance tensor in the strict sense defined by Eq. (4).
## Methods
### Numerical model
In this study, we consider long-period helical magnetic textures that are stabilized by the Dzyaloshinskii-Moriya (DM) interaction [36; 37]. We consider both a clean system without any disorder and dirty systems including randomly distributed disorder. Our model
Hamiltonian is:
\[\begin{split}\mathscr{H}=&\int\frac{\mathrm{d}^{3}r}{a^{ 3}}\left[\frac{J}{2}(\nabla\mathbf{m})^{2}+D\mathbf{m}\cdot(\nabla\times\mathbf{m})\right]\\ &-\sum_{k\in\Lambda}\int_{V_{k}}\mathrm{d}^{3}r\:K_{\mathrm{imp}} (\mathbf{m}_{k}\cdot\mathbf{n}_{\mathrm{imp},k})^{2}\end{split} \tag{15}\]
where \(J\) is the Heisenberg exchange energy, \(D\) is the DM interaction and \(a\) is the lattice constant. The extrinsic pinning effect is controlled by the last term of Eq. (15), which is introduced to randomly selected cells to break the translational symmetry: \(K_{\mathrm{imp}}(>0)\) represents the magnetic-easy-axis anisotropy along a randomly chosen direction, \(\mathbf{n}_{\mathrm{imp},k}\), at the \(k\)-th cell (the cell volume \(V_{k}\) is \(3^{3}\) nm\({}^{3}\)), and \(\Lambda\) is a set of random numbers. The disorder density displayed in Table 1 represents the ratio of the number of cells with finite \(K_{\mathrm{imp}}\) to the total number of cells (243\(\times\)243).
When simulating the current-induced dynamics of a given helical magnetic structure, we insert the spin Hamiltonian into the following Landau-Lifshitz-Gilbert (LLG) equation [33]:
\[\begin{split}\frac{\mathrm{d}\mathbf{m}_{\mathbf{r}}(t)}{\mathrm{d}t}=- \frac{|\gamma|}{1+\alpha^{2}}\frac{\mathrm{d}\mathscr{H}}{\mathrm{d}\mathbf{m}_{ \mathbf{r}}}\times\mathbf{m}_{\mathbf{r}}-\frac{\alpha|\gamma|}{1+\alpha^{2}}\left[\mathbf{m}_ {\mathbf{r}}\times\left(\frac{\mathrm{d}\mathscr{H}}{\mathrm{d}\mathbf{m}_{\mathbf{r}}} \times\mathbf{m}_{\mathbf{r}}\right)\right]\\ +\frac{1}{1+\alpha^{2}}\{(1+\beta\alpha)\mathbf{m}_{\mathbf{r}}\times[ \mathbf{m}_{\mathbf{r}}\times(\mathbf{u}\cdot\mathbf{\nabla})\mathbf{m}_{\mathbf{r}}]\\ +(\beta-\alpha)[\mathbf{m}_{\mathbf{r}}\times(\mathbf{u}\cdot\mathbf{\nabla})\mathbf{ m}_{\mathbf{r}}]\},\end{split} \tag{16}\]
where \(\mathbf{u}\) represents the spin drift velocity, \(\alpha\) is the Gilbert damping constant, \(\beta\) is a dimensionless constant that characterizes the nonadiabatic electron spin dynamics, and \(\gamma\) (\(>0\)) is the gyromagnetic ratio; \(\mathbf{u}\) is related to the electric current density \(\mathbf{j}\) by \(\mathbf{u}=\frac{P\mu_{\mathrm{B}}}{2|e|M_{\mathrm{s}}(1+\beta^{2})}\mathbf{j}\), where \(\mu_{\mathrm{B}}\) is the Bohr magneton and \(M_{\mathrm{s}}\) is the saturation magnetization. When implementing the micromagnetic simulation, we use the open software MuMax3 [38; 39]. We choose the following parameter set: \(J/(2a^{3})=1.8\times 10^{-11}\) J m\({}^{-1}\), \(D/a^{3}=2.8\times 10^{-3}\) J m\({}^{-2}\), \(M_{\mathrm{s}}=2.45\times 10^{5}\) A m\({}^{-1}\), \(P=1\), and \(\alpha=0.04\).
In the simulation, we apply a current density of a sufficiently small magnitude so that the magnetic system is certainly in the linear-response regime; that is, with respect to the input alternating electric current along the \(x\) or \(y\) direction, \(j_{i}(t)=j_{0,i}\sin\omega t\) (\(i=x,y\)), the magnetic system is in the pinned regime, and the output AC emergent voltage, \(V_{e,i}(t)\), is \(\propto j_{0,k}\omega\cos\omega t\) (\(i,k=x,y\)). Based on these observations, \(L_{ij}\) is derived from the following equations:
\[V_{e,i}(t)=\langle e_{i}(t)\rangle\ell=L_{ij}\frac{\mathrm{d}(I_{j}(t))}{ \mathrm{d}t}, \tag{17}\]
where \(\langle\cdots\rangle\) denotes a spatially averaged value, the system length \(\ell\) is \(243\times 3\) nm, and \(I=jS\) with the cross-section area \(S=243\times 1\times 3^{2}\) nm\({}^{2}\). In the present frequency range (\(\leq\) 100 MHz), it is confirmed that the inductivity is independent of \(\omega\) (i.e., \(\langle e_{i}\rangle\propto\omega\)) (Fig. S2) and the \(\alpha\) dependence of the numerical results is negligibly small (Fig. S3) (see also Supplementary Note 4). The numerical accuracy of MuMax3 is \(\Delta\mathbf{m}/|\mathbf{m}|\sim 10^{-7}\), and the typical increment in one time step (4 ps) is \(\sim\)\(10^{-5}\) under the current application of \(\sim\)\(10^{10}\) A m\({}^{-2}\). This finite accuracy eventually gives rise to an uncertainty of \(\sim\)\(10^{-23}\) H m in the calculated inductivity.
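As an illustration of how Eq. (17) can be used in post-processing, the following minimal sketch extracts \(L_{ij}\) from a simulated voltage trace by a least-squares fit, assuming a purely inductive response to a sinusoidal drive \(I_{j}(t)=I_{0}\sin\omega t\); it is not part of the MuMax3 simulation itself, and the drive amplitude, frequency, and inductance value are hypothetical.

```python
import numpy as np

def extract_inductance(t, v_i, i0, omega):
    """Least-squares fit of V_i(t) to L * dI_j/dt for the drive I_j(t) = i0*sin(omega*t)."""
    didt = i0 * omega * np.cos(omega * t)          # time derivative of the drive current
    return np.dot(v_i, didt) / np.dot(didt, didt)  # L minimizing sum_t (v_i - L*didt)^2

# Synthetic example mimicking the linear-response regime (all values hypothetical)
omega = 2 * np.pi * 10e6                           # 10 MHz drive
t = np.linspace(0.0, 3 * 2 * np.pi / omega, 3000)  # three drive periods
i0, L_true = 1e-3, 1e-8                            # peak current [A], assumed inductance [H]
v = L_true * i0 * omega * np.cos(omega * t) + 1e-12 * np.random.randn(t.size)
print(extract_inductance(t, v, i0, omega))         # ~1e-8 H
```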
In the numerical simulation, a uniform current density is considered to understand fundamental aspects of the inductivity tensor. On the other hand, the local \(\rho_{xx}\) and \(\rho_{yx}\) may be non-uniform in a real material, reflecting spatial variations in magnetic textures. Nevertheless, the uniform current is a good approximation as long as \(\langle\rho_{xx}\rangle\gg\delta\rho_{xx},\delta\rho_{yx}\), where \(\delta\rho_{xx}\) and \(\delta\rho_{yx}\) represent the magnitude of the spatial variations. For instance, in the chiral magnet MnSi at 10 K, the presence or absence of the metastable skyrmion lattice changes \(\rho_{xx}\) and \(\rho_{yx}\) by \(\approx\)50 n\(\Omega\) cm and \(\approx\)30 n\(\Omega\) cm, respectively, whereas \(\rho_{xx}\approx 5\)\(\mu\Omega\) cm [40]. Such magnetic-texture-dependent \(\rho_{xx}\) and \(\rho_{yx}\) imply that \(\langle\rho_{xx}\rangle\gg\delta\rho_{xx},\delta\rho_{yx}\) holds, although the precise estimation of the spatial variations is experimentally difficult; thus, the assumption of current uniformity is well justified. If \(\delta\rho_{xx}\) and \(\delta\rho_{yx}\) are significant, the current distribution should be determined self-consistently; for instance, see [41].
### Initial-state preparation
To obtain various metastable magnetic textures, a pristine helical texture with a different oblique angle of the helical \(\mathbf{q}\)-vector, a random spin configuration, or an SkL is prepared as an initial state and then relaxed under zero current. Note that imposing the open-boundary condition and introducing impurity sites are key in obtaining the intended magnetic textures.
### Data availability
The data used in this work are available from the corresponding author upon reasonable request.
###### Acknowledgements.
The authors thank N. Nagaosa and Y. Fujishiro for their valuable discussions. This work was partially supported by JSPS KAKENHI (Grants No. 20K03810, No. 18H05225, No. 23K03291 and No. 21H04442), JST CREST (Grants No. JPMJCR1874 and No. JPMJCR20T1).
## Competing interests
The authors declare no competing interests.
## Author contributions
S.F. conducted the calculations and analyzed the data. F.K. conceived the project and wrote the draft with S.F. and W.K. All the authors discussed the results and commented on the manuscript.
|
2309.02854 | A Critical Review of Common Log Data Sets Used for Evaluation of
Sequence-based Anomaly Detection Techniques | Log data store event execution patterns that correspond to underlying
workflows of systems or applications. While most logs are informative, log data
also include artifacts that indicate failures or incidents. Accordingly, log
data are often used to evaluate anomaly detection techniques that aim to
automatically disclose unexpected or otherwise relevant system behavior
patterns. Recently, detection approaches leveraging deep learning have
increasingly focused on anomalies that manifest as changes of sequential
patterns within otherwise normal event traces. Several publicly available data
sets, such as HDFS, BGL, Thunderbird, OpenStack, and Hadoop, have since become
standards for evaluating these anomaly detection techniques, however, the
appropriateness of these data sets has not been closely investigated in the
past. In this paper we therefore analyze six publicly available log data sets
with focus on the manifestations of anomalies and simple techniques for their
detection. Our findings suggest that most anomalies are not directly related to
sequential manifestations and that advanced detection techniques are not
required to achieve high detection rates on these data sets. | Max Landauer, Florian Skopik, Markus Wurzenberger | 2023-09-06T09:31:17Z | http://arxiv.org/abs/2309.02854v1 | A Critical Review of Common Log Data Sets Used for Evaluation of Sequence-based Anomaly Detection Techniques
###### Abstract
Log data store event execution patterns that correspond to underlying workflows of systems or applications. While most logs are informative, log data also include artifacts that indicate failures or incidents. Accordingly, log data are often used to evaluate anomaly detection techniques that aim to automatically disclose unexpected or otherwise relevant system behavior patterns. Recently, detection approaches leveraging deep learning have increasingly focused on anomalies that manifest as changes of sequential patterns within otherwise normal event traces. Several publicly available data sets, such as HDFS, BGL, Thunderbird, OpenStack, and Hadoop, have since become standards for evaluating these anomaly detection techniques, however, the appropriateness of these data sets has not been closely investigated in the past. In this paper we therefore analyze six publicly available log data sets with focus on the manifestations of anomalies and simple techniques for their detection. Our findings suggest that most anomalies are not directly related to sequential manifestations and that advanced detection techniques are not required to achieve high detection rates on these data sets.
log data analysis, anomaly detection, data sets
## I Introduction
Sound evaluations of machine learning algorithms are essential to validate their correct functioning, measure the accuracy of classifications, and conduct comparative studies with state of the art algorithms. Data sets are the basis for these evaluations and the selection of appropriate data sets is key to obtain representative results with general validity.
To be suitable for the purpose of evaluations, data sets are generally expected to fulfill several quality criteria [1], such as correctness, completeness, relevance, timeliness, realism, etc. When it comes to the evaluation of anomaly detection techniques that aim to identify rare and unexpected events in otherwise normal data [2], data sets and specifically the manifestations of anomalies must also meet the characteristics suitable for the type of detection under test. For example, anomaly detection techniques that process and analyze data instances as chronologically ordered sequences need data sets where anomalies manifest as changes in sequential patterns rather than, for example, appearance of entirely new sequence elements that could be more effectively detected by other approaches.
There is an active research community that focuses on anomaly detection in system log data. Logs create a permanent record of almost all activities that take place on a system or within an application and are therefore a valuable source of information when it comes to failure analysis or forensic investigations of cyber incidents [2, 3]. Due to their large size and repeating patterns, anomaly detection approaches are capable of capturing the normal system behavior as reflected in the log data, disclose any sudden deviations from these normal behavior models as anomalies, and trigger alerts for system operators in case that anomalous states are detected. Given that log data keeps track of the underlying workflow of applications, generated logs often form similar patterns of chronologically ordered event sequences. Logically, it is fair to assume that undesired activities (failures, attacks, etc.) also manifest as sequences and are thus detectable within the sequential patterns, such as changes of positions of certain events [4]. Accordingly, many anomaly detection algorithms that make use of sequential patterns have been proposed in the past; specifically, deep learning methods such as Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs) that are also commonly used for text processing have been widely used in recent years [3]. Thereby, studies showed that approaches based on deep learning are generally able to outperform conventional machine learning methods such as clustering [5].
Accordingly, it is easy to get the impression that advanced detection techniques are necessary to achieve high detection performance on log data sets that are commonly used in scientific evaluations [3]. However, during manual analysis of these data sets, we noticed that many anomalies are either straightforward to detect or not directly related to changes of sequential patterns. Inspired by the work of Wolsing et al. [6], who show that SIMPLE (Sufficient, Independent, Meaningful, Portable, Local & Efficient) detection methods achieve competitive detection rates in comparison to complex approaches using neural networks in the field of industrial control systems, we develop a set of simple yet effective and broadly applicable detection techniques and evaluate them on commonly used log data sets, including detection of previously unseen event types and sub-sequences, unusually short or long sequences, sequences with deviating event occurrence counts, changes of event ordering, and delayed event occurrences. With this experimental study we aim to answer the following two research questions: _How do anomalies manifest themselves in common log data sets? What are drawbacks that render these data sets inadequate for evaluation of sequence-based anomaly detection techniques?_
We point out that this study does not recreate or compare any results from state of the art approaches, which has already been carried out in other surveys [5, 7, 8]. Instead our focus lies on log data sets and their appropriateness for evaluation of anomaly detection techniques. We provide the scripts to reproduce the results presented in this paper online1. We summarize our contributions as follows:
Footnote 1: Anomaly-detection-log-datasets GitHub repository available at [https://github.com/air-acci/anomaly-detection-log-datasets](https://github.com/air-acci/anomaly-detection-log-datasets) (accessed 04-09-2023)
* A review of common log data sets and their properties relevant for anomaly detection,
* a baseline evaluation using a set of simple detection techniques, and
* a critical discussion on the suitability of the data sets for evaluation of sequence-based anomaly detection approaches.
The remainder of this paper is structured as follows. Section II provides some background of this research area and reviews related publications. Section III describes the log data sets covered in this study and highlights important data properties. Section IV first outlines the detection techniques applied on the data sets and then provides the results of our evaluation study. Section V comprises a critical discussion of our findings with respect to the appropriateness of the data sets for scientific evaluations. Section VI concludes the paper.
## II Background & Related Work
In their survey on anomaly detection in log data with deep learning, Landauer et al. [3] found that only a few publicly available log data sets are used by almost all of the reviewed publications. The most commonly used data sets stem from applications that produce heterogeneous log events, i.e., each log message corresponds to a specific type of event, comprising static parts and variable parameters following a specific syntax. Consider the logs in the top of Fig. 1 as an example. In each log line except for the second one, "Receiving block" as well as "src:" and "dest:" are static, but the block identifier as well as source and destination address are variable. Event templates (sometimes also referred to as log key or simply events [5]) are used to describe these event syntaxes and can be automatically generated by algorithms such as Drain [9]. The aforementioned sample lines correspond to event type 5, while the second line in the sample logs corresponds to event type 22 that are visible in the second block in Fig. 1.
The first step in the common anomaly detection workflow depicted in Fig. 1 consists of parsing the log data with templates, for the purpose of (i) assigning an event identifier to each log line and (ii) extracting parameters in a structured way. Several of the commonly used log files involve a so-called sequence identifier that can be extracted as one of the parameters and used to group events together that belong to the same process or trace. In the sample logs, this identifier corresponds to a data block (starting with "blk_" and followed by a unique ID) that is processed by the application. When such identifiers are not available, sequences are sometimes also generated by sliding a window of a fixed size over the parsed data set [7]. Either way, the resulting sequences consist of ordered lists of event types that are represented as distinct integers in Fig. 1. Some approaches additionally apply a sliding window on the sequences (a window of size 5 with step width of 2 is displayed in the figure as an example) [7], compute event counts in sequences or windows [10], transform log messages into numeric vectors with embedding techniques [4], apply weighting schemes such as TF-IDF [11], use neural networks on the raw log messages [12], etc. Eventually, the data fed into anomaly detection systems usually directly relates to event sequences, as sequential patterns are assumed to be the key indicator for anomalies.
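As a concrete illustration of the grouping step in this workflow, the following minimal sketch (not taken from any of the cited tools) collects parsed HDFS-style events into per-block sequences and unordered count vectors; the block-identifier regex and the event-type IDs are illustrative assumptions rather than the exact templates used by the original authors.

```python
import re
from collections import Counter, defaultdict

BLOCK_RE = re.compile(r"blk_-?\d+")  # illustrative HDFS block identifier pattern

def group_sequences(parsed_lines):
    """parsed_lines: iterable of (event_type_id, raw_message) in chronological order."""
    sequences = defaultdict(list)
    for event_id, message in parsed_lines:
        for block in BLOCK_RE.findall(message):  # a line may mention several blocks
            sequences[block].append(event_id)
    return sequences

def count_vectors(sequences):
    """Unordered event-count representation, e.g., for clustering or count-based detection."""
    return {key: Counter(seq) for key, seq in sequences.items()}

lines = [(5, "Receiving block blk_1 src: /10.0.0.1 dest: /10.0.0.2"),
         (22, "blk_1 terminating"),
         (5, "Receiving block blk_2 src: /10.0.0.3 dest: /10.0.0.4")]
print(group_sequences(lines))  # blk_1 -> [5, 22], blk_2 -> [5]
```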
Existing surveys on sequence-based anomaly detection techniques generally focus on deep learning applications. For example, Le et al. [7] quantitatively compare five state of the art anomaly detection models on four data sets with focus on data pre-processing strategies, data set imbalance, and robustness to noise. Their results suggest that detection capabilities of complex models are heavily influenced by these aspects and that actually achieved detection rates are often not as good as expected as a consequence. Chen et al. [5] provide another quantitative survey that involves four unsupervised and two supervised deep learning approaches that are evaluated on two data sets. They found that anomalies incorrectly inserted in training data as well as unknown event types can strongly impact detection capabilities. In addition, they point out that conventional machine learning models such as clustering, PCA, or invariant mining, are typically more efficient than their deep learning counterparts in terms of runtime.
Landauer et al. [3] provide a qualitative survey of 62
state of the art approaches. Their work emphasizes the diversity of deep learning models, pre-processing strategies, and methods to transform log events into representations suitable for ingestion by neural networks. Yadav et al. [8] provide another qualitative survey of detection approaches and discuss common challenges, model architectures, and pre-processing strategies. They also summarize the data sets used in scientific publications, but do not delve into their individual properties.
Fig. 1: Workflow for anomaly detection in log data.
The focus of the aforementioned publications always lies on detection techniques; we are not aware of any works that critically analyze the data sets used to evaluate these approaches. Kenyon et al. [1] review the appropriateness of publicly available data sets in the intrusion detection domain; however, these are not the data sets typically used to evaluate sequence-based anomaly detection techniques [3]. With this paper we therefore aim to close this gap and support researchers in better understanding the data they use for evaluations.
## III Analysis of Log Data Sets
This section provides a general description of the most common log data sets used in scientific evaluations. The data sets are selected and ordered based on the survey by Landauer et al. [3]; furthermore, we include one additional data set that we suggest as a potentially useful alternative. The following sections go over one data set after another and outline their origin and properties. Thereby, we generally refer to Table I, which quantitatively summarizes the data sets with respect to the distribution of normal and anomalous instances.
### _Hdfs_
The HDFS log data set is the most frequently used data set for evaluations of anomaly detection techniques [3] and thus the focus point of this study. The logs stem from the Hadoop Distributed File System (HDFS), which allows storage and processing of large files. Each log event contains one or more data block identifiers that enable grouping of events into sequences as discussed in the previous section. In fact, the sample logs shown in Fig. 1 are taken from the HDFS data set. The main idea of detecting anomalies in this data set is that some data blocks are not processed correctly by the system, which is reflected by the log events generated as the block goes through an abnormal execution flow, in which case the entire sequence should be detected as anomalous [13]. Note that lines containing multiple block identifiers have to be replicated for each corresponding sequence, thus the number of parsed events exceeds the number of lines (cf. Table I).
The data set was originally collected in 2008 by Xu et al. [13, 14, 15] on a Hadoop cluster comprising more than 200 nodes. The data set was labeled by domain experts on the granularity of individual block identifiers. As stated by the original authors, labels were assigned to more than 575,000 event sequences by clustering them into event count vectors, which reduced their size to the manageable amount of only 680 unique vectors [14]. Note that we obtain a total of 666 unique count vectors (cf. Table I), likely due to the fact that publicly available event templates do not fully align with the templates used by the original authors. Even though the original logs are still available, the data set has also been provided by the log data set collection project Loghub [16]. A close inspection of that data set reveals that around 22,000 lines are missing in comparison to the original data set for unknown reasons. In addition, many authors rely on parsed versions of the HDFS data set that are available in public repositories, such as the LogDeep project2. Given that the parsed versions lack sequence identifiers, it is difficult to ascertain their completeness. Moreover, since this version of the data set only comprises sequences of event types but lacks their timestamps, some authors incorrectly assume that timestamp information is not available even though it is present in the original logs [7].
Footnote 2: Available at [https://github.com/donglee-afar/logdeep](https://github.com/donglee-afar/logdeep)
Figure 2 depicts the frequencies of event types represented as integer values in anomalous (top) and normal (bottom) sequences, sorted in ascending order. The plot shows that the eleven leftmost event types have a similar distribution of relative occurrence frequencies in both normal and anomalous sequences (note that their absolute numbers diverge as normal sequences are more frequent). However, many event types only occur in anomalous sequences and can thus be regarded as basic indicators for anomalies. On the other hand, event type 33 can be seen as an indicator for normal events, even though it is relatively rare, and event type 20 occurs in both normal and anomalous sequences, but is more likely to indicate anomalies as it is far more frequently part of anomalous sequences.
Figure 3 shows that many normal and anomalous sequences also differ in length. In particular, while all normal event sequences have lengths larger than 12, more than a third of the anomalous sequences have short lengths of 2, 3, or 4.
Fig. 3: Distribution of HDFS log event sequence lengths.
Fig. 2: Event frequencies in HDFS log event sequences.
Another important aspect to consider when evaluating anomaly detection techniques with the HDFS data set is that many sequences are identical. As stated in Table I, the total number of 575,061 event sequences can be reduced to only 26,814 (4.7% of total sequences) unique sequences. Even more peculiar is the fact that these unique sequences comprise only 666 (2.5% of unique sequences or 0.1% of total sequences) unique count vectors, i.e., lists of event frequencies that are independent from event positions in sequences. One of the reasons for this is that events that occur more or less simultaneously end up in random order. Consider the most common normal sequences displayed in Fig. 4, where the three most common sequences are identical except for the position of event type 22 among the first four events, which usually occur simultaneously according to their timestamp. Given that this effect also occurs with other event types in the sequence, the number of possible event combinations exhibited by otherwise identical sequences becomes enormous, thus resulting in the large gap between unique sequences and unique count vectors. Figure 4 also shows the compositions of the most common anomalous sequences, which only comprise few elements. While event type 7 acts as an indicator for anomalies, some of these sequences only involve normal events and thus need to be detected through their short lengths. The last two sequences show that anomalous sequences generally involve the same patterns as normal sequences, but contain additional event types such as 28 or 20 that are indicators for anomalies as visible in Fig. 2.
### _BlueGene/L (BGL)_
The BlueGene/L (BGL) log data set is provided by the Computer Failure Data Repository3 (CFDR) and was originally described by Oliner et al. [17]. The log data set stems from a supercomputer located at the Lawrence Livermore National Laboratory (LLNL) that initially comprised 32,768 dual-processor nodes and was upgraded to 65,536 nodes during log data collection in 2005 and 2006. We refer to the publication by Taerat et al. [18] for a detailed technical description of the system. Similar to the HDFS log data set, Loghub [16] provides another version of the data set where a few lines are missing or labeled differently for unknown reasons.
Footnote 3: Available at [https://www.usenix.org/cfdr](https://www.usenix.org/cfdr)
Aside from log messages, the data set contains location identifiers for components such as individual compute nodes, I/O nodes, service cards, links cards, etc., that can be used to group events into sequences. Interestingly, only some authors make use of the node identifiers to generate sequences [7], while others state that it is not possible to distinguish different job executions and thus simply partition the whole data set into time windows [5]. According to our analysis, leveraging the node identifiers to group events into sequences is a reasonable approach since many similar sequences emerge, indicating that different nodes go through similar event execution flows.
One major difference compared to the HDFS log data set is that labels are provided on the granularity of events rather than sequences. Thereby, the labeled events form almost completely
disjoint sets, i.e., the same event types are consistently labeled throughout the data set. In other words, the label of a specific event does not depend on the context of occurrence of the event but can be inferred from the event message itself. To enable comparability in Table I, we therefore adopt the approach of existing works [5, 7] that tackle this issue by considering the whole sequence as anomalous if it contains at least one anomalous event. Analogous to our analysis of the HDFS file, we plot the event frequencies of normal and anomalous sequences in Fig. 5, which indicates that a large fraction of events act as basic indicators for either normal or anomalous sequences.
Fig. 4: Top seven most common normal (top) and anomalous (bottom) sequences and their respective occurrence counts in the HDFS data set.
Figure 6 shows the distribution of sequence lengths in the BGL data set. Other than for the HDFS logs, there is no simple way to discern anomalous and normal sequences based on their lengths; however, the plot reveals that there are a few long anomalous sequences comprising more than 10,000 events. Given that a single labeled event in an otherwise normal sequence is enough so that the whole sequence is regarded anomalous, it stands to reason to partition long sequences into contiguous sub-sequences with individual labels. To keep the evaluation consistent across data sets, we leave the sequences unchanged for the purpose of this paper.
### _Thunderbird_
Thunderbird is yet another supercomputer log data set provided by the Computer Failure Data Repository (CFDR) and originally described by Oliner et al. [17]. The data set was collected at Sandia National Labs (SNL) at the same time as the BGL data set, but involves considerably more lines and distinct event types. Due to the high complexity of the data set, authors commonly only use a fraction of the data set, such as the first 5 million [7] or 20 million lines [19]. For the purpose of this paper, we consider the whole data set for completeness.
While there are sequence identifiers in the data, they are usually not leveraged by existing works, which resort to window- or time-based grouping [7]. These sequence identifiers mainly correspond to node or interface names and may be divided into groups: 77.1% comprise two characters and digits (e.g., "bn251"), 17.1% comprise a period and digits (e.g., ".741"), 4.8% comprise complex descriptors (e.g., "dr13iblsw2"), 0.4% are user names (e.g., "tbird-admin1"), 0.3% start with "ibr" (e.g., "ibr6-northern"), 0.2% start with number signs (e.g., "#31#"), and 0.1% are IP addresses; sequence patterns have different characteristics across groups, e.g., identifiers starting with periods generally involve shorter sequences than others. As for the BGL data set, the labels of the data set are assigned to single events. Analogous to the BGL data set, our analysis shows that the events labeled as anomalous are discernible from normal events by the syntax of the log message even without considering their context of occurrence, i.e., events occurring before or after.
### _OpenStack_
The OpenStack log data set was generated by the authors of DeepLog [20], one of the most influential papers in the research area around anomaly detection in log data [3]. Other than the previous log data sets, the OpenStack data set was synthetically produced for the purpose of evaluating anomaly detection techniques. In particular, the authors executed a script that repeatedly carries out tasks from a pre-defined list, such as starting, stopping, pausing, and resuming virtual machines. At specific points in time while running this script, the authors injected three types of anomalies, including a timeout and errors during destruction and cleanup of virtual machines. Some of the logs contain identifiers for virtual machine instances that enable log grouping and the formation of event sequences. However, as visible in Table I, only 27.8% of all lines contain such an identifier; all other lines are omitted from our analysis. Unfortunately, the original data set seems to be no more available. We therefore resort to the version provided by Loghub [16].
As Table I shows, there is a high overlap of 98.5% between normal and anomalous sequences, i.e., these sequences are
identical. This renders application of anomaly detection techniques that only consider event sequences without contextual information infeasible. This fact by itself is not surprising: According to the original authors, anomalies should manifest by the time taken to build instances, which is reflected by the inter-arrival time of events in sequences; however, this does not seem to be the case for the log data at hand. Figure 7 depicts the distribution of inter-arrival times between all consecutive events as boxplots for normal and anomalous sequences, only to reveal that there are no significant deviations with respect to event inter-arrival times, specifically the times to reach events corresponding to starting a virtual machine (event type 22), stopping a virtual machine (event type 23), and deleting a virtual machine (event type 24). Only 5 out of 198 anomalous sequences show an increase of inter-arrival time between event type 3 and event type 22. Similar issues regarding anomalies have also been observed by other authors using the OpenStack data set [12, 21]. Therefore, we do not further investigate the data set in this study.
Fig. 5: Event frequencies in BGL log event sequences.
Fig. 6: Distribution of BGL log event sequence lengths.
Fig. 7: Event inter-arrival times in the OpenStack data set.
### _Hadoop_
Similar to the OpenStack data set, the Hadoop data set was synthetically generated in a lab environment to evaluate log sequence clustering [22]. The authors follow the approach from Shang et al. [23] and execute WordCount and PageRank applications on a Hadoop cluster. After some normal runs, the authors manually inject three distinctly labeled anomaly cases by turning off the server (machine down), disconnecting the server (network disconnected), and filling up the hard disk (disk full). While the original data set is not available anymore, a Hadoop data set containing the same anomalies is provided by Loghub [16]. Other than the previous data sets, logs from each run of an application are placed in separate files, which makes it easy to group events into sequences.
Another similarity to the OpenStack data set is that there is a high overlap between normal and anomalous sequences, with 83.2% of normal sequences having at least one identical counterpart in the anomalous set and 75.5% of anomalous sequences being identical to one or more normal sequences. Accordingly, analysis of sequences alone is not an adequate approach for anomaly detection. Figure 8 shows that the overall distribution of event frequencies is similar across all classes, except for few anomalous sequences that involve new event types that do not occur in the normal case. Closer inspection shows that many of the events only occurring in anomalous classes are the result of a single application run, for example, the log message "Failed to renew lease" is printed every single second in affected applications. Given that most sequences are identical, it is also not possible to recognize anomalies by their sequence lengths. We manually compared the event parameters of common identical sequences but were also unable to identify any differences that discern normal from anomalous instances. Despite these problems, we use this data set in our evaluation study to emphasize issues with misleading evaluation metrics.
### _Adfa_
The Australian Defence Force Academy Linux Dataset (ADFA-LD) was generated by Creech et al. [24] in 2013 to overcome issues with log data sets that were commonly used for evaluation of intrusion detection techniques at that time. Other than the aforementioned data sets that focus on system failures, the ADFA data set makes use of cyber attacks to generate anomalies in log data. In particular, the authors of the data set created a test environment with web applications vulnerable to known attacks and collected low-level system call logs during normal operation (e.g., web browsing or document processing) as well as execution of six attack cases.
Figure 9 shows that all event types that occur in the data set also occur in sequences of the normal class, i.e., there are no event types that act as indicators for specific attacks, except for a single occurrence of event type 173 occurring during the "Java Meterpreter" attack. The overall event frequency distributions show that the system calls generated by attacks differ
from the event frequencies during normal operation, which suggests that detection of anomalies and even classification of attack cases is feasible. We also analyzed the sequence lengths and confirm that anomalous sequences are not shorter or longer than normal ones.
Fig. 8: Event frequencies in Hadoop log event sequences.
Fig. 9: Event frequencies in ADFA log event sequences.
Even though this data set is rarely used for evaluation of anomaly detection techniques (not a single publication considered this data set in the survey by Landauer et al. [3]), we add the data set to our evaluation study in an attempt to propose a new data set for future evaluation that overcomes issues with commonly used data sets. In particular, using system calls avoids the need for parsing since the operations are represented as distinct integer numbers and thus reduces the influence of parsing on the detection accuracy. Moreover, system calls are generally ordered in a consistent way and are therefore not affected by permutations of simultaneously occurring events as it is the case in the HDFS log data set.
### _Data Set Complexity_
We compare the data sets described in the previous sections with respect to the complexities of their sequential patterns. To this end, we apply measures for entropy and repetitiveness.
#### Iii-G1 Entropy
We use entropy to measure how evenly distributed certain parts of the sequences (i.e., contiguous sub-sequences) are across the data set, where low entropy indicates that some of these parts are occurring much more often than others and high entropy corresponds to a more random distribution. Since entropy does not account for sequential ordering, we leverage \(N\)-grams of various sizes and compute the entropy for each \(N\) separately. To compensate for the varying numbers of distinct event types in the data sets, we also compute the normalized entropy that bases on the maximum entropy that is reached if all \(N\)-grams occur the same number of times. Figure 10 shows that the Thunderbird and ADFA data sets yield the highest total entropy, while the entropy computed for OpenStack data set is comparatively low and does not increase for higher \(N\). Considering also the normalized entropy, we see that the high total entropy of the Thunderbird data set is caused by the large number of distinct events. The ADFA data set yields the highest normalized entropy and is closely followed by the HDFS data set, especially for large \(N\). The normalized entropy of the OpenStack data set on the other hand starts out with a comparatively high value for \(N=1\), meaning that single event frequencies are more evenly distributed than in other data sets, but quickly diminishes for larger \(N\).
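The following sketch shows how this \(N\)-gram entropy and its normalized variant could be computed; it is a minimal example that assumes sequences are given as lists of integer event types, and normalizing by a uniform distribution over the observed \(N\)-grams is an assumption of this sketch rather than the exact published computation.

```python
import math
from collections import Counter

def ngram_entropy(sequences, n):
    """Return (total entropy, normalized entropy) of contiguous n-grams."""
    counts = Counter()
    for seq in sequences:
        counts.update(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    # maximum entropy is reached if all observed n-grams occur equally often
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy, entropy / max_entropy

print(ngram_entropy([[5, 5, 22, 5, 11], [5, 22, 5, 9]], 2))
```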
#### Iii-G2 Repetitiveness
Arguably, the number of distinct event types appearing in a data set is an indicator for its complexity as larger numbers of events have the potential to form more diverse patterns (the numbers of distinct events are stated in Table I). To assess whether they actually form such complex sequences or instead occur in the same patterns that repeat over and over, we leverage the Lempel-Ziv complexity [25]. In short, this measure counts the number of different contiguous sub-sequences encountered when processing all sequences from beginning to end, where already observed sub-sequences are stored in a dictionary for all sequences in a data set. Figure 11 plots the Lempel-Ziv complexity with respect to the total number of events within each consecutively processed sequence. The plot shows that all data sets except for the OpenStack data set roughly exhibit the same level of complexity, with the HDFS data set ending up with a slightly lower complexity than the other data sets. The OpenStack data set, however, breaks out of the overall trend at around 1,000 processed events, after which most of the subsequently analyzed sequences contain many already observed sub-sequences, causing the total complexity to level off.
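A minimal sketch of such a repetitiveness count is shown below: it scans each sequence, extends a phrase until it is no longer contained in the dictionary shared across sequences, and counts every newly added phrase. The exact bookkeeping of the original measure [25] may differ in details.

```python
def lempel_ziv_complexity(sequences):
    """Count distinct contiguous sub-sequences (phrases) over all sequences."""
    dictionary, complexity = set(), 0
    for seq in sequences:
        i = 0
        while i < len(seq):
            j = i + 1
            # extend the current phrase while it is already known
            while j <= len(seq) and tuple(seq[i:j]) in dictionary:
                j += 1
            dictionary.add(tuple(seq[i:j]))
            complexity += 1
            i = j
    return complexity

print(lempel_ziv_complexity([[1, 1, 2, 1, 1, 2, 3]]))  # 5 phrases: (1), (1,2), (1,1), (2), (3)
```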
## IV Evaluation Study
In this section we evaluate how anomalies manifest in common log data sets. We first outline the setup of our experiment and then briefly describe every detection technique we apply on the data sets before summarizing the results.
### _Setup_
The purpose of our study is to assess the appropriateness of five of the previously described data sets (HDFS, BGL, Thunderbird, Hadoop, and ADFA) for evaluating anomaly detection techniques leveraging event sequences. We therefore design an experiment that evaluates simple detection mechanisms on the data sets and measures whether they are sufficient to achieve competitive detection rates. Comparing different detection mechanisms with each other then allows us to better understand how anomalies manifest themselves in each data set. We ensure that the selected detection techniques are as generic as possible to be applicable to all sorts of data sets and simple enough so that it is easy to understand why specific instances are reported as anomalies. Similar to common deep learning methods [3], our detection techniques also only process grouped event type sequences without any contextual information other than the timestamp.
Fig. 11: Sequence repetitiveness measured with Lempel-Ziv complexity.
Fig. 10: Entropy (left) and normalized entropy (right) of N-grams.
The focus of our evaluation study lies on semi-supervised detection, where only instances of a fraction of the normal class are available for training. This is the most common scenario for anomaly detection as anomalies generally correspond to unexpected or unusual system behavior that is non-trivial to define in advance [3]. We follow the strategy from Du et al. [20] and randomly sample 1% of the normal sequences for training, except for the Hadoop data set where we use 10% since only 167 normal sequences are available. After training, we run the detector on the test data, which comprises the remaining normal sequences as well as the anomalous sequences, and measure its ability to discern these two classes. In particular, we count true positives (TP) as correctly detected anomalous sequences, true negatives (TN) as correctly undetected normal sequences, false positives (FP) as incorrectly detected normal sequences, and false negatives (FN) as incorrectly undetected anomalous sequences. Based on these counts we then compute precision (\(Prec=\frac{TP}{TP+FP}\)), recall or true positive rate (\(Rec=TPR=\frac{TP}{TP+FN}\)), true negative rate (\(TNR=\frac{TN}{TN+FP}\)), and F1 score (\(F1=\frac{2Prec\cdot Rec}{Prec+Rec}\)). We repeat sampling and evaluation 25 times for each data set and each detector to also capture the variance of our results. Our detection techniques rely either on none or on a single threshold for detection. In the following, we iterate over all thresholds between 0 and 1 in steps of 0.01 and select the one that maximizes F1. We investigate the influence of the threshold separately in Sect. IV-C2. Given that two of our data sets - BGL and Thunderbird - are labeled on the granularity of events rather than sequences, we additionally evaluate them by computing aforementioned metrics accordingly and present the results in Sect. IV-C3.
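The following sketch outlines this protocol with a generic detector interface; `fit_fn` and `score_fn` (returning anomaly scores in [0, 1]) are assumptions of the sketch, and the simple threshold sweep mirrors the procedure described above.

```python
import random

def evaluate(normal, anomalous, fit_fn, score_fn, train_ratio=0.01):
    """Semi-supervised evaluation: train on a random sample of normal sequences only."""
    idx = set(random.sample(range(len(normal)), max(1, int(train_ratio * len(normal)))))
    model = fit_fn([normal[i] for i in idx])
    test = [(s, 0) for i, s in enumerate(normal) if i not in idx] + [(s, 1) for s in anomalous]
    scores = [(score_fn(model, s), label) for s, label in test]
    best = (0.0, 0.0, 0.0, 0.0, 0.0)  # (F1, precision, recall, TNR, threshold)
    for t in (i / 100 for i in range(101)):
        tp = sum(1 for sc, y in scores if sc > t and y == 1)
        fp = sum(1 for sc, y in scores if sc > t and y == 0)
        fn = sum(1 for sc, y in scores if sc <= t and y == 1)
        tn = sum(1 for sc, y in scores if sc <= t and y == 0)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        tnr = tn / (tn + fp) if tn + fp else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        if f1 > best[0]:
            best = (f1, prec, rec, tnr, t)
    return best
```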
### _Detection Techniques_
This section describes each detection technique that is applied on the selected data sets.
#### Iv-B1 New Event Types
The detection for new event types monitors all events that appear in any sequence of the training data set to build a model of distinct event types that are known to occur during normal operation. In the detection phase, all sequences that contain one or more event types that are not known from the training phase are reported as anomalies. This is the most basic detection technique but has been effectively used in intrusion detection systems such as the AMiner [26]. It is expected to work well in data sets where normal and anomalous event types resemble disjoint sets, such as the BGL and Thunderbird data set, but also achieve good performance in data sets with many event types that act as indicators for anomalies, such as the HDFS data set.
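A minimal sketch of this technique, assuming sequences of integer event types and the detector interface used above, could look as follows:

```python
def fit_new_events(train_sequences):
    """Learn the set of event types observed during normal operation."""
    return {event for seq in train_sequences for event in seq}

def score_new_events(known_events, sequence):
    """Anomaly score 1.0 if the sequence contains any previously unseen event type."""
    return 1.0 if any(event not in known_events for event in sequence) else 0.0
```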
#### Iv-B2 Deviating Sequence Lengths
This detection technique learns the minimum and maximum sequence lengths of all sequences in the training data. Subsequently, all sequences in the test data set with lengths shorter than the minimum or longer than the maximum are reported as anomalies. This detection technique specifically aims to recognize unusually short sequences in the HDFS data set (cf. Fig. 3). We also consider the combination of this technique with the detection for new events, so that sequences that are reported by either one of those two techniques are considered anomalies.
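A corresponding sketch for the length-based detector simply memorizes the minimum and maximum training lengths:

```python
def fit_lengths(train_sequences):
    """Learn the range of sequence lengths observed during normal operation."""
    lengths = [len(seq) for seq in train_sequences]
    return min(lengths), max(lengths)

def score_lengths(bounds, sequence):
    """Anomaly score 1.0 for sequences outside the length range seen in training."""
    lo, hi = bounds
    return 0.0 if lo <= len(sequence) <= hi else 1.0
```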
#### Iv-B3 Event Count Vector Clustering (ECVC)
The idea behind this detection technique is that normal sequences that occur in the test data are similar to one or more sequences in the training data in terms of event types and their respective frequencies, while anomalous sequences are dissimilar to every sequence of the training data. For this purpose, we create event count vectors for sequences, where each vector index corresponds to a specific event type and the value at that index reflects the number of times the event type occurs in a sequence. After transforming all training sequences into count vectors, we use the \(L_{1}\) norm4 as a similarity metric that classifies count vectors from the test data set as anomalous if their similarity to all of the training instances is lower than a threshold. Note that we use the \(L_{1}\) norm due to the fact that it handles high-dimensional data better than higher-order norms [27] and has already been applied with count vectors for anomaly detection in user behavior patterns [28]. In the following, we also consider a variant of this method where the count vector indices are weighted higher when the corresponding event types only appear in few sequences, analogous to the well-known TF-IDF measure [29]. In addition, we combine this technique with detection based on new events and sequence lengths in the following, where sequences are detected as anomalous if any of the combined techniques reports them as such.
Footnote 4: Distance between vectors \(a\) and \(b\) is computed by \(\sum_{i}|a_{i}-b_{i}|\).
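The sketch below illustrates the core of this technique without IDF weighting; normalizing the \(L_{1}\) distance by the test sequence length is an assumption of this sketch, since the exact normalization can be chosen in different ways.

```python
from collections import Counter

def fit_ecvc(train_sequences):
    """Represent every training sequence by its event count vector."""
    return [Counter(seq) for seq in train_sequences]

def score_ecvc(train_vectors, sequence):
    """Smallest normalized L1 distance to any training count vector."""
    vec = Counter(sequence)
    best = float("inf")
    for ref in train_vectors:
        dist = sum(abs(vec[k] - ref[k]) for k in set(vec) | set(ref))
        best = min(best, dist / max(len(sequence), 1))
    return min(best, 1.0)  # compared against the detection threshold
```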
#### Iv-B4 N-grams
This detection technique runs a sliding window of size \(N\) at a step width of 1 over all training sequences and learns the ordered sub-sequences of event types inside the window (note that the term sub-sequences always refers to contiguous chunks of sequences in this paper). In the detection phase, a window of the same size slides over the test sequences and the sub-sequences inside the window are compared with the ones known from training. Forrest et al. [30] proposed to count the number of mismatching sub-sequences and determine the whole sequence as anomalous if the normalized count exceeds a certain threshold. The optimal value for \(N\) is non-trivial to determine and is highly dependent on the data. In our experiments, we therefore use 2, 3, and 10 as values for \(N\) as these window sizes have shown to be useful choices in other works [3, 20, 24, 30].
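A minimal sketch with a fixed window size \(N\) is given below; the fraction of unknown sub-sequences serves as the anomaly score that is compared against the threshold.

```python
def fit_ngrams(train_sequences, n=3):
    """Learn all contiguous sub-sequences of length n from the training data."""
    grams = set()
    for seq in train_sequences:
        grams.update(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    return n, grams

def score_ngrams(model, sequence):
    """Fraction of windows in the test sequence that never occurred in training."""
    n, grams = model
    windows = [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]
    if not windows:
        return 0.0
    return sum(1 for w in windows if w not in grams) / len(windows)
```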
#### Iv-B5 Edit distance
This detection technique is similar to the ECVC in the sense that it computes a distance between a test sequence and every training sequence and classifies it as anomalous if no sufficiently similar pair is found. However, other than the ECVC that makes use of unordered count vectors, this detector computes the normalized edit distance, which counts the number of insertions, deletions, and replacements of event types to transform one sequence into another, which inherently relies on the ordering of event types in sequences [29]. As such, this detection technique is able to detect anomalies that manifest as additional or missing event types as well as changes of event ordering. Accordingly, this detection technique should work better than ECVC for anomalies with sequential manifestations, since ECVC only considers event occurrences independent from their position in the sequences.
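The sketch below scores a test sequence by its smallest normalized Levenshtein distance to any training sequence; normalizing by the length of the longer sequence is an assumption of this sketch.

```python
def levenshtein(a, b):
    """Number of insertions, deletions, and substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def score_edit_distance(train_sequences, sequence):
    """Smallest normalized edit distance between the test sequence and any training sequence."""
    return min(levenshtein(sequence, ref) / max(len(sequence), len(ref), 1)
               for ref in train_sequences)
```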
#### Iv-B6 Event Timing
Most activities that occur in specific states of processes take a certain amount of time that remains relatively steady or at least within a certain range throughout multiple executions. This is reflected in the inter-arrival times of log events that usually mark the start, stop, or intermediate steps of such processes. For example, consider the inter-arrival times displayed in Fig. 7, where the time to start a virtual machine is always in the range of 12 to 15 seconds. The assumption of this detection method is that in some anomalous executions, the event types remain the same, but the activities carried out in between take much longer or shorter. For our simple detection method we therefore compute the minimum and maximum passed time between each pair of consecutive event types in the training data set and then classify test sequences as anomalies if the inter-arrival time between any of the involved event pairs deviates too much from the range learned specifically for this event pair, i.e., the relative difference to the range boundary exceeds a certain threshold. We opted against statistical tests (e.g., for normal distributions) on the inter-arrival times since some event pairs occur few or even just a single time in the training data.
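A minimal sketch of this detector is shown below; here, sequences are assumed to be lists of (event type, timestamp) pairs, and the score is the largest relative deviation of an inter-arrival time from the range learned for that event pair, to be compared against the detection threshold.

```python
def fit_timing(train_sequences):
    """Learn the min/max inter-arrival time for every pair of consecutive event types."""
    ranges = {}
    for seq in train_sequences:
        for (e1, t1), (e2, t2) in zip(seq, seq[1:]):
            dt = t2 - t1
            lo, hi = ranges.get((e1, e2), (dt, dt))
            ranges[(e1, e2)] = (min(lo, dt), max(hi, dt))
    return ranges

def score_timing(ranges, sequence):
    """Largest relative deviation from the learned inter-arrival range (clipped to 1.0)."""
    worst = 0.0
    for (e1, t1), (e2, t2) in zip(sequence, sequence[1:]):
        dt = t2 - t1
        lo, hi = ranges.get((e1, e2), (dt, dt))
        if dt < lo:
            worst = max(worst, (lo - dt) / max(lo, 1e-9))
        elif dt > hi:
            worst = max(worst, (dt - hi) / max(hi, 1e-9))
    return min(worst, 1.0)
```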
### _Results_
This section summarizes the results obtained from sequence- and event-based detection on the selected data sets.
#### Iv-C1 Sequence-based detection
We present the evaluation results of 25 runs with randomly sampled training sequences in Table II, which contains the average and maximum (in brackets) F1 scores obtained by applying aforementioned detection techniques and their combinations on the data sets. Highest average scores achieved for each data set are in bold. At the bottom of the table, we also provide some benchmark results that have been reported in state of the art surveys and publications on semi-supervised anomaly detection. We point out that comparability is limited as these works use different splits between training and test data sets and often only report the best scores achieved from multiple runs [5]. Note that for the HDFS data set, we use both the pre-processed version from the LogDeep repository as well as the original data set provided by Xu et al. [15] since the obtained results do not coincide. In particular, the combination of detection based on new events and sequence lengths yields an F1 score of 90.4% on the LogDeep version, but only an average of 72.0% on the original data set. A closer inspection reveals the reason for this peculiarity: the training sequences in the LogDeep version of the HDFS data set do not involve event type 20, which is an indicator for anomalies (cf. Sect. III-A) but also occurs in 219 (0.04%) of the normal sequences and thus deteriorates detection performance if randomly drawn into the set of training sequences. This effect shows the importance of repeating evaluations with multiple randomly drawn samples to obtain representative and comparable results.
Figure 12 provides a boxplot for a more detailed view on the results that also includes TPR (identical to recall), precision, and TNR in addition to the F1 score. The plot confirms that for the HDFS data set, more than half of all anomalies are very simple to detect based on the combination of new event types and sequence lengths. ECVC detection further improves the scores and achieves competitive performance compared to advanced deep learning approaches (cf. Table II). Another interesting observation is that ECVC generally yields better scores compared to edit distance and N-gram based detection, which indicates that event position in sequences is less relevant for detection than event occurrence frequencies, and that random event orders due to simultaneous occurrence have an adverse effect on the detection rates. IDF weighting appears to have a positive influence on the ECVC detection, which is reasonable as relevant event types such as event type 20 receive a higher weight compared to event types that occur in many sequences and are not related to anomalies.
The results obtained for the BGL data set show that simple detection of new event type occurrences is sufficient to obtain competitive results. This is intuitively reasonable, since the normal and anomalous event types are almost completely
disjoint sets as stated in Sect. III-B. A sample size of 1% appears to be sufficient to train the detector for almost all normal event types and avoid many false alarms. Interestingly, the best results for the Thunderbird data set are achieved by edit distance detection, even though ECVC and event based detection only fall short by a small margin. This indicates that event order is a relevant factor when discerning normal from anomalous sequences in this data set.
Regarding the Hadoop data set, the results for F1 (91.5%), TPR (100%), and precision (84.4%) indicate adequate detection performance at first glance. However, a TNR of 0% reveals that in fact the detector just reports every single sequence as anomalous and thus the detection results are of no practical value. Clearly, no detection technique is able to yield good results due to the fact that many normal and anomalous sequences are identical as described in Sect. III-E. This demonstrates the importance of considering metrics such as TNR in addition to F1, precision, and recall, to avoid misleading results in imbalanced data sets such as the Hadoop data set where 82.9% of all sequences are anomalous [7].
The ADFA data set appears to be more challenging in comparison to other data sets. Specifically, the achieved precision is relatively low across all detection techniques and there is no significant difference between techniques that leverage sequence ordering and those that focus on event frequencies. Note that log events do not include timestamps and thus detection based on event timing is omitted from the plot. We suggest this data set as a useful candidate for future evaluations of sequence-based anomaly detection techniques that improve upon our baseline results.
#### Iv-B2 Influence of detection thresholds
As stated in Sect. IV-A, we optimized the results in the previous section by fine-tuning the detection thresholds to maximize F1. In practice, however, such a fine-tuning is not feasible due to the lack of a ground truth. We therefore investigate the parameter influence on the results of 2-gram, ECVC, and edit distance detection, by iterating the threshold in the range 0 to 1 in steps of 0.01. Figure 13 shows the progression of evaluation metrics for each combination of data set and detection technique from a single evaluation run. For the HDFS data set, ECVC is clearly the most preferable detection technique and yields high F1 scores for thresholds lower than 0.1. Both ECVC and 2-gram achieve nearly perfect TNR for any thresholds, indicating that almost all normal sequences are straightforward to classify as such. Regarding the BGL data set, both ECVC and edit distance perform well for thresholds smaller than 0.25 and yield high precision independent from the threshold, while 2-gram detection shows a much narrower band where adequate
results are achieved. On the Thunderbird data set, no true positives are detected by the 2-gram method for any threshold larger than 0, because a few very dissimilar sequences cause almost all anomaly scores to be close to 0 after normalizing them across all sequences. ECVC and edit distance detection techniques are not affected by this problem as their anomaly scores are normalized per sequence rather than across all sequences; accordingly, both techniques yield better results than detection based on 2-grams. The results obtained for the Hadoop data set provide more insights into the problems arising from a dominating anomaly class that we mentioned in the previous section. Specifically, the highest F1 scores are achieved when the threshold is 0 and all instances are detected as anomalous, even though TNR is 0% at that point. While TNR increases for higher thresholds, the F1 score decreases as TPR drops. All detectors yield comparatively low precision on the ADFA data set across all thresholds; 2-gram detection has high precision for high thresholds, but this is due to the fact that only a few true positives are found. Overall, the diversity of the plots suggests that parameter tuning needs to be carried out on each data set separately to obtain optimal results.
Fig. 12: Evaluation results for sequence-based detection techniques on HDFS, BGL, Hadoop, and ADFA data sets.
#### Iv-B3 Event-based detection
Since single events are labeled in the BGL and Thunderbird data sets, we also apply the detection for new events on these data sets analogous to sequences. As we already discussed in Sect. III-B and Sect. III-C, almost all anomalous events belong to different types than normal events; thus, it is generally easy to detect anomalies, but false positives may be problematic in case the training data set is not large enough to cover all normal events, which are subsequently reported in the detection phase. However, Fig. 14 shows that very high detection scores and a TNR of around 99.9% are achieved in both data sets, indicating that 1% is a sufficiently large training sample for this type of detection.
## V Discussion
This section contains the discussion of our analysis and evaluation results. We first focus on the data sets themselves before moving to the way the data is used in evaluations. We end this section with some recommendations for future work.
### _Appropriateness of Data Sets_
To be suitable for anomaly detection evaluations, data sets need to meet characteristics that fit the type of detection. Given that the data sets analyzed in this study are widely used for the evaluation of deep learning anomaly detection techniques that ingest the data as sequences of event types and possibly event parameters, one would expect that changes in sequential patterns is exactly how the anomalies manifest in these data sets. This claim is supported by the fact that authors synthetically inject noise as part of evaluations by randomly shuffling sub-sequences as well as addition and deletion of certain events [4]. However, the results of our study suggest that the link between anomalies and sequential patterns is less pronounced than expected.
These findings are also reflected in Table III, which contains the answers to our research questions. In particular, we state the main types of anomaly manifestations for each of the reviewed data sets and describe identified issues and drawbacks for versions of publicly available data sets. As visible in the table, anomalies generally do not change sequential patterns and a major part of them is straightforward to differentiate from normal instances with simple detection methods.
The HDFS data set, which is the most popular data set in this research area [3], involves anomalies that manifest as new event types that do not occur within normal instances, unusually short sequence lengths, and event occurrence frequencies that do not require ordered sequences. In fact, random permutations of events that are generated simultaneously and not related to anomalies complicate sequence-based anomaly detection compared to other analysis techniques that are robust against permutations, such as count vector clustering. Another indicator that sequential patterns are less relevant in the HDFS data set than expected is that the ground truth of that data set was generated by clustering unordered count vectors [15].
Our study further shows that a majority of the anomalies are straightforward to identify even for very simple detection approaches, which are capable of achieving competitive detection
Fig. 14: Evaluation results for event-based detection on BGL and Thunderbird data sets.
Fig. 13: Influence of the detection threshold on evaluation metrics.
rates on the HDFS data set. While approaches such as Deeplog [20] apply advanced detection models, a major part of their correctly detected instances thus implicitly relies on simple detection of new events [7] or detection of short sequences as a result of padding, i.e., augmenting short sequences with additional event types that make them more likely to be detected [33]. Unfortunately, this means that evaluation metrics reported on the HDFS data set give the misleading impression that a majority of anomalies are disclosed by the complex sequence-based detection techniques, even though they make up a much smaller share.
Anomalies in the BGL and Thunderbird data sets primarily manifest as events corresponding to types that never occur in normal data. Accordingly, for the simple task of identifying single events as anomalies, it is not required to analyze logs as sequences. In addition, grouping log data into sequences is often carried out using sliding windows [5], which is problematic when different processes interleave [3]. However, due to the length and complexity of these data sets, they appear well suited for many types of log analysis, such as automatic parser generation [34] or word embedding to compensate for the evolution of log messages [4].
Both the Hadoop and OpenStack data sets involve a high fraction of identical event sequences in the normal and anomalous classes. No other artifacts suitable for anomaly detection in these data sets were identified in the course of this study. We therefore advise against using these data sets for the evaluation of anomaly detection techniques.
Our study suggests that the ADFA data set is a promising alternative to the aforementioned data sets, because its anomalies are not detected by simple techniques such as new events or sequence lengths. In the course of our evaluation study, we were able to determine that some of the anomaly classes are easier to detect than others, as suggested by related work [35]. Moreover, system call logs form ordered sequences that only involve discrete event types, which means that the composition of parsing templates has no influence on detection performance. However, we point out that our experiments are not suitable to confirm that the anomalies in the ADFA data set indeed manifest as changes of sequential patterns, which is a task we leave for future work.
Just because a data set is widely used in scientific publications does not necessarily mean that it is automatically a good choice. When Creech et al. [24] published the ADFA data set in 2013, they attempted to replace data sets such as KDD99 that were already criticized and considered outdated at that time [36]. Nonetheless, the data sets were still widely used in the research community many years later [37]. This shows that researchers are drawn towards data sets that are convenient to use (e.g., because they are labeled, sufficiently large, or available in pre-processed format) and accepted as a benchmark data set in the community, even though they are not ideal for evaluations. We thus expect that despite our findings, data sets such as the HDFS data set will continue to be used in the future unless superior alternatives are proposed.
### _Appropriateness of Evaluations_
A major issue with most published evaluations is that the results are hardly reproducible or comparable. Since many authors do not publish their code, parameters of the algorithms, and data in a way that enables others to recreate the same results as stated in the respective paper, it is difficult to obtain an accurate overview of the detection capabilities achieved by state of the art approaches. Even though some authors re-implement well-known models [5, 7], the evaluation results hardly ever align across multiple papers even though the same detection techniques are applied on the same data sets.
One of the contributing factors for this issue is that authors usually fine-tune model parameters and repeat evaluation runs multiple times only to report the best results [5]. Unfortunately, this makes it difficult to comprehend the variance of the detection scores and the influence of selected thresholds. In addition, we observed that splits between training and test data vary strongly across different evaluations, including 1% [20], 20% [31], 50% [12], and 80% [5, 7]. The choice of sampling strategies for the training data has also been shown to have a strong impact on the detection performance, in particular, the size of the training data and whether samples are drawn randomly from the whole data set (as we do in our study) or only the chronologically earliest samples are taken [7].
Another issue is that some papers only measure precision and recall to compute the F1 score [19], but omit false positive
rate or true negative rate. Given that anomaly detection data sets are highly imbalanced, it is important to consider either of these metrics to avoid misinterpretation of results as we demonstrate on the Hadoop data set in Sect. IV-C1.
Anomaly detection techniques that leverage deep learning and neural networks generally have a lower explainability of classification results than conventional machine learning methods [6]. Unfortunately, this also means that the actual reasons why instances are reported as anomalies can often go unnoticed. Accordingly, it is important to come up with evaluation methodologies and fitting data sets that are capable of demonstrating the advantages of these deep learning models in a clear way, in particular, to justify the higher runtime and computational effort in contrast to conventional machine learning methods [5].
### _Recommendations & Future Work_
Based on the problems we identified in the previous sections, we formulate a set of recommendations to be addressed by future works.
1. Create new data sets that specifically support evaluation of sequence-based detectors. System call logs such as the ADFA log data set appear beneficial as they are ordered, easy to group into sequences, avoid the need for parsing, and enable anomaly injection by executing adverse functions on the host where logs are collected.
2. Repeat evaluation runs multiple times and report scores with variances. Presenting only the results from the best run with fine-tuned parameters causes that detection capabilities appear better than they are.
3. Ensure that data sets are suitable for evaluation. In particular, the way anomalies manifest in the data should be verified and explained beforehand.
4. Use simple detection techniques fitting to the anomaly manifestations as baselines for comparison.
5. Ensure reproducibility of reported results. This involves publishing all code and data that is necessary to repeat the conducted experiments and confirm the presented results. Moreover, relevant settings such as sampling strategies, splits between training and test data sets, and model parameters should be stated and their respective influence discussed in the paper.
Our simple detection methods used in this paper focus on event occurrences, i.e., event timestamps are the only contextual information derived from the logs other than sequences of events. However, anomalies often manifest in event parameters, and incorporating them in the detection procedure is thus a reasonable approach. For example, simple parameter-based detection could leverage new value occurrences in categorical event parameters similar to our detection of new event types.
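A minimal sketch of such parameter-based detection, using hypothetical event types and parameter names, could look as follows:

```python
def collect_known_values(training_events):
    """Record the categorical parameter values observed per event type during training."""
    known = {}
    for event_type, params in training_events:
        for name, value in params.items():
            known.setdefault((event_type, name), set()).add(value)
    return known

def has_new_value(event_type, params, known):
    """Flag an event whose categorical parameters contain a previously unseen value."""
    return any(value not in known.get((event_type, name), set())
               for name, value in params.items())

known = collect_known_values([
    ("block_served", {"src": "10.0.0.1", "status": "ok"}),
    ("block_served", {"src": "10.0.0.2", "status": "ok"}),
])
print(has_new_value("block_served", {"src": "10.0.0.1", "status": "failed"}, known))  # True
```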
Finally, our study focuses on semi-supervised anomaly detection, where only normal data is used for training. However, supervised classification of anomalies is also an actively researched field that leverages the data sets used in this paper [3]. We therefore plan to adopt our simple detection techniques for supervised anomaly detection and classification of anomalies into their respective classes. As this is considered out of scope for this paper, we leave this task for future work.
## VI Conclusion
Quality and appropriateness of data sets are crucial for sound and representative evaluations of anomaly detection techniques. In this paper, we analyzed five log data sets that are commonly used in state of the art and one additional data set from security research for the purpose of determining whether they are suitable for the evaluation of sequence-based detection algorithms. While these algorithms are primarily designed to recognize changes of sequential patterns, such as log event types generated in a different order than during normal operation, our analysis suggests that these artifacts hardly occur as part of anomalies. In the HDFS data set, for example, shuffled sub-sequences in event executions result from simultaneously generated events rather than anomalies. Moreover, anomalous sequences sometimes do not differ from normal behavior at all, rendering some data sets unusable. We tested the most suitable data sets with a small set of simple detection techniques that are based on the detection of new events, sequence lengths, count vector similarity, sub-sequence similarity, and event timing. Our evaluation results suggest that these simple detectors are able to achieve competitive detection rates compared to advanced approaches from state of the art, further indicating that high detection rates are easy to achieve in some data sets. To counteract these issues, we recommend working on new data sets that are specifically designed to include sequential anomalies and improving evaluation methodologies to avoid that misleading results are obtained for proposed detection approaches.
## Acknowledgments
This work was partly funded by the European Defence Fund (EDF) project Alnception (101103385) and the FFG project PRESENT (FO999899544).
|
2306.13104 | Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual
Prostheses | Neuroprostheses show potential in restoring lost sensory function and
enhancing human capabilities, but the sensations produced by current devices
often seem unnatural or distorted. Exact placement of implants and differences
in individual perception lead to significant variations in stimulus response,
making personalized stimulus optimization a key challenge. Bayesian
optimization could be used to optimize patient-specific stimulation parameters
with limited noisy observations, but is not feasible for high-dimensional
stimuli. Alternatively, deep learning models can optimize stimulus encoding
strategies, but typically assume perfect knowledge of patient-specific
variations. Here we propose a novel, practically feasible approach that
overcomes both of these fundamental limitations. First, a deep encoder network
is trained to produce optimal stimuli for any individual patient by inverting a
forward model mapping electrical stimuli to visual percepts. Second, a
preferential Bayesian optimization strategy utilizes this encoder to optimize
patient-specific parameters for a new patient, using a minimal number of
pairwise comparisons between candidate stimuli. We demonstrate the viability of
this approach on a novel, state-of-the-art visual prosthesis model. We show
that our approach quickly learns a personalized stimulus encoder, leads to
dramatic improvements in the quality of restored vision, and is robust to noisy
patient feedback and misspecifications in the underlying forward model.
Overall, our results suggest that combining the strengths of deep learning and
Bayesian optimization could significantly improve the perceptual experience of
patients fitted with visual prostheses and may prove a viable solution for a
range of neuroprosthetic technologies. | Jacob Granley, Tristan Fauvel, Matthew Chalk, Michael Beyeler | 2023-06-16T18:49:51Z | http://arxiv.org/abs/2306.13104v2 | # Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses
###### Abstract
Neuroprostheses show potential in restoring lost sensory function and enhancing human capabilities, but the sensations produced by current devices often seem unnatural or distorted. Exact placement of implants and differences in individual perception lead to significant variations in stimulus response, making personalized stimulus optimization a key challenge. Bayesian optimization could be used to optimize patient-specific stimulation parameters with limited noisy observations, but is not feasible for high-dimensional stimuli. Alternatively, deep learning models can optimize stimulus encoding strategies, but typically assume perfect knowledge of patient-specific variations. Here we propose a novel, practically feasible approach that overcomes both of these fundamental limitations. First, a deep encoder network is trained to produce optimal stimuli for any individual patient by inverting a forward model mapping electrical stimuli to visual percepts. Second, a preferential Bayesian optimization strategy utilizes this encoder to optimize patient-specific parameters for a new patient, using a minimal number of pairwise comparisons between candidate stimuli. We demonstrate the viability of this approach on a novel, state-of-the-art visual prosthesis model. We show that our approach quickly learns a personalized stimulus encoder, leads to dramatic improvements in the quality of restored vision, and is robust to noisy patient feedback and misspecifications in the underlying forward model. Overall, our results suggest that combining the strengths of deep learning and Bayesian optimization could significantly improve the perceptual experience of patients fitted with visual prostheses and may prove a viable solution for a range of neuroprosthetic technologies.
## 1 Introduction
Sensory neuroprostheses are devices designed to restore or enhance perception in individuals with sensory deficits. They often interface with the nervous system by electrically stimulating neural tissue in order to provide artificial sensory feedback to the user [1; 2]. For instance, visual prostheses have the potential to restore vision to people living with incurable blindness by bypassing damaged parts of the visual system and directly stimulating the remaining cells in order to evoke visual percepts (phosphenes) [3; 4; 5; 6]. However, patient outcomes with current technologies are limited, with patients
requiring extensive training to learn to interpret the evoked percepts, which are typically described as "fundamentally different" from natural vision [7]. Moreover, phosphene appearance varies widely across patients [8], making personalized stimulus optimization a key open challenge [9].
A major outstanding challenge is translating stimulation into a code that the brain can understand. Much work has gone into developing computational models that can predict the neuronal or perceptual response to an electrical stimulus [8; 10; 11] (often called forward models). Once the forward model is known, a deep neural network can approximate its inverse, thereby identifying the required stimulus to elicit a desired percept [12; 13; 14]. However, these inverse models typically assume perfect knowledge of any patient-specific parameters of the forward model (which is often not practically feasible) and are heavily reliant on the forward model's accuracy over the entire stimulus space.
Alternatively, Bayesian optimization has been successful in personalizing stimulation strategies for many existing neural interfaces [15; 16]. However, this approach is often not practically feasible because it requires the stimulus dimension to be small (typically \(<30\)[17], which is orders of magnitudes smaller than the number of stimulus parameters in current implants), and optimization must be repeated for every new input. Moreover, visual prosthesis users can typically only give indirect feedback (e.g., verbal phosphene descriptions), unsuitable for traditional Bayesian optimization.
To address these challenges, we propose a novel framework that integrates deep learning-based stimulus inversion into a preferential Bayesian optimization strategy to learn a patient-specific stimulus encoder (Fig. 1). First, a deep stimulus encoder (DSE) is trained to optimize stimuli assuming perfect knowledge of a set of patient-specific parameters (Fig. 1, _left_). Second, we embed the DSE within a human-in-the-loop optimization (HILO) strategy based on preferential Bayesian optimization, which iteratively learns the ground-truth patient-specific parameters through a series of 'duels', where the patient is repeatedly asked their preference between two candidate stimuli. The resulting DSE can then be deployed as a personalized stimulation strategy.
To this end, we make the following contributions:
* We introduce a forward model for retinal implants that achieves state-of-the-art response predictions. Unlike previous models, this allows us to train a deep stimulus encoder to predict optimal stimuli across 13 dimensions of patient-specific parameters.
* We propose a personalized stimulus optimization strategy for visual prostheses, where a human-in-the-loop optimization (HILO) Bayesian optimization algorithm iteratively learns the optimal patient-specific parameters for a deep stimulus encoder.
Figure 1: _Left_: Deep stimulus encoder (DSE). A forward model (\(f\)) is used to approximate the perceptual response to electrical stimuli, subject to patient-specific parameters \(\phi\). An encoder (\(f^{-1}\)) is then learned to minimize the perceptual error between predicted and target percept. _Right_: Human-in-the-loop optimization (HILO). Patient-specific parameters \(\phi\) of the DSE are optimized with user preferences: the patient performs a series of binary comparisons between percepts evoked with different encoders. New pairs of parameters to compare are adaptively selected so as to efficiently find the parameters maximizing the patientβs preference. The target changes each iteration.
* We demonstrate the viability of our approach by conducting a comprehensive series of evaluations on a population of simulated patients. We show HILO quickly learns a personalized stimulus encoder and leads to dramatic improvements in the quality of restored vision, outperforming existing encoding strategies. Importantly, HILO is resilient to noise in patient feedback and performs well even when the forward model is misspecified. We make our forward model, encoder, and HILO algorithm publicly available.
## 2 Background and Related Work
Visual NeuroprosthesesNumerous groups worldwide are pursuing a visual prosthesis that stimulates viable neuronal tissue in the hope of restoring a rudimentary form of vision to people who are blind (Fig. 2, _left_) [3; 4; 5; 6]. Analogous to cochlear implants, these devices electrically stimulate surviving cells in the visual pathway to evoke visual percepts (phosphenes). Existing devices generally provide an improved ability to localize high-contrast objects and perform basic mobility tasks.
Much work has focused on characterizing phosphene appearance as a function of stimulus and neuroanatomical parameters [2; 10; 18; 19; 20]. In epiretinal implants, phosphenes often appear distorted due to inadvertent activation of nerve fiber bundles in the optic fiber layer of the retina [8], causing elongated percepts (Fig. 2, _center_). In addition, the exact brightness and shape of these elicited percepts depends on the applied stimulus [19] and differs widely across patients (Fig. 2, _right_). Granley _et al._[11] captured these individual differences with a set of patient-specific parameters, denoted by \(\phi\), which may include both neuroanatomical (e.g., implant location) and stimulus-related parameters (e.g., how brightness scales with current amplitude).
Deep Stimulus EncodingMany works attempt to mitigate distortions in prosthetic vision, but do not describe comprehensive stimulation strategies [21; 22; 23]. Those that describe strategies in detail typically require simplification [24] or strong assumptions [25] to be used in practice. Due to the complexities of optimization, deep learning-based stimulus encoders have risen in popularity [12; 13; 14]. In [12], authors proposed an innovative approach where the latent representations of an autoencoder are treated as stimuli and decoded with a phosphene model. However, they used an unrealistic binary phosphene model. Their approach has since been adapted for cortical models [26], and for non-differentiable forward models [13]. Granley _et al._[14] generalized the approach, showing it could work with realistic forward models across a small range of patients without needing to retrain.
Given a forward (phosphene) model \(f\) (mapping stimuli to percepts given \(\phi\)), it is straightforward to show that the optimal stimulus encoder (mapping target images to stimuli) is the pseudoinverse of \(f\)[14]. However, to account for the wide range of individual differences in phosphene perception, most realistic forward models are highly nonlinear and not analytically invertible. Thus, previous works have proposed to use the forward model [12; 14] as a fixed decoder within a deep autoencoder trained to minimize the reconstruction error between target images and the predicted percepts. After training, the encoder can be extracted and used to encode target visual inputs in real time. Deep
Figure 2: _Left_: Visual prosthesis. Incoming target images are transmitted from a camera to an implant in the retina, which encodes the image as an electrical stimulus pattern. _Center_: Electrical stimulation (red disc) of a nerve fiber bundle (gray lines) leads to elongated tissue activation (gray shaded region) and a phosphene (bottom). _Right_: The same stimulus parameters may lead to widely varying visual perceptions in different patients. Adapted with permission from [14].
stimulus encoders trained using this approach produce high quality stimuli, but assume knowledge of \(\phi\). Additionally, if the forward model \(f\) is not extremely accurate over the whole stimulus space, then the encoder network might learn to exploit inaccuracies in the model, producing stimuli that don't generalize to real patients [14]. We utilize an enhanced variant of this approach in our experiments.
Preferential Bayesian OptimizationPreferential Bayesian optimization (PBO) is an efficient method for optimizing expensive black-box functions based on binary comparisons [27; 28]. Since the subject's response to stimulation cannot be directly observed, PBO instead builds a Bayesian model of the subject's preferences, \(g\), typically modeled using a Gaussian process. An approximate inference algorithm (expectation propagation; [29; 30]) is used to infer the posterior distribution of the preference function given binary comparison data, \(p(g|\mathcal{D})\), which is then used to select new configurations for the next trial according to an acquisition rule. The acquisition rule must balance the exploration-exploitation trade-off inherent to any black-box optimization problem [31].
PBO was previously used to tune BCI stimulation parameters for transcranial [32] and spinal cord stimulation [33]. However, these works directly optimized only a handful of stimulation parameters, and cannot translate to visual prostheses, where complex and varying visual inputs have to be mapped to high-dimensional stimuli. To this end, Fauvel & Chalk [34] reduced optimization complexity by inverting a perception model, then using PBO to generate perceptually preferred encodings. However, a linear approximation was used to invert the perception model, which is unrealistic for real-world applications.
SummaryWe identify 3 main limitations of previous work that this study aims to address:
* **Generalizability of deep stimulus encoders.** Autoencoder-like deep stimulus encoders can accurately optimize stimuli, but require perfect knowledge of patient-specific parameters [14], which can be difficult or impossible to determine in practice [8; 11]. Further, these approaches heavily rely on the accuracy of the forward model [13; 14], while real patients will likely deviate from the forward model. We overcome this limitation by optimizing the learned stimulus encoder based on patients' preferences, which we show is not bounded by a misspecified forward model.
* **Applicability of Bayesian optimization.** Bayesian optimization is ideally suited for optimizing stimulation parameters based on limited, noisy measurements, but can only optimize a small number of parameters [17]. We use a deep stimulus encoder to reduce the stimulus search space, enabling Bayesian optimization.
* **Simplistic models of perception.** Most previous approaches use overly simplified forward models that do not match empirical data [8; 19]. More accurate models [11] are too computationally expensive to support deep stimulus optimization over a wide range of patients.
## 3 Methods
General FrameworkWe consider a system attempting to optimize stimuli for a new patient, specified by a set of (unknown) parameters \(\phi\). The goal of optimization is a patient-specific stimulus encoder mapping target perceptual responses \(\mathbf{t}\) (e.g., visual percepts) to stimuli \(\mathbf{s}\): \(\mathbf{s}=e(\mathbf{t};\phi)\).
We assume there exists a forward model \(f\) which predicts the patient's perceptual response to stimulation: \(\hat{\mathbf{t}}=f(\mathbf{s};\phi)\). It follows that the optimal stimulus encoder is the inverse of \(f\) under some distance metric \(d\) (_i.e.,_\(e=f^{-1}\)). The inverse can be approximated using an autoencoder-like deep neural network [14], with weights trained to minimize the reconstruction error between \(\hat{\mathbf{t}}\) and \(\hat{\mathbf{t}}\) across patients and a dataset of targets (Eq. 1). During training, \(\phi\) is sampled from a uniform random distribution spanning the ranges of empirically observed patient-specific parameters.
\[\min d(f(f^{-1}(\mathbf{t},\phi);\phi),\mathbf{t}) \tag{1}\]
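As a rough illustration of this training objective, the sketch below substitutes a toy linear forward model for the real phosphene model and trains a small encoder on the reconstruction loss of Eq. 1; the network size, the stand-in forward model, and the normalized parameter ranges are assumptions made for brevity:

```python
import tensorflow as tf

# Dimensions loosely follow the setup below: 49x49-pixel targets, a 15x15 electrode
# grid with 3 stimulus parameters per electrode, and 13 patient-specific parameters.
N_TARGET, N_STIM, N_PHI = 49 * 49, 225 * 3, 13

# Toy stand-in for the forward model f(s; phi); the real phosphene model is nonlinear.
mixing = tf.random.normal((N_STIM + N_PHI, N_TARGET)) * 0.01

def forward_model(stim, phi):
    return tf.matmul(tf.concat([stim, phi], axis=-1), mixing)

encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(N_STIM, activation="relu"),
])
optimizer = tf.keras.optimizers.Adam(1e-4)

def train_step(targets):
    # phi is drawn uniformly at random every step (normalized to [0, 1] here; in the
    # paper it spans the per-parameter ranges of Table 1).
    phi = tf.random.uniform((targets.shape[0], N_PHI))
    with tf.GradientTape() as tape:
        stim = encoder(tf.concat([targets, phi], axis=-1))    # s = f^{-1}(t, phi)
        percept = forward_model(stim, phi)                    # t_hat = f(s; phi)
        loss = tf.reduce_mean(tf.square(percept - targets))   # reconstruction error d
    grads = tape.gradient(loss, encoder.trainable_variables)
    optimizer.apply_gradients(zip(grads, encoder.trainable_variables))
    return loss

print(float(train_step(tf.random.uniform((8, N_TARGET)))))
```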
Once trained, the encoder can accurately predict stimuli, but requires knowledge of the patient-specific parameters \(\phi\). For a new patient, Bayesian optimization is used to optimize \(\phi\) based on user feedback, thereby learning a personalized DSE. The underlying assumption is that the Bayesian optimization objective is related to the distance function used when training the deep stimulus encoder. Since the patient's response cannot be directly measured for visual prostheses, the user is presented with a 'duel', i.e. a binary comparison, where they are asked to decide which of two candidate stimuli they
prefer [34]. Finally, the posterior is updated based on the patient's response, and the process can be repeated to iteratively tune the DSE to the patient's preferences (Section 3).
Phosphene ModelThe phosphene model is a differentiable approximation of the underlying biological system (also called a forward model [14]), which maps an electrical stimulus to a visual percept. Although phosphene models exist for visual prostheses, current models either do not match patient data well [8, 10, 19], or are computationally expensive [11], prohibiting training a DSE that works across multiple patient-specific parameters.
Thus, we developed a new phosphene model for epiretinal prostheses. The model takes in a stimulus vector \(\mathbf{s}\in\mathbb{R}^{n_{e}\times 3}\) specifying the frequency, amplitude, and pulse duration of a biphasic pulse train on each electrode. The output phosphene for each electrode is a Gaussian blob centered over the electrode's location \(\mu_{e}(\phi)\) with covariance matrix \(\mathbf{\Sigma_{e}}(\mathbf{s},\phi)\) constructed so that the resulting percept will have area \(\rho_{e}(\mathbf{s},\phi)\), eccentricity \(\lambda_{e}(\mathbf{s},\phi)\) and orientation \(\theta_{e}(\phi)\). These functions allow phosphene properties to vary locally with stimulus (e.g., current spread) and anatomical parameters (e.g., electrode location, underlying axon nerve fiber bundle trajectory). The percept for each electrode is made from sampling a Gaussian distribution, renormalized to have maximum brightness \(b_{e}(\mathbf{s},\phi)\):
\[x\sim 2\pi b_{e}\det\left(\mathbf{\Sigma_{e}}\right)\,\mathcal{N}(x|\mu_{e}, \mathbf{\Sigma_{e}}), \tag{2}\]
where \(b_{e}\), \(\mu_{e}\), and \(\Sigma_{e}\) are implicitly parametrized by \(\mathbf{s}\) and \(\phi\). The covariance matrix \(\mathbf{\Sigma_{e}}=\mathbf{R}\mathbf{\Sigma_{0}}\mathbf{R}^{T}\) is calculated from the eigenvalue matrix \(\mathbf{\Sigma_{0}}\) and a rotation matrix \(\mathbf{R}\):
\[\Sigma_{0}=\begin{bmatrix}s_{x}^{2}&0\\ 0&s_{y}^{2}\end{bmatrix},\hskip 14.226378ptR=\begin{bmatrix}\cos\theta_{e}&- \sin\theta_{e}\\ \sin\theta_{e}&\cos\theta_{e}\end{bmatrix}.\]
The eigenvalues \(s_{x}\) and \(s_{y}\) depend on the intended phosphene area (\(\rho_{e}\)) and elongation (\(\lambda_{e}\)):
\[s_{x}^{2}=\frac{\rho_{e}\sqrt{1-\lambda_{e}^{2}}}{2\pi},\hskip 14.226378pts_{y}^{2}=\frac{\rho_{e}}{2\pi\sqrt{1-\lambda_{e}^{2}}}.\]
Blobs from individual electrodes are summed into a global percept. Although the sum across electrodes is linear, modulating the size and eccentricity of phosphenes with stimulus parameters makes the final result a nonlinear function of stimulus parameters, preventing analytic inversion. Motivated by previous studies, we used a square \(15\times 15\) array of 150\(\mu\)m electrodes, spaced 400\(\mu\)m apart [14]. In total, the model is parameterized by 13 patient specific parameters, shown in Table 1. The ranges for each parameter were chosen to encompass all observed patients, centered on the mean value across patients [8, 10, 11, 19, 35]. Appendix A.1 describes the full phosphene model in detail.
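To give a concrete feel for this construction, the following numpy sketch renders Gaussian-blob phosphenes with a requested area, eccentricity, and orientation, and sums them linearly into a global percept. The electrode positions, parameter values, and peak-normalized Gaussian form are illustrative placeholders only; the full model additionally maps stimulus amplitude, frequency, and pulse duration to these blob properties via \(b_{e}\), \(\rho_{e}\), and \(\lambda_{e}\).

```python
import numpy as np

def electrode_phosphene(xx, yy, center, rho, lam, theta, brightness):
    """Render one electrode's phosphene as a Gaussian blob with the requested area
    (rho), eccentricity (lam), orientation (theta), and peak brightness."""
    sx2 = rho * np.sqrt(1 - lam**2) / (2 * np.pi)      # eigenvalues of the covariance
    sy2 = rho / (2 * np.pi * np.sqrt(1 - lam**2))
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    cov = rot @ np.diag([sx2, sy2]) @ rot.T
    offsets = np.stack([xx - center[0], yy - center[1]], axis=-1)
    maha = np.einsum("...i,ij,...j->...", offsets, np.linalg.inv(cov), offsets)
    return brightness * np.exp(-0.5 * maha)            # peak renormalized to `brightness`

# Toy percept on a 49x49 grid: two electrodes summed linearly into a global image.
xx, yy = np.meshgrid(np.linspace(-5, 5, 49), np.linspace(-5, 5, 49))
percept = (electrode_phosphene(xx, yy, (-1.5, 0.0), rho=3.0, lam=0.9, theta=0.4, brightness=1.0)
           + electrode_phosphene(xx, yy, (1.5, 1.0), rho=2.0, lam=0.6, theta=-0.2, brightness=0.8))
print(percept.shape, percept.max())
```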
Deep Stimulus InversionA deep stimulus encoder (DSE) is a deep neural network responsible for inverting the forward model to produce the optimized stimulus for a target image and a specific patient (\(\mathbf{s}_{\phi}=f^{-1}(\mathbf{t},\phi)\)). We used a network (45M parameters) consisting of fully connected layers and blocks, each block containing 3 fully connected layers, batch normalization, and a residual connection. The flattened target image and the patient-specific parameters were passed separately through one block each, concatenated, and passed through another block, after which the amplitude is predicted. The amplitudes were concatenated to the prior intermediate representation and fed through a final block, after which frequency and pulse duration were predicted. The output layers use ReLU activation; all others use leaky ReLU. During training, \(\phi\) was randomly sampled from the range of allowed parameters (Table 1). Tensorflow 2.12, an NVIDIA RTX 3090, the Adam optimizer, and a batch size of 256 [36, 37] were used to train the network. The architecture is illustrated in Appendix B.1.
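A compact Keras rendering of this block structure might look as follows; the layer widths and the residual projection are assumptions, and the sketch is far smaller than the 45M-parameter network described above:

```python
import tensorflow as tf
from tensorflow.keras import layers

N_ELECTRODES, N_PHI, N_PIXELS = 225, 13, 49 * 49

def block(x, units=256):
    """Three dense layers with batch normalization and a residual connection."""
    h = x
    for _ in range(3):
        h = layers.Dense(units, activation=tf.nn.leaky_relu)(h)
        h = layers.BatchNormalization()(h)
    if x.shape[-1] != units:
        x = layers.Dense(units)(x)   # project the input so the residual addition is valid
    return layers.Add()([x, h])

target = tf.keras.Input(shape=(N_PIXELS,))        # flattened target image
phi = tf.keras.Input(shape=(N_PHI,))              # patient-specific parameters
joint = block(layers.Concatenate()([block(target), block(phi)]))
amplitude = layers.Dense(N_ELECTRODES, activation="relu")(joint)
tail = block(layers.Concatenate()([joint, amplitude]))
frequency = layers.Dense(N_ELECTRODES, activation="relu")(tail)
duration = layers.Dense(N_ELECTRODES, activation="relu")(tail)
encoder = tf.keras.Model([target, phi], [amplitude, frequency, duration])
```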
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & & & & & & & & & & Implant Parameters \\ \cline{2-13} & \(\rho\) (dva) & \(\lambda\) & \(\omega\) & \(a_{0}\) & \(a_{1}\) & \(a_{2}\) & \(a_{3}\) & \(a_{4}\) & \(OD_{x}\) (\(\mu\)m) & \(OD_{y}\) (\(\mu\)m) & x (\(\mu\)m) & y (\(\mu\)m) & rot (deg) \\ \cline{2-13} Lower & 1.5 &.45 &.9 &.27 &.42 &.005 &.2 & -0.5 & 3700 & 0 & -500 & -500 & -30 \\ Upper & 8 &.98 & 1.1 &.57 &.62 &.025 &.7 & -0.1 & 4700 & 1000 & 500 & 500 & 30 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Patient-Specific Parameters \(\phi\)
Human-in-the-Loop OptimizationWe propose using preferential Bayesian optimization (PBO) to optimize the patient-specific parameters \(\phi\) of the pretrained DSE. Given two sets of patient-specific parameters, \(\phi_{1}\) and \(\phi_{2}\), we assume that the probability of a subject preferring \(\phi_{1}\) to \(\phi_{2}\) (returning a response \(\phi_{1}\succ\phi_{2}\)) depends on a preference function \(g(\phi)\), modeled using a Gaussian process model:
\[P(\phi_{1}\succ\phi_{2}|g)=\Phi\big{(}g(\phi_{1})-g(\phi_{2})\big{)}, \tag{3}\]
where \(\Phi\) is the normal cumulative distribution inverse link function [38, 39]. The larger the value of \(g(\phi_{1})\) relative to \(g(\phi_{2})\), the higher the likelihood that the subject reports preferring \(\phi_{1}\) over \(\phi_{2}\).
We used the Maximally Uncertain Challenge [40] to select new comparisons to query, although other popular acquisitions performed similarly (Appendix C.2). Searching within the bounds in Table 1, this acquisition function selects a 'champion', \(\phi_{1}\), which maximizes the expectation of \(g\), and a 'challenger', \(\phi_{2}\), for which subjects' preferences are most uncertain:
\[\phi_{1} \rightarrow\arg\max_{\phi}\mathbb{E}_{p(g|\mathscr{G})}[g(\phi)], \tag{4}\] \[\phi_{2} \rightarrow\arg\max_{\phi}\mathbb{V}_{p(g|\mathscr{G})}[\Phi(g( \phi)-g(\phi_{1}))], \tag{5}\]
where \(\mathbb{V}\) denotes the variance. This algorithm is designed to balance exploitation (values of \(\phi\) that maximize \(g\)) and exploration (values of \(\phi\) for which the response is uncertain).
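As an illustrative and heavily simplified rendering of this acquisition rule, the sketch below evaluates Eqs. 4-5 by Monte Carlo over a finite candidate set, using random draws as a stand-in for the GP posterior over \(g\); the real implementation infers the posterior with expectation propagation and optimizes the acquisition over continuous \(\phi\):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Stand-in posterior: in the real system these would be Monte Carlo draws of the latent
# preference function g from the GP posterior, evaluated at candidate phi vectors.
n_candidates, n_samples, n_phi = 200, 500, 13
candidates = rng.uniform(0.0, 1.0, size=(n_candidates, n_phi))
g_samples = rng.normal(size=(n_samples, n_candidates)).cumsum(axis=1) * 0.05  # toy draws

# Champion (Eq. 4): the candidate maximizing the posterior mean of g.
champion = int(np.argmax(g_samples.mean(axis=0)))

# Challenger (Eq. 5): the candidate whose duel outcome against the champion is most
# uncertain, i.e. with maximal posterior variance of Phi(g(phi) - g(phi_1)).
win_probability = norm.cdf(g_samples - g_samples[:, [champion]])
challenger = int(np.argmax(win_probability.var(axis=0)))

phi_1, phi_2 = candidates[champion], candidates[challenger]  # next duel for the patient
```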
The performance of PBO crucially depends on the Gaussian process kernel and its hyperparameters, which encode our prior assumptions about the latent preference function. Inferring the kernel's hyperparameters online would slow down the algorithm and could lead to overfitting. Thus, we adopted a transfer learning strategy, which could also be applied to real-life patients. For each of 10 patients (with parameters different from those used in the following PBO experiment), we simulated 600 random duels and fit candidate hyperparameters for each of 4 commonly used kernels. We then selected the kernel and hyperparameters that generalized best to the other 9 patients (measured using Brier score on a held-out test set). The 5/2 Matern kernel performed best, and was used for all subsequent experiments. For more details, see Appendix C.1.
Simulated Patients_In silico_ experiments on simulated patients were used to demonstrate the viability of our approach. Each patient was assigned a set of patient-specific parameters \(\phi\), uniformly sampled from the ranges specified in Table 1. When challenged with a duel between two candidate stimuli \(\mathbf{s}_{\phi_{1}}\) and \(\mathbf{s}_{\phi_{2}}\), the simulated patient runs each stimulus through the phosphene model (using ground-truth patient-specific parameters), obtaining the predicted percepts \(\hat{\mathbf{t}}_{\phi_{1}}=f(\mathbf{s}_{\phi_{1}};\phi)\) and \(\hat{\mathbf{t}}_{\phi_{2}}=f(\mathbf{s}_{\phi_{2}};\phi)\). The users' preferences were modeled with a Bernoulli distribution, with probability \(p\) modulated by the difference in reconstruction error between each percept and the target image:
\[p=\frac{1}{1+\exp(-\frac{1}{\sigma}(d(\hat{\mathbf{t}}_{\phi_{2}},\mathbf{t}) -d(\hat{\mathbf{t}}_{\phi_{1}},\mathbf{t})))} \tag{6}\]
Here, \(\sigma\) is a configurable parameter that scales the width of the sigmoid, introducing noise into the response. We set \(\sigma\) to be \(0.01\), chosen empirically based on a conservative estimate: when the error difference was greater than 0.01 it was obvious which percept was better to human observers.
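A minimal sketch of this simulated decision rule, with a plain mean-squared-error distance standing in for the perceptual metric defined below, is:

```python
import numpy as np

def simulated_duel(percept_1, percept_2, target, d, sigma=0.01, rng=None):
    """Return True if the simulated patient prefers percept 1 over percept 2.
    The preference probability follows Eq. (6): a sigmoid of the error difference."""
    rng = rng or np.random.default_rng()
    delta = d(percept_2, target) - d(percept_1, target)
    p = 1.0 / (1.0 + np.exp(-delta / sigma))
    return rng.random() < p

# Example with a plain mean-squared error standing in for the perceptual metric.
mse = lambda a, b: float(np.mean((a - b) ** 2))
target = np.zeros((49, 49))
print(simulated_duel(target + 0.05, target + 0.30, target, mse))  # almost always True
```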
Data and MetricsWe used MNIST images as target visual percepts throughout the experiments. Images were resized to be the same size as the output of \(f\) (\(49x49\) pixels), and scaled to have a maximum brightness of 2 (aligned with range(\(f\))). Inspired by [12, 14], we used a perceptual similarity metric designed to capture higher-level differences between images [41]. Let \(v_{l}(\mathbf{t})\) be a function that extracts the downstream representations of target \(\mathbf{t}\) input to a VGG19 network pretrained on ImageNet [42]. The perceptual metric is then given by equation 7.
\[d(\mathbf{t},\hat{\mathbf{t}})=\frac{1}{|t|}(||\mathbf{t}-\hat{\mathbf{t}}||_ {2}^{2}+\beta||v_{l}(\mathbf{t})-v_{l}(\hat{\mathbf{t}})||_{2}^{2}) \tag{7}\]
This metric was used by the deep stimulus encoder as a training objective, by the simulated patient to choose a duel winner, and throughout HILO as an evaluation metric. \(\beta=2.5\)e-5 was selected via cross-validation. To aid in interpretability, we also report a secondary metric based on how identifiable the predicted percepts were. We first pretrained a separate deep net to 99% test accuracy on MNIST classification. We then measured the accuracy of this classifier on the predicted percepts at every iteration of HILO.
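For reference, a sketch of this metric in TensorFlow is given below; the choice of VGG19 feature layer ("block5_conv4"), the resizing to the ImageNet input resolution, and the grayscale-to-RGB tiling are assumptions not specified above, and loading the pretrained ImageNet weights requires a download:

```python
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
feature_net = tf.keras.Model(vgg.input, vgg.get_layer("block5_conv4").output)

def vgg_features(img):
    """img: (batch, 49, 49, 1) percepts with brightness in [0, 2]."""
    x = tf.image.resize(tf.tile(img, [1, 1, 1, 3]), (224, 224)) * 127.5
    return feature_net(tf.keras.applications.vgg19.preprocess_input(x))

def perceptual_loss(t, t_hat, beta=2.5e-5):
    n_pixels = tf.cast(tf.reduce_prod(tf.shape(t)[1:]), tf.float32)      # |t| in Eq. (7)
    pixel_term = tf.reduce_sum(tf.square(t - t_hat), axis=[1, 2, 3])
    deep_term = tf.reduce_sum(tf.square(vgg_features(t) - vgg_features(t_hat)), axis=[1, 2, 3])
    return (pixel_term + beta * deep_term) / n_pixels

loss = perceptual_loss(tf.random.uniform((2, 49, 49, 1)) * 2, tf.zeros((2, 49, 49, 1)))
```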
## 4 Results
### Phosphene Model
To verify that our phosphene model's predictions line up with observed results from real prosthesis users, we repeated analyses from previous state-of-the-art models, evaluating how phosphene appearance changes with electrode location [8] and stimulus parameters [10; 11; 19; 35]. We used the same datasets, consisting of thousands of phosphene drawings and brightness and size ratings collected across multiple epiretinal prosthesis [3] patients over several years. To evaluate phosphene appearances with electrode location, we calculated the correlation between predicted and observed phosphenes for three shape descriptors: area, eccentricity, and orientation. The final score reported is \(1-\sum_{i}R_{i}^{2}\), summed across shape descriptors [8]. To evaluate how phosphene appearance was modulated by stimulus parameters, we calculated the mean squared error between the size and brightness of predicted percepts and patient ratings as amplitude, frequency, and pulse duration were varied. The reported values correspond to Figures 4a-c and 5 in [11].
Evaluation results are presented in Table 2. Our model significantly outperforms the previous SOTA on the Beyeler _et al._ evaluation, and matches SOTA on the Granley _et al._ evaluations. Moreover, this model is much more amenable to inclusion in a deep neural network. We defer a more detailed description of evaluation methods and additional analysis to Appendix A.2.
### Deep Stimulus Encoder
We trained a deep stimulus encoder (DSE) to invert our phosphene model (decoder). The encoder was trained across 13 patient-specific parameters, randomly selected at every epoch, including for the first time implant position and rotation. This is in contrast to previous DSEs, which either require retraining for every new patient [12; 13; 26], or can only vary two patient-specific parameters [14].
We compared the performance of the DSE to a traditional ('naive') encoder [14] currently used by retinal prostheses [3], illustrated in Fig. 3. The DSE achieved a test perceptual loss of 0.05 and an MNIST accuracy of 95.6%, significantly outperforming the naive encoder (5.68 and 51%, respectively). Note that this performance assumes the true patient-specific parameters are known. It is similar to or slightly better than the values reported in [14], despite training across 11 additional patient-specific parameters.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & [8] & & & [11] & \\ \cline{2-7} Model & S1 & S2 & S3 & 4A & 4B & 4C & 5 \\ \hline Previous state of the art [11] & 2.43 & 7.07 & 1.15 & 0.9 & **2.1** & 0.16 & 49.5 \\ Proposed & **0.28** & **0.57** & **0.38** & **0.73** & 2.3 & **0.1** & **48.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation of Phosphene Model
Figure 3: Percepts resulting from a naive encoder and the trained DSE for two example target images across 25 randomly selected patients.
### Human-in-the-Loop Optimization
We ran deep learning-based HILO for 100 randomly selected simulated patients. After every duel, we evaluated the DSE parameterized by the current prediction of patient-specific parameters on a subset of the MNIST test set. The performance of the learned encoder over time ('HILO') is illustrated in Figure 4, which plots the joint perceptual loss (Figure 4.B) and MNIST accuracy (Fig. 4C). We also show example duels and percepts for an example patient.
As baselines for comparison we used a naive encoder, a non-personalized DSE where the patient-specific parameters are guessed (DSE-\(\phi_{Guess}\)), and an ideal DSE using the true \(\phi\) (DSE-\(\phi_{True}\)). To guess \(\phi\), we consider two approaches, one which selects the mean value across the ranges in Table 1, and another which selects random \(\phi\), averaged across all possible random selections from the same range. Note that we randomly selected our 100 simulated patients to be from this same range, so both of these approaches for guessing \(\phi\) are likely biased, especially the mean. In reality, we postulate that the performance of a deep stimulus encoder without patient-specific optimization would likely fall somewhere between these two methods, since the distribution of real patients is likely not perfectly aligned with Table 1. We therefore plot the region bounded by the performance of a DSE with either of these approaches for guessing \(\phi\). Example percepts after optimization are shown in Fig. 4D.
The HILO encoder started with random predictions but, after a short initial exploration period, quickly surpassed the baselines. After about 75 iterations, performance approached that of the ideal DSE encoder; however, the HILO encoder already produced high-quality percepts after as few as 20 iterations. Averaged across patients, the final reconstruction error of the HILO encoder was .071 \(\pm\) .0031 (SEM) and MNIST accuracy was 92% \(\pm\) 1.0%. DSE-\(\phi_{Guess}\) had an error of between .25 and 1.1 and
Figure 4: Human-in-the-loop optimization of a deep stimulus encoder. _A_: Two example duels, from which patient preferences are learned. _B_: Reconstruction error throughout optimization across 100 simulated patients. Insets show the predicted percept resulting from stimulation with various encoders. Note the y axis is on a log scale. _C_: MNIST accuracy of a pretrained classifier on reconstructed phosphenes. Both plots show smoothed median (window size of 3), with error bars denoting IQR. _D_: Example percepts after optimization for Naive, DSE without HILO, and HILO encoders.
MNIST accuracy between 58.6% and 78.3%, and the DSE with true \(\phi\) had an error of .05 \(\pm\) .001 and accuracy of 95.5% \(\pm\) .1%.
### Robustness
In reality, it is likely that a patient's responses will not be perfectly captured by the phosphene model. Further, patient responses for visual prostheses are notoriously noisy [7; 43]. To test HILO's resiliency to these variations, we conducted additional robustness experiments, each with the same 25 simulated patients (Figure 5). First, we varied the noise parameter \(\sigma\) that simulated patients use to make decisions (Figure 5A). Next, we constructed various 'misspecified' forward models, where the ground-truth model used to decode stimuli differed from the forward model assumed by the DSE. For the first misspecification, we varied the trajectories of the simulated axon bundles [44], thereby changing the orientation of phosphenes (Figure 5C). Second, threshold amplitudes for stimulation are notoriously hard to predict and have been shown to drift by up to 300% over time [45]; we therefore tested a variant where the threshold assumed by the encoder was incorrect by up to 300% (Figure 5B). Lastly, we used the same forward model, but with patient-specific parameters outside the bounds of the stimulus encoder and PBO algorithm (Figure 5D).
At \(\sigma\)=1e-4, the patient response was effectively noiseless. For \(\sigma\) equal to .005, .01, and .02, HILO performed similarly to the noiseless model, despite the patient on average making 'random' (\(p\in[0.35,0.65]\)) decisions in 26%, 38%, and 48% of duels, respectively. At \(\sigma\)=0.05, the decision was 'random' 2/3 of the time, and HILO performed similarly to or slightly better than the baseline DSE-\(\phi_{Guess}\). The DSE itself is very resilient to misspecifications in axon trajectory, so for this misspecification HILO performs similarly to how it does for the original patients. When thresholds varied, HILO still outperformed the baselines, but converged to slightly worse encodings than without misspecification. Further, HILO surpassed the DSE encoded with the ground-truth \(\phi\), demonstrating HILO's improved resiliency. For out-of-distribution \(\phi\), HILO
Figure 5: Reconstruction error through optimization for noisy patient responses (_upper left_) and for various misspecifications in the forward model assumed by the DSE. Noise level denotes the percentage of duels where the decision was essentially random (\(p\in[0.35,0.65]\)), corresponding to \(\sigma\) of 1e-4, 0.005, 0.01, 0.02, and 0.05, respectively. All y axes are on log scales. Naive encoders and some error bars omitted for clarity.
again outperformed both the baseline and true DSEs, but performed worse than in-distribution patients.
## 5 Discussion
Our experiments show that HILO optimization of a deep stimulus encoder led to high-quality, personalized stimulation strategies that outperformed previous state-of-the-art techniques. HILO led to an increase in percept quality compared to using a non-personalized DSE for 99% of simulated patients, demonstrating the viability of our approach. To enable our HILO algorithm, we also developed a new phosphene model, which is computationally simpler and matches patient data better than previous models, and trained a new DSE, which is able to produce high-quality encodings across all 13 dimensions of patient-specific variations included in our phosphene model. Together, these significantly advance state-of-the-art in patient-specific stimulus encoding, and are important steps towards practically-feasible personalized prosthetic vision in real patients.
The proposed framework combining Bayesian optimization and deep stimulus encoding offers significant improvements over both components in isolation. Use of a DSE allows us to incorporate prior information, reducing the dimensionality of the Bayesian optimization search space from the large stimulus space to the much smaller model parameter space. Our results demonstrate that even when the DSE's predictions are incorrect, this parameterization is still useful for Bayesian optimization based on patient preferences. Additionally, DSEs are able to invert highly nonlinear forward models, enabling encoder-parameterized Bayesian optimization to be applied to a much larger set of problems. Lastly, the learned encoder can be applied for any target percept, without needing additional optimization. Conversely, without adaptive feedback from HILO, deep stimulus encoders have no method for learning the individual differences of a new patient, which we show leads to suboptimal stimuli. DSEs rely on the accuracy of their assumed forward model over the entire stimulus space. We show that our approach produces stimuli that work well for the patient, even when the forward model is misspecified, or when the patient's responses are noisy.
This approach is practical for stimulus optimization in the wild. The encoder learned during optimization is lightweight, and once deployed, can predict individual stimuli in less than \(5\,\mathrm{ms}\) on CPU, allowing for high frame rates for prosthetic stimulation. During HILO, updating the Gaussian process model and producing new stimuli on average took 3 seconds, meaning that stimulus optimization could be performed in a matter of minutes. A HILO strategy could be bundled with future visual prostheses, allowing for patients to periodically re-calibrate their devices when they feel the device is not performing adequately, without requiring a clinical professional.
Broader ImpactsAlthough we demonstrate this approach in the context of visual prostheses, our framework is general and could be applied to a variety of sensory devices. Our approach is applicable when the stimulus search space is large and there exists a forward model mapping stimuli to responses. Forward models [46; 47; 48; 49] and deep stimulus encoders [50; 51; 52] have been successfully used across multiple sensory modalities, and could potentially be adapted for personalization with HILO.
LimitationsAlthough promising, our approach is not without limitations. We assumed that the preference of patients for different stimuli is related to the distance metric used to measure perceptual similarity, which may not be true in practice. However, results by [34] suggest that PBO is robust to a mismatch between the distance metric used to invert the forward model and the preference of patients. Another limitation is that evaluation of our approach was only performed on simulated patients with a simulated perceptual model. However, this is mitigated by the fact that HILO showed robustness to model inaccuracies. Still, since it is difficult to predict the behavior of deep learning models, using a deep stimulus encoder in real patients could raise safety concerns. It may be possible for a deep encoder to produce unconventional stimuli, potentially leading to adverse effects. However, most devices come with firmware responsible for ensuring stimuli stay within FDA-approved safety limits.
In conclusion, our results suggest that combining the strengths of deep learning and Bayesian optimization could significantly improve the perceptual experience of patients fitted with visual prostheses and may prove a viable solution for a range of neuroprosthetic technologies.
## Acknowledgments
This work was supported by the National Library of Medicine of the National Institutes of Health under Award Number DP2-LM014268. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. |
2303.07977 | Direct generation of time-energy-entangled W triphotons in atomic vapor | Sources of entangled multiphotons are not only essential for fundamental
tests of quantum foundations, but are also the cornerstone of a variety of
optical quantum technologies today. Over past three decades, tremendous efforts
have been devoted to creating multiphoton entanglement by multiplexing existing
biphoton sources with linear optics and postselections. Different from all
previous protocols, here we report, for the first time, the observation of
continuous-mode time-energy-entangled W-class triphotons with an unprecedented
generation rate directly through the process of spontaneous six-wave mixing
(SSWM) in a four-level triple-Lambda atomic vapor cell. Facilitated by
electromagnetically induced transparency and coherence control, our SSWM scheme
enables versatile narrowband triphoton generation with many intriguing
properties including long temporal coherence and controllable waveforms, ideal
for implementing long-distance quantum communications, networking, and
information processing by interfacing photons and atoms. Most importantly, our
work paves a way for the development of a reliable and efficient genuine
triphoton source, thus making the research on multiphoton entanglement within
easy reach. | Kangkang Li, Jianming Wen, Yin Cai, Saeid Vashahri Ghamsari, Changbiao Li, Feng Li, Zhaoyang Zhang, Yanpeng Zhang, Min Xiao | 2023-03-14T15:32:07Z | http://arxiv.org/abs/2303.07977v2 | # Direct generation of time-energy-entangled W triphotons in atomic vapor
###### Abstract
Sources of entangled multiphotons are not only essential for fundamental tests of quantum foundations, but are also the cornerstone of a variety of optical quantum technologies today. Over past three decades, tremendous efforts have been devoted to creating multiphoton entanglement by multiplexing existing biphoton sources with linear optics and postselections. Different from all previous protocols, here we report, for the first time, the observation of continuous-mode time-energy-entangled W-class triphotons with an unprecedented generation rate directly through the process of spontaneous six-wave mixing (SSWM) in a four-level triple-\(\Lambda\) atomic vapor cell. Facilitated by electromagnetically induced transparency and coherence control, our SSWM scheme enables versatile narrowband triphoton generation with many intriguing properties including long temporal coherence and controllable waveforms, ideal for implementing long-distance quantum communications, networking, and information processing by interfacing photons and atoms. Most importantly, our work paves a way for the development of a reliable and efficient genuine triphoton source, thus making the research on multiphoton entanglement within easy reach.
photon numbers between the primary and secondary biphoton processes, thereby making these sources very noisy and inefficient. Alternatively, the third technique [15, 16, 17] coherently mixes paired photons with single photons attenuated from a cw laser to trigger triphoton events. Akin to the first method, this solution relies on erasing photon distinguishability via the Hong-Ou-Mandel interference effect [18]. Although polarization-entangled multiphotons of inequivalent classes have been demonstrated with postselection, the low success rate and the required interferometric stabilization make this proposal impractical. Since photons are always emitted in pairs in SPDC/SFWM, a fourth route [19, 20, 21] exploits the emission of multiple pairs by appropriately setting the input pump powers. Although it seems easy to yield even-number states this way, the dominant biphotons from lower-order perturbations of the parametric process make it difficult to detect entangled multiphotons arising from higher-order perturbations. To reach an acceptable fidelity, as with the second method, a complicated detection system plus an interferometric setup is often unavoidable in practice. Moreover, this approach has so far mainly produced polarization entanglement. In spite of these impressive achievements, none of the foregoing mechanisms can offer a reliable and efficient triphoton source for research and applications. Additionally, there is so far no convincing realization of an entangled-triphoton experiment in continuous modes. By analogy with SPDC, one would expect that such photons could be naturally born from third-order SPDC [22, 23], which converts one pump photon of higher energy into three daughter photons of lower energy. The idea looks simple and straightforward but is experimentally inaccessible owing to the lack of a suitable nonlinear optical material. As a result, the development of a reliable triphoton source is still in its infancy even today.
Coherent atomic media [24], on the other hand, exhibit a wide range of peculiar properties including giant nonlinearities, prolonged atomic coherence, strong photon-atom interaction, and slow/fast light effects. Recently, these exotic properties have been skillfully employed to construct novel narrowband biphoton sources [25, 26, 27, 28] based on SFWM. Specifically, the giant nonlinearities promise efficient parametric conversion, the long atomic coherence leads to narrowband wavepackets, and the sharp optical response becomes a powerful knob for shaping photon waveforms and temporal correlations. Unlike solid-state sources, atomic ensembles possess a unique feature arising from the dual role played by the third-order nonlinear susceptibility \(\chi^{(3)}\) in biphoton generation [29, 30, 31]. That is, in addition to governing the nonlinear conversion strength, the double-resonance structure in \(\chi^{(3)}\) signifies the coexistence of two sets of SFWMs in the emitted light. Consequently, entangled photons output from these two stochastic but coherent SFWM processes interfere and give rise to a nontrivial two-photon interference, namely, damped Rabi oscillations. In general, their waveforms are entirely shaped by the convolution of a complex phase-mismatch function and \(\chi^{(3)}\). Beyond these attributes, the nonclassical correlations shared by paired photons can be further manipulated by exploiting various coherent control techniques, including electromagnetically induced transparency (EIT) [24], to reshape the optical response. The interplay amongst these diverse effects also enriches fundamental research and fosters technological innovations inaccessible to other existing biphoton sources. Besides, flexible system layouts such as the backward detection geometry are more favorable for photon-counting detection. Motivated by these
advantages, here we move one step forward and report the direct generation of continuous-mode triphotons entangled in time and energy from a hot atomic vapor cell. By utilizing the process of spontaneous six-wave mixing (SSWM) [32, 33], we have not only observed striking three-photon interference but also witnessed the residual two-photon correlation obtained by tracing one photon out, an intrinsic virtue of the W class of tripartite entanglement [34]. By adjusting the system parameters, we have further achieved waveform-controllable triphoton generation. Together with an unprecedented production rate, our scheme proves to be the first reliable platform that elevates multipartite entanglement research to an unparalleled level.
As shown schematically in Figs. 1A-C, we are interested in yielding narrowband W triphotons from a 7-cm long \({}^{85}\)Rb vapor cell with a four-level triple-\(\Lambda\) atomic configuration at a temperature of \(80^{\circ}\)C (or \(115^{\circ}\)C). Details of the experimental setup are provided in Methods. In the presence of three counter-propagating cw laser beams (one weak pump \((E_{1},\omega_{1},\vec{k}_{1})\) and two strong couplings \((E_{2},\omega_{2},\vec{k}_{2})\) and \((E_{3},\omega_{3},\vec{k}_{3})\)), backward photon triplets \((E_{Sj},\omega_{Sj},\vec{k}_{Sj}\) with \(j=1,2,3)\) are emitted via Doppler-broadened SSWM at an intersection angle of \(\theta\approx 4^{\circ}\) to the principal z-axis along the phase-matching direction, \(\Delta\vec{k}=\left(\vec{k}_{S1}+\vec{k}_{S2}+\vec{k}_{S3}\right)-\left(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3}\right)=0\). As depicted in Figs. 1B and C, the three coaxial input lasers were coupled into the center of the \({}^{85}\)Rb vapor cell with tunable frequency detunings \(\Delta_{j}\) and powers \(P_{j}\), while the generated photon triplets were detected by three single-photon counting modules (\(\mathrm{SPCM}_{1}\) - \(\mathrm{SPCM}_{3}\)) for coincidence counting after spatial and frequency filtering. Here, to avoid unwanted accidental trigger events induced by singles and dual biphotons, we placed single-band filters and narrowband etalon Fabry-Perot cavities in front of \(\mathrm{SPCM}_{j}\) before detection. We notice that in three-photon joint clicks, the major source of accidental coincidences stems from double pairs from two different SFWMs simultaneously present in the detection system (Supplementary Information (SI)). Since these dual pairs may have central frequencies and polarizations similar to those of genuine triphoton modes, they cannot be filtered away simply by polarizers and frequency filters. To exclude such double-pair false trigger events, in the experiment we further introduced an additional \(\mathrm{SPCM}_{4}\) synchronized with \(\mathrm{SPCM}_{3}\) to serve as a diagnostic detector in conjunction with the remaining two, \(\mathrm{SPCM}_{1}\) and \(\mathrm{SPCM}_{2}\). To ensure that the atomic population is mainly distributed in the ground level \(|5S_{\frac{1}{2}},F=2\rangle\) throughout the measurement, an additional strong optical repumping beam \((E_{op})\) was applied to the atomic transition \(|5S_{\frac{1}{2}},F=3\rangle\rightarrow|5P_{\frac{1}{2}}\rangle\) in alignment with \(E_{2}\) but without spatial overlap. With these preparations, we carefully adjusted the system parameters, especially \(P_{j}\) and \(\Delta_{j}\) of each input field \(E_{j}\), to promote the SSWM process.
Physically, the SSWM process can be understood from the effective interaction Hamiltonian
\[H=\epsilon_{0}\int_{V}d^{3}r\,\chi^{(5)}E_{1}E_{2}E_{3}E_{S1}^{(-)}E_{S2}^{(-)}E_{S3}^{(-)}+\mathrm{H.c.}\quad(\mathrm{H.c.}:\ \mathrm{Hermitian\ conjugate}), \tag{1}\]
with three input (output) beams treated as classical (quantized) fields and \(V\) being the interaction volume. In Eq. (1), \(\chi^{(5)}\) denotes the fifth-order Doppler-broadened nonlinear susceptibility and governs the nonlinear conversion efficiency. In the Schrodinger picture, after some algebra, the triphoton state at the two cell surfaces can be derived from first-order perturbation theory by ignoring the vacuum contribution (SI), and takes the form of
\[|\Psi\rangle\propto\int\!\!\!\int d\omega_{S1}d\omega_{S2}d\omega_{S3}\chi^{(5) }\Phi\left(\tfrac{\Delta kL}{2}\right)\delta(\Delta\omega)\,|1_{\omega_{S1}}, 1_{\omega_{S2}},1_{\omega_{S3}}\rangle. \tag{2}\]
Here, \(\Delta\omega=\sum_{j=1}^{3}\bigl(\omega_{Sj}-\omega_{j}\bigr)\), \(L\) is the interaction length, \(\Delta k=\Delta\vec{k}\cdot\hat{z}\) is the phase (or wavenumber) mismatch, and the phase-mismatch longitudinal function \(\Phi(x)=\mathrm{sinc}(x)e^{-ix}\) determines the three-photon natural spectral width arising from their different group velocities. Besides conditioning the triphoton output rate, the \(\chi^{(5)}\)-resonance profile also specifies the generation mechanism along with the photons' intrinsic bandwidths. Overall, the state (2) outlines a few peculiar features yet to be experimentally verified. First, because it does not factorize, \(|\Psi\rangle\) is entangled in frequency (or time) rather than in polarization. Second, characterized by two independent variables, \(|\Psi\rangle\) conforms to the essential characteristics of the tripartite W class; that is, by tracing one photon away, partial entanglement still exists in the remaining bipartite subsystem. Third, since the triphoton waveform is defined by the convolution of \(\Phi\) and \(\chi^{(5)}\), two distinct types of Glauber third-order (as well as conditional second-order) temporal correlations, and hence two very different scenarios, are expected to be manifested in threefold (and conditioned twofold) coincidence counting measurements. Last but not least, the triplet production rate is linear in the intensity of each input laser and can be enhanced by orders of magnitude by optimizing the system parameters. It is worth pointing out that all these striking properties are affirmed in our series of experiments. Importantly, this constitutes the first experimental realization of the time-energy-entangled triphoton W state predicted a decade ago [34] but never observed until now.
In the experiment, we optimized the SSWM phase-matching condition by controlling the frequency detunings and incident angles of the three driving fields so as to effectively collect the emitted triphotons. Upon triggering \(\mathrm{SPCM_{j}}\), the temporal correlation was contained in photon-counting histograms saved by a fast time-acquisition card with a 0.0244-ns bin width, where, within every time window of 195 ns, the detection of an \(E_{S1}\)-photon triggered the start of a coincidence event that ended with the detection of subsequent \(E_{S2}\)- and \(E_{S3}\)-photons. In most measurements, we collected the total trigger events over an hour and then analyzed the corresponding three-photon coincidences from the histogram in the parameter space \((\tau_{21},\tau_{31})\), where \(\tau_{21}=\tau_{2}-\tau_{1}\) and \(\tau_{31}=\tau_{3}-\tau_{1}\) are the relative time delays, with \(\tau_{j}\) being the triggering time of \(\mathrm{SPCM_{j}}\).
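For concreteness, the sketch below shows one way such a \((\tau_{21},\tau_{31})\) coincidence histogram could be assembled offline from lists of photon arrival times. It is illustrative only: the array names and the simplified first-click matching rule are our own assumptions and do not describe the actual acquisition-card processing.

```python
import numpy as np

BIN_NS = 0.0244                      # histogram bin width (ns)
WINDOW_NS = 195.0                    # coincidence time window (ns)
N_BINS = int(np.ceil(WINDOW_NS / BIN_NS))

def triple_coincidence_histogram(t_s1, t_s2, t_s3):
    """Accumulate threefold coincidences into a 2D histogram over (tau21, tau31).

    t_s1, t_s2, t_s3: sorted 1D arrays of photon arrival times (ns) recorded by
    SPCM1-SPCM3. Each E_S1 click opens a window; the first subsequent E_S2 and
    E_S3 clicks inside the window close the event (a simplified matching rule).
    """
    hist = np.zeros((N_BINS, N_BINS), dtype=np.int64)
    for t1 in t_s1:
        i2 = np.searchsorted(t_s2, t1)   # first S2 click after the trigger
        i3 = np.searchsorted(t_s3, t1)   # first S3 click after the trigger
        if i2 == len(t_s2) or i3 == len(t_s3):
            continue
        tau21, tau31 = t_s2[i2] - t1, t_s3[i3] - t1
        if 0.0 <= tau21 < WINDOW_NS and 0.0 <= tau31 < WINDOW_NS:
            hist[int(tau21 / BIN_NS), int(tau31 / BIN_NS)] += 1
    return hist
```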
As an example, Fig. 2A displays one set of measured threefold coincidence counts from one recorded histogram after subtracting the accidental noise, revealing an intriguing three-dimensional temporal correlation with 18.6- and 19.0-ns effective measurement time windows
along the \(\tau_{21}\)- and \(\tau_{31}\)-axes because of the employed factors. For the 0.25-ns time-bin width per detector, integrating all involved time bins yields a total of \(\sim\!\!6\times 10^{3}\) threefold trigger events, which results in a raw triphoton generation rate of \(102\pm 9\) per minute without accounting for the coupling loss and detection efficiency. This rate is orders of magnitude higher than any previous one, and it can be further improved by applying more efficient SPCMs as well as by optimizing the fiber coupling efficiency. From the raw data, the background accidentals were estimated to be \(6\pm 1\) per minute, mainly originating from the residual dual pairs as well as accidental coincidences of uncorrelated singles and dark counts of the SPCMs. This low background noise implies that the undesired third-order nonlinear processes were well filtered out in the experiment. On the other hand, the complicated pattern is a direct consequence of nontrivial W-triphoton interferences due to the occurrence of multiple coexisting SSWM processes in the regime of damped Rabi oscillations. As described previously, these processes arise from the multi-resonance structure of \(\chi^{(5)}\). According to our dressed-state calculations (SI), there are four such coexisting channels, as illustrated in Fig. 2B, coherently contributing to the observed quantum interference. To confirm that the emitted triphoton state belongs to the W class, we then used the acquired data to investigate the correlation properties of different bipartite subsystems. To do so, we integrated the coincidence counts by tracing away one photon from every triphoton event over that photon's arrival time. In this way, we acquired the conditional two-photon temporal waveforms with \(\tau_{21}\) or \(\tau_{31}\) as variables, and plotted them, respectively, in Figs. 2C and D. Interestingly, the conditioned \(\tau_{31}\)-waveform in Fig. 2D exhibits a damped periodic oscillation with a period of \(\sim\!\!6.2\) ns (SI), while the \(\tau_{21}\)-waveform in Fig. 2C reveals two superimposed damped periodic oscillations with an additional 1.7-ns period on top of the 6.2-ns one (SI), an interference effect not seen in any existing biphoton source. In contrast, the triphoton waveform has flexible temporal widths, for instance, 28 ns along the direction of \(\tau_{21}+\tau_{31}=15\) ns (Fig. 2E). This contrasting phenomenon also supports our theoretical picture from an alternative aspect, namely that the observed interference is caused by at least three sets of coherently coexisting SSWM processes. As demonstrated in the SI, our qualitative analysis gives a good account of the experimental data.
Since the attributes of the triphoton waveforms depend on the system parameters, this prompts us to manipulate and control their quantum correlations by tuning the input lasers as well as the atomic density or optical depth (OD). To this end, we carried out a series of experiments to tailor the temporal correlation by shaping the waveforms through various parameters. Two sets of such representative experimental data are presented in Fig. 3. In comparison to Fig. 2A, Fig. 3A shows the steered waveform obtained by reducing the power and frequency detuning of the input \(E_{2}\) laser. As one can see, the profile of the triphoton temporal correlation is dramatically changed, in spite of the reduced generation rate of \(77.4\pm 7.8\) per minute. Especially, the conditional two-photon coincidence counts manifest mono-periodic oscillations with the same period of 6.2 ns along both the \(\tau_{21}\) and \(\tau_{31}\) directions, as illustrated in Figs. 3B and C. This is because, in this case, the Rabi frequency of \(E_{2}\) was tuned to be very close to that of \(E_{3}\). As a consequence, half of the multiple resonances associated with the emission of \(E_{S2}\)-photons (Fig. 2B) become degenerate and share
the same spectrum. Likewise, the triphoton temporal coherence length along the \(\tau_{21}+\tau_{31}=29\) ns direction is enlarged to 40 ns. On the other hand, triphoton interference can be also modulated by altering the phase-mismatch longitudinal function \(\Phi\) in Eq. (2). Akin to the biphoton generation, the phase mismatch \(\Delta k\) in \(\Phi\) is determined by the linear susceptibility of each mode in SSWM via the EIT slow-light effect. As showcased in Fig. 3D, by augmenting the OD from 4.6 to 45.7, the triphoton temporal correlation is considerably modified by the dispersion relation of the atomic vapor and falls into the group-delay regime. In addition to raising the production rate to \(125\pm 11\) per minute, the oscillatory curvature is markedly suppressed and replaced by the overall decay envelopes. This transformation becomes more evident when examining the conditioned two-photon coincidence counts. By comparing Fig. 3F with Figs. 3B, C and E, one can see that the enhanced dispersion apparently smears the damped Rabi oscillations along the \(\tau_{21}\)-direction, implying that the narrower bandwidths defined by \(\Phi\left(\frac{\Delta kL}{2}\right)\) regulate the bandwidths dictated by \(\chi^{(5)}\) to obscure the interference amongst four sets of coexisting SSWM channels. Besides, the triphoton temporal coherence length along the direction of \(\tau_{21}+\tau_{31}=50\) ns is also significantly prolonged up to 70 ns.
To reveal the nonclassicality of the W triphoton state, we continued to examine the violation of the Cauchy-Schwarz inequality [35, 36] as well as the fringe visibilities of the observed Rabi oscillations. By normalizing the threefold coincidence events to the flat background counts along with the additional auto-correlation measurement of the collected \(E_{S1}\), \(E_{S2}\) and \(E_{S3}\) photons, we found that the Cauchy-Schwarz inequality is violated by a factor of \(250\pm 55\) in Fig. 2A, \(154\pm 43\) in Fig. 3A, and \(79\pm 21\) in Fig. 3D. Note that here these values were optimized by filtering possible biphoton processes in measurement. Additionally, we observed that the fringe visibility of Fig. 2A can be as high as \(90\pm 5\%\).
In addition to the above experiments, it is also instructive to explore the triphoton production rate and temporal correlation width as a function of the input pump power for further understanding the proposed generation mechanism. This has motivated us to implement additional measurements and the experimental data is presented in Fig. 4. As one can see, indeed, the triphoton generation rate follows a linear growth in the input power \(P_{2}\) of the \(E_{2}\) field. For the temporal coherence length, we concentrated on the two-photon conditional coincidence counting along the \(\tau_{21}\) and \(\tau_{31}\) directions. From Fig. 4, it is not difficult to find that increasing \(P_{2}\) results in the reduction of the correlation time. This stems from the reduced slow-light effect when augmenting \(P_{2}\). Note that Figs. 2A, 3A and 3D simply become one individual point in Fig. 4. Overall, our approach enables all-optical coherent manipulation to create the genuine triphotons with controllable waveforms.
In conclusion, we have for the first time observed efficient W-triphoton emission directly through SSWM in a warm atomic vapor, with a generation rate of about \(125\pm 11\) per minute. Moreover, due to the coexistence of multiple SSWM processes, these time-energy-entangled W triphotons give rise to various nontrivial three-photon temporal interferences. Furthermore, by manipulating the
system parameters, the triphoton temporal correlations can be flexibly engineered and tailored, demonstrating many peculiar characteristics inaccessible to all previous mechanisms. As a reliable source, it is expected to play a vital role in probing the foundations of quantum theory and in advancing various quantum-based technologies in information processing, communications, imaging, metrology, and beyond.
|
2307.04751 | Shelving, Stacking, Hanging: Relational Pose Diffusion for Multi-modal
Rearrangement | We propose a system for rearranging objects in a scene to achieve a desired
object-scene placing relationship, such as a book inserted in an open slot of a
bookshelf. The pipeline generalizes to novel geometries, poses, and layouts of
both scenes and objects, and is trained from demonstrations to operate directly
on 3D point clouds. Our system overcomes challenges associated with the
existence of many geometrically-similar rearrangement solutions for a given
scene. By leveraging an iterative pose de-noising training procedure, we can
fit multi-modal demonstration data and produce multi-modal outputs while
remaining precise and accurate. We also show the advantages of conditioning on
relevant local geometric features while ignoring irrelevant global structure
that harms both generalization and precision. We demonstrate our approach on
three distinct rearrangement tasks that require handling multi-modality and
generalization over object shape and pose in both simulation and the real
world. Project website, code, and videos:
https://anthonysimeonov.github.io/rpdiff-multi-modal/ | Anthony Simeonov, Ankit Goyal, Lucas Manuelli, Lin Yen-Chen, Alina Sarmiento, Alberto Rodriguez, Pulkit Agrawal, Dieter Fox | 2023-07-10T17:56:06Z | http://arxiv.org/abs/2307.04751v1 | # Shelving, Stacking, Hanging: Relational Pose Diffusion for Multi-modal Rearrangement
###### Abstract
We propose a system for rearranging objects in a scene to achieve a desired object-scene placing relationship, such as a book inserted in an open slot of a bookshelf. The pipeline generalizes to novel geometries, poses, and layouts of both scenes and objects, and is trained from demonstrations to operate directly on 3D point clouds. Our system overcomes challenges associated with the existence of many geometrically-similar rearrangement solutions for a given scene. By leveraging an iterative pose de-noising training procedure, we can fit multi-modal demonstration data and produce multi-modal outputs while remaining precise and accurate. We also show the advantages of conditioning on relevant local geometric features while ignoring irrelevant global structure that harms both generalization and precision. We demonstrate our approach on three distinct rearrangement tasks that require handling multi-modality and generalization over object shape and pose in both simulation and the real world. Project website, code, and videos: [https://anthonysimeonov.github.io/rpdiff-multi-modal](https://anthonysimeonov.github.io/rpdiff-multi-modal)
Keywords: Object Rearrangement, Multi-modality, Manipulation, Point Clouds
## 1 Introduction
Consider Figure 1, which illustrates (1) placing a book on a partially-filled shelf and (2) hanging a mug on one of the multiple racks on a table. These tasks involve reasoning about geometric interactions between an object and the scene to achieve a goal, which is a key requirement in many cleanup and de-cluttering tasks of interest to the robotics community [1]. In this work, we enable a robotic system to perform one important family of such tasks: 6-DoF rearrangement of rigid objects [2]. Our system uses point clouds obtained from depth cameras, allowing real-world operation with unknown 3D geometries. The rearrangement behavior is learned from a dataset of examples that show the desired object-scene relationship - a scene and (segmented) object point cloud are observed and a demonstrator transforms the object into a final configuration.
Real-world scenes are often composed of objects whose shapes and poses can vary independently. Such composition creates scenes that (i) present combinatorial variation in geometric appearance and layout (e.g., individual racks may be placed anywhere on a table) and (ii) offer many locations and geometric features for object-scene interaction (e.g., multiple slots for placing the book and multiple racks for hanging the mug). These features of real-world scenes bring about two key challenges for learning that go hand-in-hand: multi-modal placements and generalization to diverse scene layouts.
* **Multi-modality** appears in the rearrangement _outputs_. There may be many scene locations to place an object, and these multiple possibilities create difficulties during both learning and deployment. Namely, a well-known challenge in _learning_ from demonstrations is fitting a dataset containing similar inputs that have different associated targets (modes). Moreover, during deployment, predicting multiple candidate rearrangements can help the robot choose the ones that also satisfy any additional constraints, such as workspace limits and collision avoidance. Therefore, the system must _predict_ multi-modal outputs that span as many different rearrangement solutions as possible.
* **Generalization** must be addressed when processing the _inputs_ to the system. A scene is composed of many elements that vary in both shape and layout. For example, a shelf can be located anywhere in the environment, and there are many possible book arrangements within a shelf. The point clouds that are presented to the model reflect this diversity. Generalizing to such input variability is harder than generalizing to shape and pose variations for a single object, due to the combinatorially many arrangements and layouts of scenes. Moreover, the system must also generalize to any possible initial configuration of the object.
Given a dataset of final object-scene point clouds (obtained by transforming the observed object point cloud into its resultant configuration at the end of the demo), we can synthesize many initial object configurations as perturbations of the final point clouds. Using this data, we can naturally cast rearrangement prediction as _point cloud pose de-noising_. From a final object-scene point cloud, we create a "noised" point cloud by randomly transforming the object and train a neural network to predict how to transform the noised point cloud back into the original configuration (using the known perturbation for ground truth supervision). During deployment, we similarly predict a de-noising object transformation that satisfies the learned relation with the scene and use this predicted transformation as the rearrangement action. The robot executes the predicted rearrangement using a combination of grasp sampling, inverse kinematics, and motion planning.
Unfortunately, learning to de-noise from large perturbations in one step can be ineffective when considering multi-modality [3] - creating similar-looking noised point clouds with prediction targets that differ can lead the model to learn an average solution that fits the data poorly. We overcome this difficulty by training the predictor as a diffusion model [4, 5] to perform _iterative_ de-noising. By creating a _multi-step_ noising process, diffusion models are trained to _incrementally_ reverse the process one step at a time. Intuitively, early steps in this reverse process are closer to the ground truth and the associated prediction targets are more likely to be unique across samples - the prediction "looks more unimodal" to the model. The model similarly generates the test-time output in an iterative fashion. By starting this inference procedure from a diverse set of initial guesses, the predictions can converge to a diverse set of final solutions.
While iterative de-noising helps with multi-modality, we must consider how to support generalization to novel scene layouts. To achieve this, we propose to _locally encode_ the scene point cloud by cropping a region near the object. Locally cropping the input helps the model generalize by focusing on details in a local neighborhood and ignoring irrelevant and distant distractors. The features for representing smaller-scale patches can also be re-used across different spatial regions and scene instances [6, 7, 8, 9]. We use a larger crop size on the initial iterations because the inference procedure starts from random guesses that may be far from a good solution. As the solution converges over multiple iterations, we gradually reduce the crop size to emphasize a more local scene context.
Figure 1: By learning from a set of demonstrations of a rearrangement task, such as _place the book in the shelf_ (A) and _hang the mug on the rack_ (B), Relational Pose Diffusion (RPDiff) can produce _multiple_ transformations that achieve the same object-scene relationship for new object/scene pairs.
In summary, we present Relational Pose Diffusion (RPDiff), a method that performs 6-DoF relational rearrangement conditioned on an object and scene point cloud, that (1) generalizes across shapes, poses, and scene layouts, and (2) gracefully handles scenarios with multi-modality. We evaluate our approach in simulation and the real world on three tasks, (i) comparing to existing methods that either struggle with multi-modality and complex scenes or fail to achieve precise rearrangement, and (ii) ablating the various components of our overall pipeline.
## 2 Problem Setup
Our goal is to predict a set of \(\text{SE}(3)\) transformations \(\{\mathbf{T}_{k}\}_{k=1}^{K}\) that accomplish an object rearrangement task given the scene \(\mathbf{S}\) and the object \(\mathbf{O}\), represented as 3D point clouds (\(\mathbf{P}_{\mathbf{S}}\in\mathbb{R}^{M\times 3}\) and \(\mathbf{P}_{\mathbf{O}}\in\mathbb{R}^{N\times 3}\), respectively). By selecting (i.e., via a learned scoring function) and applying one transformation from this set, we can place the object in a manner that fulfills the desired geometric relationship with the scene. We assume the object point cloud is segmented from the whole scene, which does not have any additional segmented objects (e.g., we cannot segment any individual books on the shelf). We also assume a training dataset \(\mathcal{D}=\{(\mathbf{P}_{\mathbf{O}},\mathbf{P}_{\mathbf{S}})\}_{l=1}^{L}\) where each data point represents an object placed at the desired configuration. For example, \(\mathcal{D}\) could include point clouds of books and bookshelves (with different shapes, poses, and configurations of books on the shelf), and \(\text{SE}(3)\) transformations that place the books in one of the available slots. These demonstrations could come from a human or a scripted algorithm with access to ground truth object states in simulation.
Critically, depending on constraints imposed by other system components (e.g., available grasps, robot reachability, collision obstacles), the system must be capable of producing _multi-modal_ output transformations. Predicting diverse outputs enables searching for a placement that can be feasibly executed. For execution on a robot, the robot has access to a grasp sampler [10], inverse kinematics (IK) solver, and motion planner to support generating and following a pick-and-place trajectory.
## 3 Method
The main idea is to iteratively de-noise the 6-DoF pose of the object until it satisfies the desired geometric relationship with the scene point cloud. An overview of our framework is given in Fig. 2.
Figure 2: **Method Overview.** (A) Starting from an object and scene point cloud \(\mathbf{P}_{\mathbf{O}}\) and \(\mathbf{P}_{\mathbf{S}}\), we transform \(\mathbf{P}_{\mathbf{O}}\) to a diverse set of initial poses. RPDiff takes the initial object-scene point clouds as input, iteratively updates the object pose, and outputs a _set_ of object configurations that satisfy a desired relationship with the scene. This enables integrating RPDiff with a planner to search for a placement to execute while satisfying additional system constraints. (B) The model is trained to perform _iterative pose de-noising_. Starting from object-scene point clouds that satisfy the desired task, we apply a sequence of perturbations to the object and train the model to predict \(\text{SE}(3)\) transforms that remove the noise one step at a time. (C) To facilitate generalization to novel scene layouts, we crop the scene point cloud to the region near the object point cloud.
### Object-Scene Point Cloud Diffusion via Iterative Pose De-noising
We represent a rearrangement action \(\mathbf{T}\) as the output of a multi-step de-noising process for a combined object-scene point cloud, indexed by discrete time variable \(t=0,...,T\). This process reflects a transformation of the object point cloud in its initial noisy configuration \(\mathbf{P_{O}}^{(T)}\) to a final configuration \(\mathbf{P_{O}}^{(0)}\) that satisfies a desired relationship with the scene point cloud \(\mathbf{P_{S}}\), i.e., \(\mathbf{P_{O}}^{(0)}=\mathbf{TP_{O}}^{(T)}\). To achieve this, we train neural network \(f_{\theta}:\mathbb{R}^{N\times 3}\times\mathbb{R}^{M\times 3}\to\text{SE}(3)\) to predict an \(\text{SE}(3)\) transformation from the combined object-scene point cloud at each step. The network is trained as a diffusion model [4, 5] to incrementally reverse a manually constructed noising process that gradually perturbs the object point clouds until they match a distribution \(\mathbf{P_{O}}^{(T)}\sim p_{\mathbf{O}}^{(T)}(\cdot\mid\mathbf{P_{S}})\), which we can efficiently sample from during deployment to begin de-noising at test time.
**Test-time Evaluation.** Starting with \(\mathbf{P_{O}}\) and \(\mathbf{P_{S}}\), we sample \(K\) initial transforms \(\{\hat{\mathbf{T}}_{k}^{(I)}\}_{k=1}^{K}\) and apply these to \(\mathbf{P_{O}}\) to create initial object point clouds \(\{\hat{\mathbf{P}}_{\mathbf{O},k}^{(I)}\}_{k=1}^{K}\) where \(\hat{\mathbf{P}}_{\mathbf{O},k}^{(I)}=\hat{\mathbf{T}}_{k}^{(I)}\mathbf{P_{O}}\). For each of the \(K\) initial transforms, we then perform the following update for \(I\) steps.+ At each iteration \(i\):
\[\mathbf{T}_{\Delta}^{(i)}=\mathbf{T}_{\Delta}^{\text{Rand}}f_{\theta}\Big{(} \hat{\mathbf{P}}_{\mathbf{O}}^{(i)},\mathbf{P_{S}},\texttt{pos\_emb}\big{(}t \big{)}\Big{)}\qquad t=\texttt{i\_to\_t}(i) \tag{1}\]
\[\hat{\mathbf{T}}^{(i-1)}=\mathbf{T}_{\Delta}^{(i)}\hat{\mathbf{T}}^{(i)} \qquad\hat{\mathbf{P}}_{\mathbf{O}}^{(i-1)}=\mathbf{T}_{\Delta}^{(i)}\hat{ \mathbf{P}}_{\mathbf{O}}^{(i)} \tag{2}\]
The update \(\mathbf{T}_{\Delta}^{(i)}\) is formed by multiplying the denoising transform predicted by our model \(f_{\theta}\) with a perturbation transform \(\mathbf{T}_{\Delta}^{\text{Rand}}\) that is sampled from an iteration-conditioned normal distribution which converges toward deterministically producing an identity transform as \(i\) tends toward 0. In the de-noising process, \(\mathbf{T}_{\Delta}^{\text{Rand}}\) helps each of the \(K\) samples converge to different multi-modal pose basins (analogously to the perturbation term in Stochastic Langevin Dynamics [11]). The function pos_emb represents a sinusoidal position embedding. Since \(f_{\theta}\) is only trained on a finite set of \(t\) values (i.e., \(t=1,...,5\)) but we might want to perform the update in Eq. 2 for a larger number of steps, we use the function i_to_t to map the iteration \(i\) to a timestep value \(t\) that the model has been trained on. Details on external noise scheduling and mapping \(i\) to \(t\) can be found in Appendix A3.
Generally, we search through \(K\) solutions \(\{\hat{\mathbf{T}}_{k}^{(0)}\}_{k=1}^{K}\) for one that can be executed while satisfying all other constraints (e.g., collision-free trajectory). However, we also want a way to select a single output to execute assuming there are no other constraints to satisfy. We may also want to reject "locally optimal" solutions that fail to complete the desired task. To achieve this, we use a separate classifier \(h_{\phi}\) to score the predicted poses (i.e., \(s_{k}=h_{\phi}(\mathbf{P_{O}}_{k}^{(0)},\mathbf{P_{S}})\) where \(s\in[0,1]\)), such that the sample indexed with \(k_{\text{exec}}=\texttt{argmax}\ \{s_{k}\}_{k=1}^{K}\) can be selected for execution+.
Footnote β : See Appendix A7 for results showing that scoring with \(h_{\phi}\) performs better than, e.g., uniform output sampling
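To make the order of operations in Eqs. (1)-(2) concrete, the following sketch spells out the test-time refinement loop. It is an illustration only: `f_theta`, `h_phi`, `sample_init_pose`, `sample_perturbation`, `i_to_t`, and `pos_emb` are assumed callables with the interfaces implied by the text (not the released implementation), transforms are 4x4 homogeneous matrices, and the local scene cropping described later under Local Conditioning is omitted for brevity.

```python
import numpy as np

def apply_tf(T, P):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud P."""
    return (T[:3, :3] @ P.T).T + T[:3, 3]

def rpdiff_infer(P_O, P_S, f_theta, h_phi, sample_init_pose,
                 sample_perturbation, i_to_t, pos_emb, K=32, I=50):
    """Iterative pose de-noising (Eqs. 1-2), run from K random initial guesses.

    K and I are illustrative defaults, not values prescribed by this section.
    """
    candidates = []
    for _ in range(K):
        T_hat = sample_init_pose()               # random initial object pose
        P_hat = apply_tf(T_hat, P_O)
        for i in range(I, 0, -1):
            t = i_to_t(i)                        # map iteration to a trained timestep
            # Eq. (1): predicted de-noising step, composed with external noise
            T_step = sample_perturbation(i) @ f_theta(P_hat, P_S, pos_emb(t))
            # Eq. (2): accumulate the update and move the object point cloud
            T_hat = T_step @ T_hat
            P_hat = apply_tf(T_step, P_hat)
        candidates.append((h_phi(P_hat, P_S), T_hat))
    # return all K solutions ranked by classifier score (best first)
    return [T for _, T in sorted(candidates, key=lambda c: -c[0])]
```

A planner can then walk down this ranked list until it finds a placement that also satisfies grasp, reachability, and collision constraints.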
**Training.** Given a dataset sample \((\mathbf{P_{O}},\mathbf{P_{S}})\), we start with final "placed" object point cloud \(\mathbf{P_{O}}^{(0)}=\mathbf{P_{O}}\) and randomly sampled timestep \(t\in[1,T]\). We then obtain a perturbation transform \(\mathbf{T}_{\text{noise}}^{(t)}\) from a timestep-conditioned distribution with appropriately scaled variance and create a noised point cloud \(\mathbf{P_{O}}^{(t)}=\mathbf{T}_{\text{noise}}^{(t)}\mathbf{P_{O}}\). The task is to predict a transformation that takes one de-noising step as \(\hat{\mathbf{T}}_{\Delta}^{(t)}=f_{\theta}(\mathbf{P_{O}^{(t)}},\mathbf{P_{S}},\texttt{pos\_emb}(t))\). Network parameters \(\theta\) are trained to minimize a loss between the prediction \(\hat{\mathbf{T}}_{\Delta}^{(t)}\) and a ground truth target \(\mathbf{T}_{\Delta,\text{GT}}^{(t)}\). The loss is composed of the mean-squared translation error, a geodesic rotation distance error [12, 13], and the chamfer distance between the point cloud obtained by applying the predicted transform and the ground-truth next point cloud.
A natural target for \(f_{\theta}\) to predict is the inverse of the perturbation, i.e., \(\mathbf{T}_{\Delta,\text{GT}}^{(t)}=\mathbf{T}_{\text{noise,inv}}^{(t)}=\big{[} \mathbf{T}_{\text{noise}}^{(t)}\big{]}^{-1}\), to encourage recovering the original sample. However, as the perturbation magnitude varies across timesteps, this requires output predictions of different scales for different timesteps. In supervised learning with neural networks, it is advisable to keep the magnitudes of both input and output signals consistent in order to minimize large fluctuations in gradient magnitudes between samples [14]. For this reason, an alternative approach is to encourage the network to take shorter "unit steps" in the _direction_ of the original sample. We achieve this by uniformly interpolating the full
inverse perturbation as \(\{\mathbf{T}_{\text{interp}}^{(s)}\}_{s=1}^{t}=\texttt{interp}(\mathbf{T}_{\text{noise,inv}}^{(t)},t)\) and training the network to predict one interval in this interpolated set, i.e., \(\mathbf{T}_{\Delta,\text{GT}}^{(t)}=[\mathbf{T}_{\text{interp}}^{(t-1)}]^{-1}\mathbf{T}_{\text{interp}}^{(t)}\) (details in Appendix A2 and A7).
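The interpolated supervision described above can be generated as in the sketch below, which splits the inverse perturbation into \(t\) uniform steps using a rotation slerp and a linearly scaled translation. The paper defers the exact interpolation to its appendix, so this decomposition should be read as an assumed, illustrative variant rather than the authors' code.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interp_inverse_perturbation(T_noise, t):
    """Uniformly interpolate the inverse of a 4x4 perturbation into t intervals.

    Returns frames[0..t] with frames[0] = identity and frames[t] = inv(T_noise);
    the one-step ground-truth target for timestep t is then
    inv(frames[t-1]) @ frames[t].
    """
    T_inv = np.linalg.inv(T_noise)
    key_rots = Rotation.from_matrix(np.stack([np.eye(3), T_inv[:3, :3]]))
    slerp = Slerp([0.0, 1.0], key_rots)
    frames = []
    for s in range(t + 1):
        alpha = s / t
        T_s = np.eye(4)
        T_s[:3, :3] = slerp([alpha]).as_matrix()[0]   # interpolated rotation
        T_s[:3, 3] = alpha * T_inv[:3, 3]             # linearly ramped translation
        frames.append(T_s)
    return frames

# One-step training target for a sample perturbed with T_noise at timestep t:
# frames = interp_inverse_perturbation(T_noise, t)
# T_gt = np.linalg.inv(frames[t - 1]) @ frames[t]
```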
For the success classifier, we generate positive and negative rearrangement examples, where positives use the final demonstration point cloud, \(\mathbf{P_{O}}^{(0)}\), and negatives are obtained by sampling diverse perturbations of \(\mathbf{P_{O}}^{(0)}\). The classifier weights \(\phi\) (separate from weights \(\theta\)) are trained to minimize a binary cross-entropy loss between the predicted likelihood and the ground truth success labels.
### Architecture
We use a Transformer [15] for processing point clouds and making pose predictions. We use a Transformer because it learns to (i) identify important geometric parts _within_ the object and the scene and (ii) capture relationships that occur _between_ the important parts of the object and the scene. Starting with \(\mathbf{P_{O}}\) and \(\mathbf{P_{S}}\), we tokenize the point clouds to obtain input features. This can be performed by passing through a point cloud encoder [16; 17], but we simply downsample and append a one-hot feature to each point indicating whether it is part of the object or the scene. We then pass these input tokens through a Transformer encoder and decoder, which performs self-attention on the scene point cloud, and cross-attention between the scene and the object. This produces output features for each point, which are mean-pooled to obtain a global feature vector. The global feature is passed to a set of MLPs which predict the rotation \(\mathbf{R}\in\text{SO}(3)\) and a translation \(\mathbf{t}\in\mathbb{R}^{3}\). As in [10; 18], we represent the rotation by predicting vectors \(a\in\mathbb{R}^{3}\) and \(b\in\mathbb{R}^{3}\), finding the component of \(b\) that is orthogonal to \(a\), and normalizing to obtain \(\hat{a}\) and \(\hat{b}\). We then take a cross product to obtain \(\hat{c}=\hat{a}\times\hat{b}\), and construct \(\mathbf{R}\) as \(\begin{bmatrix}\hat{a}&\hat{b}&\hat{c}\end{bmatrix}\). We incorporate iteration \(t\) by passing \(\texttt{pos\_emb}(t)\) as a global token in the decoder and adding it to the global output feature. To predict success likelihood, we process point clouds with the same Transformer but output a single scalar followed by a sigmoid.
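For reference, the rotation construction just described can be written as a small batched PyTorch helper following the standard two-vector (6D) rotation parameterization; it reflects the procedure stated in the text rather than the authors' exact code.

```python
import torch

def rotation_from_ab(a, b):
    """Build R = [a_hat, b_hat, c_hat] (as columns) from predicted 3-vectors.

    a, b: tensors of shape (B, 3). The component of b orthogonal to a_hat is
    normalized to give b_hat, and c_hat = a_hat x b_hat completes a
    right-handed orthonormal frame, so R is a valid rotation matrix.
    """
    a_hat = torch.nn.functional.normalize(a, dim=-1)
    b_orth = b - (b * a_hat).sum(dim=-1, keepdim=True) * a_hat
    b_hat = torch.nn.functional.normalize(b_orth, dim=-1)
    c_hat = torch.cross(a_hat, b_hat, dim=-1)
    return torch.stack([a_hat, b_hat, c_hat], dim=-1)   # shape (B, 3, 3)
```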
### Local Conditioning
The approach described above conditions the transform regression on both the object and the scene. However, distant global information can act as a distraction and hamper both precision and generalization. Prior work has also observed this and suggested hard attention mechanisms on the input observation like cropping task-relevant regions to improve generalization by ignoring irrelevant distractors [8; 9]. Building on this intuition, we modify the scene point cloud by cropping \(\mathbf{P_{S}}\) to only include points that are near the current object point cloud \(\mathbf{P_{O}}^{(i)}\). Our modified pose prediction thus becomes \(\mathbf{\tilde{T}}_{\Delta}^{(i)}=f_{\theta}\Big{(}\mathbf{\tilde{P}}_{ \mathbf{O}}^{(i)},\mathbf{\bar{P}}_{\mathbf{S}}^{(i)},\texttt{pos\_emb}\big{(} \texttt{i\_to\_t}(i)\big{)}\Big{)}\) where \(\mathbf{\bar{P}}_{\mathbf{S}}^{(i)}=\texttt{crop}(\mathbf{\hat{P}}_{\mathbf{O} }^{(i)},\mathbf{P_{S}})\). The function crop returns the points in \(\mathbf{P_{S}}\) that are within an axis-aligned box centered at the mean of \(\mathbf{\hat{P}}_{\mathbf{O}}^{(i)}\). We try one variant of the crop function that returns a fixed-size crop, and another that adjusts the crop size depending on the iteration variable \(i\) (the size starts large and gradually decreases for later iterations).
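A minimal NumPy sketch of the cropping operation is shown below. The box half-widths and the linear shrinking schedule are placeholder values we assume for illustration; the crop sizes actually used by the method are not specified in this section.

```python
import numpy as np

def crop_scene(P_O_hat, P_S, half_extent):
    """Keep scene points inside an axis-aligned box centered at the object mean.

    P_O_hat: (N, 3) current object point cloud; P_S: (M, 3) scene point cloud;
    half_extent: scalar or length-3 array of box half-widths (meters).
    """
    center = P_O_hat.mean(axis=0)
    mask = np.all(np.abs(P_S - center) <= half_extent, axis=1)
    return P_S[mask]

def half_extent_schedule(i, I, largest=0.4, smallest=0.1):
    """Shrink the crop from `largest` to `smallest` as the iteration i -> 0
    (placeholder linear schedule)."""
    return smallest + (largest - smallest) * i / I
```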
## 4 Experiments: Design and Setup
Our quantitative experiments in simulation are designed to answer the following questions:
1. How well does RPDiff achieve the desired tasks compared to other methods for rearrangement?
2. How successful is RPDiff in producing a diverse set of transformations compared to baselines?
3. How does our performance change with different components modified or removed?
We also demonstrate RPDiff within a pick-and-place pipeline in the real world to further highlight the benefits of multi-modal generation and our ability to transfer from simulation to the real world.
### Task Descriptions and Training Data Generation
We evaluate our method on three tasks that emphasize multiple available object placements: (1) placing a book on a partially-filled bookshelf, (2) stacking a can on a stack of cans or an open shelf region, and (3) hanging a mug on one of many racks with many hooks. As a sanity check for our baseline implementations, we also include two easier versions of "mug on rack" tasks that are "less
multi-modal". These consist of (i) hanging a mug on one rack with a single hook and (ii) hanging a mug on one rack with two hooks. We programmatically generate \(\sim\)1k-3k demonstrations of each task in simulation with a diverse set of procedurally generated shapes (details in Appendix A2). We use each respective dataset to train both RPDiff and each baseline (one model for each task). For our real-world experiments, we directly transfer and deploy the models trained on simulated data.
### Evaluation Environment Setup
**Simulation.** We conduct quantitative experiments in the PyBullet [19] simulation engine. The predicted transform is applied to the object by simulating an insertion controller which directly actuates the object's center of mass (i.e., there is no robot in the simulator). The insertion is executed from a "pre-placement" pose that is offset from the predicted placement. This offset is obtained using prior knowledge about the task and the objects and is not predicted (see Appendix A6 for details). To quantify performance, we report the success rate over 100 trials, using the final simulator state to compute success. We also quantify coverage by comparing the set of predictions to a ground truth set of feasible solutions and computing the corresponding precision and recall. Details on the insertion controller, computation of \(\mathbf{T}^{\text{pre-place}}\), and the task success criteria can be found in the Appendix.
**Real World.** We also apply RPDiff to object rearrangement in the real world using a Franka Panda robotic arm with a Robotiq 2F140 parallel jaw gripper. We use four calibrated depth cameras to observe the tabletop environment. From the cameras, we obtain point clouds \(\mathbf{P_{O}}\) and \(\mathbf{P_{S}}\) of object \(\mathbf{O}\) and scene \(\mathbf{S}\) and apply our method to predict transformation \(\mathbf{T}\). \(\mathbf{T}\) is applied to \(\mathbf{O}\) by transforming an initial grasp pose \(\mathbf{T}_{\text{grasp}}\) (using a separate grasp predictor [10]) by \(\mathbf{T}\) to obtain a placing pose \(\mathbf{T}_{\text{place}}=\mathbf{T}\mathbf{T}_{\text{grasp}}\), and inverse kinematics and motion planning is used to reach \(\mathbf{T}_{\text{grasp}}\) and \(\mathbf{T}_{\text{place}}\).
### Baselines
**Coarse-to-Fine Q-attention (C2F-QA).** This method adapts the classification-based approach proposed in [8] to relational rearrangement. We train a fully convolutional network to predict a distribution of scores over a voxelized representation of the scene, denoting a heatmap over candidate translations of the object centroid. The model runs in a "coarse-to-fine" fashion by performing this operation multiple times over a smaller volume at higher resolutions. On the last step, we pool the voxel features and predict a distribution over a discrete set of rotations to apply to the object. We use our success classifier to rank the predicted transforms and execute the output with the top score.
**Relational Neural Descriptor Fields (R-NDF).** R-NDF [20] uses a neural field shape representation trained on category-level 3D models as a feature space wherein local coordinate frames can be matched via nearest-neighbor search. R-NDFs have been used to perform relational rearrangement tasks via the process of encoding and localizing task-relevant coordinate frames near the object parts that must align to achieve the desired rearrangement. We call this method "R-NDF-base" because it does not feature the additional energy-based model for refinement proposed in the original work.
**Neural Shape Mating (NSM) + CVAE.** Neural Shape Mating (NSM) [3] uses a Transformer to process a pair of point clouds and predict how to align them. Architecturally, NSM is the same as our relative pose regression model, with the key differences of (i) being trained on arbitrarily large perturbations of the demonstration point clouds, (ii) not using local cropping, and (iii) only making a single prediction. We call this baseline "NSM-base" because we do not consider the auxiliary signed-distance prediction and learned discriminator proposed in the original approach [3]. While the method performs well on unimodal tasks, the approach is not designed to handle multi-modality. Therefore, we modify NSM to act as a conditional variational autoencoder (CVAE) [21] to better enable learning from multi-modal data. We use NSM+CVAE to predict multiple transforms and execute the output with the top score produced by our success classifier.

\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Method** & **Mug/EasyRack** & **Mug/MedRack** & **Book/Shelf** & **Mug/Multi-MedRack** & **Can/Cabinet** \\ \hline
C2F Q-attn & 0.31 & 0.31 & 0.57 & 0.26 & 0.51 \\
R-NDF-base & 0.75 & 0.29 & 0.00 & 0.00 & 0.14 \\
NSM-base & 0.83 & 0.17 & 0.02 & 0.01 & 0.08 \\
NSM-base + CVAE & – & 0.39 & 0.17 & 0.27 & 0.19 \\
RPDiff (**ours**) & **0.92** & **0.83** & **0.94** & **0.86** & **0.85** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **Rearrangement success rates in simulation.** On tasks with a unimodal solution space and simpler scene geometry, each method performs well (see **Mug/EasyRack** task). However, on tasks involving more significant shape variation and multi-modality, RPDiff works better than all other approaches.
## 5 Results
### Simulation: Success Rate Evaluation
Table 1 shows the success rates achieved by each method on each task and highlights that our method performs best across the board. The primary failure mode from C2F-QA is low precision in the rotation prediction. Qualitatively, the C2F-QA failures are often close to a successful placement but still cause the insertion to fail. In contrast, our refinement procedure outputs very small rotations that can precisely align the object relative to the scene.
Similarly, we find R-NDF performs poorly on more complex scenes with many available placements. We hypothesize this is because R-NDF encodes scene point clouds into a global latent representation. Since the single set of latent variables must capture all possible configurations of the individual scene components, global encodings fail to represent larger-scale scenes with significant geometric variability [6; 7]. For instance, R-NDF can perform well with individual racks that all have a single hook, but fails when presented with multiple racks.
Finally, while NSM+CVAE improves upon the unimodal version of NSM, we find the generated transforms vary too smoothly between the discrete modes (e.g., book poses that lie in between the available shelf slots), an effect analogous to the typical limitation of VAE-based generators producing blurry outputs in image generation. We hypothesize this over-smoothing is caused by trying to make the approximate posterior match the unimodal Gaussian prior. This contrasts RPDiff's ability to "snap on" to the available placing locations in a given scene. More discussion on the performance obtained by the baseline methods and how they are implemented can be found in Appendix A6.
### Simulation: Coverage Evaluation
Next, we evaluate the ability to produce multi-modal outputs that cover the space of rearrangement solutions and examine the tradeoff between prediction quality and coverage. Since coverage is affected by the number of parallel runs we perform, we compute average recall and average precision for different values of \(K\) (the number of initial poses that are refined). Precision and recall are computed with respect to a set of ground truth rearrangement solutions for a given object-scene instance. We consider positive predictions as those that are within a 3.5cm position and 5-degree rotation threshold of a ground truth solution.
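The sketch below shows one way these precision and recall numbers can be computed from sets of predicted and ground-truth placement transforms, using the 3.5 cm and 5-degree thresholds stated above; the matching rule (a prediction counts as positive if any ground-truth solution lies within both thresholds) is our assumption of how the scoring is done.

```python
import numpy as np

def rotation_angle_deg(R1, R2):
    """Geodesic angle (degrees) between two 3x3 rotation matrices."""
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def precision_recall(pred_poses, gt_poses, trans_tol=0.035, rot_tol_deg=5.0):
    """pred_poses, gt_poses: lists of 4x4 placement transforms."""
    def close(Tp, Tg):
        return (np.linalg.norm(Tp[:3, 3] - Tg[:3, 3]) < trans_tol
                and rotation_angle_deg(Tp[:3, :3], Tg[:3, :3]) < rot_tol_deg)
    true_pos = sum(any(close(Tp, Tg) for Tg in gt_poses) for Tp in pred_poses)
    covered = sum(any(close(Tp, Tg) for Tp in pred_poses) for Tg in gt_poses)
    precision = true_pos / max(len(pred_poses), 1)
    recall = covered / max(len(gt_poses), 1)
    return precision, recall
```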
Fig. 3(a) shows the results for our approach along with C2F-QA, the best-performing baseline. We observe a trend of better coverage (higher recall) with more outputs for both approaches. For a modest value of \(K=32\), we observe RPDiff is able to cover over half of the available placement solutions on average, with C2F-QA achieving slightly lower coverage. However, we see a stark difference between the methods in terms of precision as the number of outputs is increased. C2F-QA suffers from more outputs being far away from any ground truth solution, while our approach maintains consistently high generation quality even when outputting upwards of 200 rearrangement poses.

Figure 3: (a) **Coverage evaluation in simulation.** Both RPDiff and C2F-QA achieve high placement coverage, but the prediction quality of C2F-QA reduces with an increase in coverage, while RPDiff produces outputs that remain precise while achieving high coverage. (b) **Cropping ablations.** Success rate of RPDiff with different kinds of scene point cloud conditioning. The increased success rate achieved when using local scene cropping highlights the generalization and precision benefits of focusing on a local spatial region.
### Simulation: Local Cropping Ablations and Modifications
Finally, we evaluate the benefits of introducing local scene conditioning into our relative pose regression model. Fig. 3(b) shows the performance variation of our method with different kinds of scene point cloud conditioning. We achieve the best performance with the version of local conditioning that varies the crop sizes on a per-iteration basis. Using a fixed crop size marginally reduces performance, while conditioning on the whole uncropped scene point cloud performs much worse. This highlights the generalization and precision benefits of focusing on a local spatial region near the object in its imagined configuration. It also suggests an advantage of using a coarse-to-fine approach that considers a larger region on earlier iterations. Additional results examining the effect of the success classifier, external noise, and parameterization of i_to_t can be found in Appendix A7.
### Real World: Object rearrangement via pick-and-place
Finally, we use RPDiff to perform relational rearrangement via pick-and-place on real-world objects and scenes. Fig. 1 and Fig. 4 show the robot executing _multiple_ inferred placements on our three tasks. We relied on our approach's ability to output multiple solutions, as some geometrically valid placements were not kinematically feasible for the robot based on its workspace limits and the surrounding collision geometry. Please see the supplemental video for real-world execution.
## 6 Related Work
**Object Rearrangement from Perception**. Object rearrangement prediction using perceptual inputs has been an area of growing interest [22; 23; 24; 25; 20; 26; 27; 28; 29; 3; 40; 41; 42; 43; 44; 45; 46; 47; 48]. One straightforward method is end-to-end training to directly regress the relative transformation, as in Neural Shape Mating (NSM) [3]. Others have explored identifying task-relevant object parts and then solving for the desired alignment, as in TAX-Pose and R-NDF [20; 40; 48]. However, many of these approaches in their naive form struggle when there is _multi-modality_ (NSM and TAX-Pose can only output a single solution). There has been success addressing multi-modality by performing classification over a discretized version of the search space [42; 44; 46; 47; 49], but these methods are typically less precise.
**Denoising Diffusion and Iterative Regression**. Diffusion models [4; 50] use an iterative de-noising process to perform generative modeling. While they were originally designed for generating images, they have been extended to other domains including waveforms [51; 52], 3D shapes [53; 54], and decision-making [55, 56, 57]. In robotics, diffusion models (and related energy-based models) have been applied to policy learning [58, 59], trajectory optimization [60, 61, 62, 63], grasping [56], and object rearrangement [20, 41]. Iterative regression has also been successful in domains such as pose estimation [64, 65, 66, 67], and recent work has illustrated connections between iterative prediction and de-noising diffusion [68, 69].

Figure 4: **Real-world multi-modal rearrangement.** Executing Can/Cabinet (A), Book/Shelf (B), and Mug/Rack (C) in the real world. For each task, the initial object-scene configuration is shown in the top-left image, and examples of executing multiple inferred placements are shown in the main image sequence.
SE(3)-DiffusionFields [56] integrate learned 6-DoF grasp distributions within a trajectory optimization framework, and LEGO-Net [57] employs iterative de-noising to generate realistic-looking room layouts. Our work differs in that we do not assume known object states or 3D models. Most similar to our work, StructDiffusion [41] uses a diffusion model to perform language-conditioned object rearrangement with point clouds. While the focus in [41] is to rearrange multiple objects into abstract structures (e.g., circles, lines) specified via natural language, we emphasize covering all rearrangement modes and integrating with sampling-based planners.
## 7 Limitations and Conclusion
**Limitations**. The amount of demonstration data we use can only be easily obtained via scripted policies in simulation. Future work could explore pre-trained representations and multi-task learning to reduce data requirements for new tasks. We also suffer from some amount of sim2real gap due to training on simulated point clouds. Incorporating broader sim2real transfer techniques and finetuning on real-world data would be useful to investigate. Finally, we execute the predicted placement open loop. These types of rearrangement tasks would benefit from a closed-loop policy that can track progress toward the predicted rearrangement goal and react to/recover from disturbances.
**Conclusion**. This work presents an approach for rearranging objects in a scene to achieve a desired placing relationship, while operating with novel geometries, poses, and scene layouts. Our system can produce multi-modal distributions of object transformations for rearrangement, overcoming the difficulty of fitting multi-modal demonstration datasets and facilitating integration with planning algorithms that require diverse actions to search through. Our results illustrate the capabilities of our framework across a diverse range of rearrangement tasks involving objects and scenes that present a large number of feasible rearrangement solutions.
## 8 Acknowledgement
The authors would like to thank NVIDIA Seattle Robotics Lab members and the MIT Improbable AI Lab for their valuable feedback and support in developing this project. In particular, we would like to acknowledge Idan Shenfeld, Anurag Ajay, and Antonia Bronars for helpful suggestions on improving the clarity of the draft. This work was partly supported by Sony Research Awards and Amazon Research Awards. Anthony Simeonov is supported in part by the NSF Graduate Research Fellowship.
**Author Contributions**
**Anthony Simeonov** conceived the overall project goals, investigated several approaches for addressing multi-modality in rearrangement prediction, implemented the pose diffusion framework, wrote all the code, ran simulation and real-world experiments, and was the primary author of the paper.
**Ankit Goyal** advised the project, made technical suggestions on clarifying the method and improving the experimental evaluation, and supported iteration on obtaining real robot results, and helped with writing the paper.
**Lucas Manuelli** engaged in research discussions about rearrangement prediction, suggested initial ideas for addressing multi-modality, advised the project in its early stages, and provided valuable feedback on the paper.
**Lin Yen-Chen** supported early project brainstorming, helped develop direct connections with diffusion models, gave feedback on evaluation tasks, and helped edit the paper.
**Alina Sarmiento** helped implement the framework on the real robot and implemented the grasp generation model that enabled the pick-and-place demos on the Franka Panda.
**Alberto Rodriguez** engaged in technical discussions on the connections to iterative optimization methods and integrating the framework in the context of a sampling-based planner.
**Pulkit Agrawal** suggested connections to work on iterative regression that came before diffusion models, helped clarify key technical insights on the benefits of iterative prediction, suggested ablations, helped with paper writing/editing, and co-advised the project.
**Dieter Fox** was involved in technical discussions on relational tasks involving object part interactions, proposed some of the evaluation tasks, helped formalize connections to other related work, and advised and supported the overall project.
|
2307.15884 | Fusing Sparsity with Deep Learning for Rotating Scatter Mask Gamma
Imaging | Many nuclear safety applications need fast, portable, and accurate imagers to
better locate radiation sources. The Rotating Scatter Mask (RSM) system is an
emerging device with the potential to meet these needs. The main challenge is
the under-determined nature of the data acquisition process: the dimension of
the measured signal is far less than the dimension of the image to be
reconstructed. To address this challenge, this work aims to fuse model-based
sparsity-promoting regularization and a data-driven deep neural network
denoising image prior to perform image reconstruction. An efficient algorithm
is developed and produces superior reconstructions relative to current
approaches. | Yilun Zhu, Clayton Scott, Darren Holland, George Landon, Aaron Fjeldsted, Azaree Lintereur | 2023-07-29T04:04:19Z | http://arxiv.org/abs/2307.15884v1 | # Fusing Sparsity with Deep Learning for Rotating Scatter Mask Gamma Imaging
###### Abstract
Many nuclear safety applications need fast, portable, and accurate imagers to better locate radiation sources. The Rotating Scatter Mask (RSM) system is an emerging device with the potential to meet these needs. The main challenge is the under-determined nature of the data acquisition process: the dimension of the measured signal is far less than the dimension of the image to be reconstructed. To address this challenge, this work aims to fuse model-based sparsity-promoting regularization and a data-driven deep neural network denoising image prior to perform image reconstruction. An efficient algorithm is developed and produces superior reconstructions relative to current approaches.
## I Introduction
Localizing radioactive material is particularly important for nuclear safety applications, such as environmental monitoring and searching for orphan sources. There is a growing need for portable imaging systems that give accurate results in real time. Coded aperture systems [1] offer such capabilities, but have a limited field-of-view (FOV). Recently, a novel time-encoded gamma-ray detection system called the Rotating Scatter Mask (RSM) [2] with a near \(4\pi\) FOV was developed.
For the emerging RSM system, novel and suitable image reconstruction methods are needed. One of the biggest challenges of RSM imaging is the limited number of measurements: the goal is to recover an image of size \(m\times n\) from \(n\) measurements. Previously, Olesen et al. [3] proposed an optimization-based method by using maximum-likelihood expectation-maximization (ML-EM) with median root prior [4]. Later, they developed an end-to-end learning-based method [5] that exhibited improved performance.
This paper leverages both analytical sparsity-promoting \(\ell_{1}\) regularization and data-driven neural network-based smoothness prior. Combining these two regularizations is valid due to the structure of images of interest: sparse radiation sources that are located in containers with regular geometries.
## II Rotating Scatter Mask Imaging
Fig. 1 shows the schematic of an RSM system. A single scintillating detector is covered by a mask made of homogeneous poly(methyl methacrylate) (PMMA) [3]. As the mask spins, the recorded measurement is a time-varying noisy signal \(\mathbf{y}=[y_{1}\dots y_{n}]^{\prime}\) called the detector response curve (DRC), viewed as a column vector. For simplicity, assume the mask rotates one cycle and \(n\) denotes the number of mask elements along the horizontal direction. At time index \(i\), \(y_{i}\) represents the total number of gamma rays detected during the \(i\)-th time interval.
The detector response matrix (DRM) is expressed via an \(m\times n\) matrix \(\mathbf{D}\), where \(m\) is the total number of mask elements along the vertical direction. Each element \(\mathbf{D}_{ij}\) represents the expected detector response due to a source located at the center of the \((i,j)\)-th discretized mask voxel [7]. The image \(\mathbf{A}\) to be reconstructed is also \(m\times n\). Mathematically, the measured DRC is produced by convolving an image with the DRM along the horizontal direction. The ideal measured DRC, denoted \(\mathbf{s}\), can thus be expressed as a sum of 1D convolutions
\[\mathbf{s}=\sum_{i=1}^{m}\mathbf{D}_{i,:}\oplus\mathbf{A}_{i,:}=:\Phi\mathbf{a}, \tag{1}\]
where \(\mathbf{D}_{i,:}\) and \(\mathbf{A}_{i,:}\) denote the \(i\)-th row of matrices \(\mathbf{D}\) and \(\mathbf{A}\), respectively, \(\Phi\) is the concatenation of circulant matrices generated from \(\mathbf{D}_{i,:}\), and \(\mathbf{a}\) is the vectorized image. The actual measured signal \(\mathbf{y}\) is a noisy version of the ideal signal \(\mathbf{s}\).
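For illustration (a sketch of the forward model above, not code from the original work; array shapes and function names are our own), \(\Phi\) never needs to be formed explicitly: each image row is circularly convolved with the corresponding DRM row via the FFT and the results are summed, and the adjoint is the corresponding row-wise circular correlation.

```python
import numpy as np

def apply_forward(D, A):
    """Ideal DRC s = sum_i D[i, :] (circular conv) A[i, :]  (Eq. 1).

    D : (m, n) detector response matrix (DRM)
    A : (m, n) source image
    Returns the length-n ideal detector response curve.
    """
    # Circular convolution of length-n rows via the FFT, then sum over rows.
    S = np.fft.ifft(np.fft.fft(D, axis=1) * np.fft.fft(A, axis=1), axis=1)
    return np.real(S.sum(axis=0))

def apply_adjoint(D, s):
    """Adjoint Phi' applied to a length-n signal s, returned as an (m, n) image."""
    # The adjoint of circular convolution with d_i is circular correlation with d_i.
    out = np.fft.ifft(np.conj(np.fft.fft(D, axis=1)) * np.fft.fft(s)[None, :], axis=1)
    return np.real(out)
```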
The challenge of this image reconstruction task stems from the limited dimension of the measured signal compared to image size. Romberg [8] has analyzed a similar problem and
Fig. 1: Rotating Scatter Mask system, with an interior detector that measures a time/angle varying signal. Figure used with permission [6].
found a sufficient condition for sparse image recovery to be \(n>O\left(\log^{4}mn\right)\). However, this condition is violated for RSM imaging tasks when \(m=75\) and \(n=180\). This violation indicates that sparsity alone is not enough and that more advanced image reconstruction algorithms are needed.
## III Alternating direction method of multipliers (ADMM) \(\&\) Plug-and-Play (PnP)
Image \(\boldsymbol{a}\) is estimated from measurement \(\boldsymbol{y}\) by means of regularization-based image reconstruction, which seeks the image that best explains the observations, while also conforming to prior expectations. To be specific, the goal is to solve an optimization problem
\[\min_{\boldsymbol{a}}f(\boldsymbol{a})+h(\boldsymbol{a}), \tag{2}\]
where \(f(\boldsymbol{a})\) measures data fidelity and \(h(\boldsymbol{a})\) encodes the image prior. This paper focuses on using the alternating direction method of multipliers (ADMM) [9, 10] to solve the above objective. In this section, we provide a general overview of ADMM, and in the next section, we apply this technique to our specific model.
### _Standard ADMM_
The idea of ADMM is to convert (2) into a constrained form by variable splitting
\[\min_{\boldsymbol{a},\boldsymbol{b}}f(\boldsymbol{a})+h(\boldsymbol{b}), \quad\mathrm{s.\,t.}\quad\boldsymbol{a}=\boldsymbol{b}, \tag{3}\]
and consider its augmented Lagrangian form
\[L_{\rho}(\boldsymbol{a},\boldsymbol{b},\boldsymbol{w})=f(\boldsymbol{a})+h( \boldsymbol{b})+\langle\boldsymbol{a}-\boldsymbol{b},\boldsymbol{w}\rangle+ \frac{\rho}{2}||\boldsymbol{b}-\boldsymbol{a}||_{2}^{2}, \tag{4}\]
where \(\boldsymbol{w}\) is the Lagrange multiplier and \(\rho>0\). Then, minimize (4) by solving a sequence of sub-problems
\[\boldsymbol{a}^{(k+1)} \leftarrow\arg\min_{\boldsymbol{a}}L_{\rho}\left(\boldsymbol{a}, \boldsymbol{b}^{(k)},\boldsymbol{w}^{(k)}\right), \tag{5}\] \[\boldsymbol{b}^{(k+1)} \leftarrow\arg\min_{\boldsymbol{b}}L_{\rho}\left(\boldsymbol{a}^ {(k+1)},\boldsymbol{b},\boldsymbol{w}^{(k)}\right),\] (6) \[\boldsymbol{w}^{(k+1)} \leftarrow\boldsymbol{w}^{k}+\rho\left(\boldsymbol{a}^{(k+1)}- \boldsymbol{b}^{(k+1)}\right), \tag{7}\]
where the update of each sub-problem often has a closed-form solution for various choices of \(f(\cdot)\) and \(h(\cdot)\).
### _Plug-and-Play ADMM_
The modular structure of ADMM updates allows us to incorporate powerful data-driven methods into the optimization scheme. To be specific, (6) can be viewed as an image denoising problem as shown in (8) and the image prior \(h(\cdot)\) is encoded via the proximal operator \(H_{\sigma}(\cdot)\) associated with it in (9), where \(\sigma=\sqrt{1/\rho}\).
\[\boldsymbol{b}^{(k+1)} \leftarrow\arg\min_{\boldsymbol{b}}\frac{1}{2\sigma^{2}}\left\| \boldsymbol{b}-(\boldsymbol{a}^{(k+1)}+\boldsymbol{w}^{(k)})\right\|_{2}^{2}+h (\boldsymbol{b}), \tag{8}\] \[=H_{\sigma}\left(\boldsymbol{a}^{(k+1)}+\boldsymbol{w}^{(k)} \right). \tag{9}\]
For example, if \(h(\boldsymbol{b})=\left\|\boldsymbol{b}\right\|_{1}\), then \(H_{\sigma}(\cdot)\) is the soft-thresholding operator. Building upon this, Venkatakrishnan et al. [11] proposed the Plug-and-Play (PnP) ADMM framework by plugging advanced image denoising methods into \(H_{\sigma}(\cdot)\), which do not necessarily correspond to an analytical image prior. This approach produces state-of-the-art results in many imaging tasks [12, 13, 14, 15].
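For concreteness, a minimal sketch (our own illustration, not a library API): the proximal step for the \(\ell_{1}\) prior has the closed form below, and PnP simply swaps a denoiser callable in for \(H_{\sigma}(\cdot)\).

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1; this is H_sigma for h(b) = lambda*||b||_1
    (with threshold tau = lambda * sigma**2)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pnp_prox(v, sigma, denoiser):
    """Plug-and-Play replacement for H_sigma: any pre-trained denoiser taking a
    noisy image and a noise level can be plugged in instead of an analytical prox."""
    return denoiser(v, sigma)
```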
## IV Proposed Method
We propose combining sparsity with deep learning for RSM imaging. Data fidelity is assessed by the squared error. The _a priori_ belief is encoded by a sparsity-promoting \(\ell_{1}\) penalty combined with a "deep denoiser prior" [16] represented by a convolutional neural network (CNN)
\[\min_{\boldsymbol{a}}\frac{1}{2}\left\|\boldsymbol{y}-\Phi\boldsymbol{a}\right\| _{2}^{2}+\lambda\left\|\boldsymbol{a}\right\|_{1}+\gamma\left\|\boldsymbol{a} \right\|_{\text{CNN}}, \tag{10}\]
where \(\lambda,\gamma\) are hyperparameters, and \(\left\|\cdot\right\|_{\text{CNN}}\) is a pseudo-prior represented by a CNN [16]. Previous works using deep denoiser priors have not addressed sparse images like those arising in nuclear security applications, making this approach novel. The above optimization problem is solved with ADMM by first performing variable splitting
\[\min_{\boldsymbol{a},\boldsymbol{b},\boldsymbol{c}}\frac{1}{2}\left\| \boldsymbol{y}-\Phi\boldsymbol{a}\right\|_{2}^{2}+\lambda\left\|\boldsymbol{b }\right\|_{1}+\gamma\left\|\boldsymbol{c}\right\|_{\text{CNN}}\quad\mathrm{s.\,t.}\quad\boldsymbol{a}=\boldsymbol{b},\boldsymbol{a}=\boldsymbol{c}, \tag{11}\]
and then writing out the augmented Lagrangian
\[L_{\rho_{1},\rho_{2}}(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c },\boldsymbol{w}_{1},\boldsymbol{w}_{2}) =\frac{1}{2}\left\|\boldsymbol{y}-\Phi\boldsymbol{a}\right\|_{2} ^{2}+\lambda\left\|\boldsymbol{b}\right\|_{1}+\gamma\left\|\boldsymbol{c} \right\|_{\text{CNN}}\] \[+\langle\boldsymbol{b}-\boldsymbol{a},\boldsymbol{w}_{1}\rangle+ \frac{\rho_{1}}{2}||\boldsymbol{b}-\boldsymbol{a}||_{2}^{2}\] \[+\langle\boldsymbol{c}-\boldsymbol{a},\boldsymbol{w}_{2}\rangle+ \frac{\rho_{2}}{2}||\boldsymbol{c}-\boldsymbol{a}||_{2}^{2}. \tag{12}\]
Our alternating minimization algorithm successively updates according to the data-fitting, sparsity-promoting and CNN-denoising terms, respectively
\[\boldsymbol{a}^{(k+1)} \leftarrow\left(\Phi^{\prime}\Phi+\rho_{1}I+\rho_{2}I\right)^{-1}.\] \[\left[\Phi^{\prime}\boldsymbol{y}+\rho_{1}\left(\boldsymbol{b}^ {(k)}+\frac{1}{\rho_{1}}\boldsymbol{w}_{1}^{(k)}\right)+\rho_{2}\left( \boldsymbol{c}^{(k)}+\frac{1}{\rho_{2}}\boldsymbol{w}_{2}^{(k)}\right) \right], \tag{13}\] \[\boldsymbol{b}^{(k+1)} \gets S_{\lambda/\rho_{1}}\left[\boldsymbol{a}^{(k+1)}-\frac{1}{ \rho_{1}}\boldsymbol{w}_{1}^{(k)}\right],\] (14) \[\boldsymbol{c}^{(k+1)} \leftarrow\text{Denoiser}_{\gamma/\rho_{2}}\left[\boldsymbol{a}^ {(k+1)}-\frac{1}{\rho_{2}}\boldsymbol{w}_{2}^{(k)}\right], \tag{15}\]
where \(S_{\lambda/\rho_{1}}[\cdot]\) denotes the soft-thresholding operator with threshold \(\lambda/\rho_{1}\) and \(\text{Denoiser}_{\gamma/\rho_{2}}(\cdot)\) is a pre-trained CNN denoiser proposed in [16], which takes a noisy image and noise variance \(\gamma/\rho_{2}\) as input.
Equation (13) requires heavy computation, \(\mathcal{O}(m^{2}n^{2}\log mn)\), because it involves inverting a matrix of size \(mn\times mn\). Fortunately, by leveraging the convolutional structure in \(\Phi\), the algorithm can be substantially accelerated to \(\mathcal{O}(mn\log n)\). Through the use of the matrix inversion lemma and the discrete Fourier transform, (13) becomes
\[\boldsymbol{a}^{(k+1)}\leftarrow\frac{1}{\rho}\boldsymbol{x}^{(k)}-\frac{1}{ \rho^{2}}\Phi^{\prime}\mathcal{F}^{-1}\left\{\frac{\mathcal{F}\left(\Phi \boldsymbol{x}^{(k)}\right)}{\boldsymbol{1}+\frac{1}{\rho}\sum\limits_{i=1}^{m} \left|\mathcal{F}(\boldsymbol{d}_{i})\right|^{2}}\right\}, \tag{16}\]
where \(\mathcal{F}(\cdot)\) denotes the Fourier transform, \(\mathbf{d}_{i}\) is the \(i\)-th row of the DRM, \(\mathbf{x}^{(k)}:=\Phi^{\prime}\mathbf{y}+\rho_{1}\left(\mathbf{b}^{(k)}+\frac{1}{\rho_{1}} \mathbf{w}_{1}^{(k)}\right)+\rho_{2}\left(\mathbf{c}^{(k)}+\frac{1}{\rho_{2}}\mathbf{w}_{2}^{ (k)}\right)\), \(\rho=\rho_{1}+\rho_{2}\), and the operations inside the inverse Fourier transform are performed element-wise. The overall algorithm is shown in Algorithm 1.
```
Input: measurement \(\mathbf{y}\), system matrix \(\Phi\), deep denoiser model, number of iterations \(K\), hyperparameters \(\lambda,\gamma,\rho_{1},\rho_{2}>0\).
1: Initialize \(\mathbf{a}^{(0)},\mathbf{b}^{(0)},\mathbf{c}^{(0)},\mathbf{w}_{1}^{(0)},\mathbf{w}_{2}^{(0)}\) to zero.
2: for \(k=0\) to \(K-1\) do
3:   \(\mathbf{x}^{(k)}:=\Phi^{\prime}\mathbf{y}+\rho_{1}\left(\mathbf{b}^{(k)}+\frac{1}{\rho_{1}}\mathbf{w}_{1}^{(k)}\right)+\rho_{2}\left(\mathbf{c}^{(k)}+\frac{1}{\rho_{2}}\mathbf{w}_{2}^{(k)}\right)\), with \(\rho=\rho_{1}+\rho_{2}\)
4:   \(\mathbf{a}^{(k+1)}\leftarrow\frac{1}{\rho}\mathbf{x}^{(k)}-\frac{1}{\rho^{2}}\Phi^{\prime}\mathcal{F}^{-1}\left\{\frac{\mathcal{F}\left(\Phi\mathbf{x}^{(k)}\right)}{\mathbf{1}+\frac{1}{\rho}\sum_{i=1}^{m}\left|\mathcal{F}(\mathbf{d}_{i})\right|^{2}}\right\}\)
5:   \(\mathbf{b}^{(k+1)}\leftarrow S_{\lambda/\rho_{1}}\left[\mathbf{a}^{(k+1)}-\frac{1}{\rho_{1}}\mathbf{w}_{1}^{(k)}\right]\)
6:   \(\mathbf{w}_{1}^{(k+1)}\leftarrow\mathbf{w}_{1}^{(k)}+\rho_{1}\left(\mathbf{b}^{(k+1)}-\mathbf{a}^{(k+1)}\right)\)
7:   \(\mathbf{c}^{(k+1)}\leftarrow\text{Denoiser}_{\gamma/\rho_{2}}\left[\mathbf{a}^{(k+1)}-\frac{1}{\rho_{2}}\mathbf{w}_{2}^{(k)}\right]\)
8:   \(\mathbf{w}_{2}^{(k+1)}\leftarrow\mathbf{w}_{2}^{(k)}+\rho_{2}\left(\mathbf{c}^{(k+1)}-\mathbf{a}^{(k+1)}\right)\)
9: end for
Output: Reconstructed image \(\widehat{\mathbf{a}}=\mathbf{a}^{(K)}\).
```
**Algorithm 1**RSM image reconstruction by fusing sparsity with deep learning. \(S[\cdot]\) denotes soft-thresholding operator. \(\mathbf{w}_{i}\) refers to Lagrange multipliers that arise within the ADMM framework, one for each penalty term.
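A minimal NumPy sketch of Algorithm 1 (an illustration under stated assumptions, not the authors' released code) is given below. It reuses the `apply_forward`, `apply_adjoint`, and `soft_threshold` helpers sketched earlier, treats the pre-trained CNN of [16] as a generic `denoiser(image, noise_variance)` callable, and keeps \(\rho_{1},\rho_{2}\) constant for brevity; the default \(\lambda\), \(\gamma\), \(K\) echo the tuned values reported later.

```python
import numpy as np

def rsm_reconstruct(y, D, denoiser, K=300, lam=0.36, gam=0.23, rho1=1.0, rho2=1.0):
    """l1-CNN reconstruction of an (m, n) image from a length-n DRC y (Algorithm 1)."""
    m, n = D.shape
    rho = rho1 + rho2
    # Denominator of Eq. (16): 1 + (1/rho) * sum_i |F(d_i)|^2, a length-n spectrum.
    denom = 1.0 + (np.abs(np.fft.fft(D, axis=1)) ** 2).sum(axis=0) / rho

    a = np.zeros((m, n)); b = np.zeros((m, n)); c = np.zeros((m, n))
    w1 = np.zeros((m, n)); w2 = np.zeros((m, n))

    for _ in range(K):
        # a-update, Eq. (16): matrix inversion lemma + FFT instead of an mn x mn inverse.
        x = apply_adjoint(D, y) + rho1 * (b + w1 / rho1) + rho2 * (c + w2 / rho2)
        filt = np.real(np.fft.ifft(np.fft.fft(apply_forward(D, x)) / denom))
        a = x / rho - apply_adjoint(D, filt) / rho**2

        # b-update, Eq. (14): soft-thresholding enforces sparsity.
        b = soft_threshold(a - w1 / rho1, lam / rho1)
        w1 = w1 + rho1 * (b - a)

        # c-update, Eq. (15): Plug-and-Play CNN denoising with noise variance gam/rho2.
        c = denoiser(a - w2 / rho2, gam / rho2)
        w2 = w2 + rho2 * (c - a)

    return a
```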
## V Experiments
In this section, results of the proposed algorithm are presented for synthetically generated noisy data.
### _Experimental Setup_
The 1" cylindrical CsI detector's response (i.e., elements of DRM) within the full-energy peak was simulated using 662 keV photons located 86.36 cm from the detector center in Monte Carlo N-Particle (MCNP) 6.2.0 for every mask voxel with enough particles to produce a relative error below 3% for each position.
The algorithm was tested on a suite of 20 sparse but spatially distributed images containing three categories of shapes (disc, ring, and square). These shapes mimic different cross-sections of nuclear waste containers and contaminated storage drums. Among the 20 testing images there were 6 discs, 6 rings and 8 squares, each located at a different angular position (which distorts the regular shapes when viewed in the spherical coordinate system) and having a different size.
For a given true image \(\mathbf{a}\), \(\mathbf{y}=\Phi\mathbf{a}+\mathbf{\epsilon}\) was simulated, where \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\), such that \(\mathbf{y}\) has an average intensity of \(10,000\) counts, and where \(\sigma^{2}=10,000\). Since the average intensity is large, the Gaussian noise model is a good approximation of Poisson measurements.
The quality of the reconstructed result on a specific image was evaluated via normalized root mean square error (NRMSE)
\[\text{NRMSE}=\frac{\sqrt{\frac{1}{mn}\sum_{i}\left|\widehat{\mathbf{a}}[i]-\mathbf{a} [i]\right|^{2}}}{\sqrt{\frac{1}{mn}\sum_{i}\left|\mathbf{a}[i]\right|^{2}}}=\frac{ \left\|\widehat{\mathbf{a}}-\mathbf{a}\right\|_{2}}{\left\|\mathbf{a}\right\|_{2}},\]
where \(\mathbf{a}\) is the ground truth image and \(\widehat{\mathbf{a}}\) is the estimated image.
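The evaluation pipeline can be sketched as follows (our own helper code; the intensity-scaling step is an assumption about how the 10,000-count average was imposed, and `apply_forward` is the operator sketched earlier):

```python
import numpy as np

def nrmse(a_hat, a):
    """Normalized root mean square error between estimate a_hat and ground truth a."""
    return np.linalg.norm(a_hat - a) / np.linalg.norm(a)

def simulate_drc(D, A, mean_counts=10_000.0, sigma2=10_000.0, rng=None):
    """Noisy DRC y = Phi*a + eps with eps_i ~ N(0, sigma2), scaled so the clean
    signal has the target average intensity (assumed scaling convention)."""
    rng = np.random.default_rng() if rng is None else rng
    s = apply_forward(D, A)
    s = s * (mean_counts / s.mean())
    return s + rng.normal(0.0, np.sqrt(sigma2), size=s.shape)
```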
### _Choice of Parameters_
We tuned the hyperparameter of the proposed algorithm on a validation dataset, leading to
\[\lambda=0.36,\quad\gamma=0.23,\quad K=300.\]
As for the choice of \(\rho_{1},\rho_{2}\), we set them equal \(\rho^{(k)}:=\rho_{1}^{(k)}=\rho_{2}^{(k)}\) and adopt the common strategy of gradually increasing the value as a function of iteration index \(k\)[16], resulting in \(\rho^{(1)}<\cdots<\rho^{(k)}<\cdots<\rho^{(K)}\).
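The text only requires \(\rho^{(k)}\) to increase with \(k\); one simple concrete choice (an assumption on our part, in the spirit of the schedules used in [16]) is a geometric ramp:

```python
import numpy as np

def rho_schedule(K, rho_start=1.0, rho_end=100.0):
    """Monotonically increasing rho^(1) < ... < rho^(K); the geometric ramp and its
    endpoints are illustrative choices, not values from the paper."""
    return np.geomspace(rho_start, rho_end, num=K)
```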
### _Reconstruction Results_
The proposed method (\(\ell_{1}-\)CNN) was compared with MLEM-MRP [3] and \(\ell_{1}\) reconstruction [17]. MLEM-MRP was implemented according to [3] and used the suggested hyperparameters. For \(\ell_{1}\) reconstruction, an iterative solver was implemented via ADMM.
Table I shows that \(\ell_{1}-\)CNN achieves the best results for all sources of interest. On average, our method improves the NRMSE by more than \(30\%\). Using only the denoising CNN [16] without \(\ell_{1}\) regularization did not lead to meaningful results since the reconstructed images were not sparse (and is therefore not reported in the table).
Fig. 2 shows 5 representative reconstruction results. Our proposed method (\(\ell_{1}-\)CNN) performed best both visually and quantitatively (in terms of NRMSE) in all cases. For the reconstruction of square sources, although the NRMSE values were small, artifacts were visually obvious. This leaves room for further improvement, for example by leveraging deep unrolling [18] of the iterative algorithm and training the corresponding architecture end-to-end.
## VI Conclusion
This paper develops a novel gamma source image reconstruction algorithm for the RSM system. The ADMM algorithm was leveraged to fuse a model-based analytical image prior with the latest data-driven approaches. The results show that the method greatly improves the reconstruction NRMSE
\begin{table}
\begin{tabular}{c|c c c} & MLEM-MRP & \(\ell_{1}\) & \(\ell_{1}-\)CNN \\ \hline Disc & 0.703 & 1.41 & **0.340** \\ Ring & 0.894 & 1.15 & **0.780** \\ Square & 0.787 & 1.35 & **0.545** \\ \hline Average & 0.794 & 1.31 & **0.553** \\ \end{tabular}
\end{table} TABLE I: Average NRMSE of the three methods evaluated on testing set data. Lower values correspond to improved performance.
by over \(30\%\) compared to baseline approaches. Furthermore, the proposed framework is formulated in a general way that has the potential to enhance the performance of other imaging systems.
## Acknowledgment
We thank Maj James E. Bevins for useful discussions.
|
2305.11243 | Comparing Machines and Children: Using Developmental Psychology
Experiments to Assess the Strengths and Weaknesses of LaMDA Responses | Developmental psychologists have spent decades devising experiments to test
the intelligence and knowledge of infants and children, tracing the origin of
crucial concepts and capacities. Moreover, experimental techniques in
developmental psychology have been carefully designed to discriminate the
cognitive capacities that underlie particular behaviors. We propose that using
classical experiments from child development is a particularly effective way to
probe the computational abilities of AI models, in general, and LLMs in
particular. First, the methodological techniques of developmental psychology,
such as the use of novel stimuli to control for past experience or control
conditions to determine whether children are using simple associations, can be
equally helpful for assessing the capacities of LLMs. In parallel, testing LLMs
in this way can tell us whether the information that is encoded in text is
sufficient to enable particular responses, or whether those responses depend on
other kinds of information, such as information from exploration of the
physical world. In this work we adapt classical developmental experiments to
evaluate the capabilities of LaMDA, a large language model from Google. We
propose a novel LLM Response Score (LRS) metric which can be used to evaluate
other language models, such as GPT. We find that LaMDA generates appropriate
responses that are similar to those of children in experiments involving social
understanding, perhaps providing evidence that knowledge of these domains is
discovered through language. On the other hand, LaMDA's responses in early
object and action understanding, theory of mind, and especially causal
reasoning tasks are very different from those of young children, perhaps
showing that these domains require more real-world, self-initiated exploration
and cannot simply be learned from patterns in language input. | Eliza Kosoy, Emily Rose Reagan, Leslie Lai, Alison Gopnik, Danielle Krettek Cobb | 2023-05-18T18:15:43Z | http://arxiv.org/abs/2305.11243v2 | Comparing Machines and Children: Using Developmental Psychology Experiments to Assess the Strengths and Weaknesses of LaMDA Responses
###### Abstract
Developmental psychologists have spent decades devising experiments to test the intelligence and knowledge of infants and children, tracing the origin of crucial concepts and capacities. Moreover, experimental techniques in developmental psychology have been carefully designed to discriminate the cognitive capacities that underlie particular behaviors. We propose that using classical experiments from child development is a particularly effective way to probe the computational abilities of AI models, in general, and LLMs in particular. First, the methodological techniques of developmental psychology, such as the use of novel stimuli to control for past experience or control conditions to determine whether children are using simple associations, can be equally helpful for assessing the capacities of LLMs. In parallel, testing LLMs in this way can tell us whether the information that is encoded in text is sufficient to enable particular responses, or whether those responses depend on other kinds of information, such as information from exploration of the physical world. In this work we adapt classical developmental experiments to evaluate the capabilities of LaMDA, a large language model from Google. We propose a novel LLM Response Score (LRS) metric which can be used to evaluate other language models, such as GPT. We find that LaMDA generates appropriate responses that are similar to those of children in experiments involving social understanding, perhaps providing evidence that knowledge of these domains is discovered through language. On the other hand, LaMDA's responses in early object and action understanding, theory of mind, and especially causal reasoning tasks are very different from those of young children, perhaps showing that these domains require more real-world, self-initiated exploration and cannot simply be learned from patterns in language input.
## 1 Introduction:
In 1950 Alan Turing famously said "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain (Turing [1946])." Developmental psychologists have spent decades devising experiments to determine the intelligence and knowledge of infants and children. This research has revealed types of knowledge that are in place well before formal education, and are the foundation for further human intelligence. The developmental literature allows us to map the cognitive trajectory of a human child and offers insight into when and how these key concepts develop. In particular, experimental techniques in developmental psychology have been carefully designed to discriminate the cognitive capacities that underlie particular behaviors. The same superficial behavior may come from very different sources. Behavior may be the result of
deeper conceptual structures or more superficial associations. It may also come from interactions with the external world, or from cultural information passed on through the medium of language.
Using classical experiments from child development may be a particularly effective way to probe the understanding of AI models in general and LLMs, in particular. First, the methodological techniques of developmental psychology such as the use of novel stimuli to control for past experience or control conditions to determine whether children are using simple associations can be very helpful for assessing LLMs. We could not accurately judge a child's cognitive capacities simply by having a conversation with them, though this has often been done with LLMs. So human developmental methods can help us understand LLMs.
But this research program can also help us use LLMs to understand human intelligence. Classic LLMs represent the kind of information that can be extracted simply from statistical patterns in text. We can think of them as a kind of "cultural technology," like writing or print, that summarizes information that has been gathered by humans and allows other humans to access that information. This contrasts with knowledge that comes from the actual interactions with the external world that humans engage in. LLMs are representations of what is available in all text and written language. However, these models are not yet capable of gathering exploratory data on their own, and this may be what makes some facets of human knowledge unique. Thus, we can use developmental tests to see which facets of knowledge are encoded in the language other people have produced, and may be transmitted to children through that medium, and which seem to require active interaction with the physical world.
Previous work that has tested LLMs, specifically GPT-3, has found conflicting evidence of theory of mind (Ullman (2023)) (Kosinski (2023)) in these models. Previous work has also demonstrated that GPT-3 deeply struggles with causal reasoning tasks, though it does well on vignette tasks (Binz and Schulz, 2023). One issue that arises in these studies is that LLMs may simply reference published research papers, for example, finding the false-belief task in many published papers on the internet, and so responding to it appropriately. Again this emphasizes the importance of methodological care in designing the experimental problems, to ensure that systems do not simply replicate or narrowly generalize from particular examples in the training text. This project is similar to the BIB benchmark used by Lake and Dillon but assesses a wider range of abilities beyond the "core knowledge" domains they describe there, particularly abilities for learning and discovery (Dillon (2023); Gandhi (2003)).
In this paper we use the Google LLM called LaMDA (Thoppilan (2022)) to approach some of these questions. First, we identified key experiments that established developmental milestones in human understanding. The selected experiments were categorized into four domains of cognition: early object and action understanding, theory of mind, social understanding, and causal reasoning. We then converted these experiments into text form in order to input them into LaMDA and probed the model's responses in these tasks. Importantly, the guiding question of this work is not to determine LaMDA's intelligence or understanding, a difficult and complicated question. We ask not what it understands, but whether it responds as a child would in these experiments. In particular, we seek to understand LaMDA's capacities as an agent with access only to text-based data and not data that comes from real-world exploration.
We find that LaMDA generates appropriate responses that are similar to children in experiments involving social understanding, perhaps providing evidence that the core of these domains is discovered or accessible through language. However, LaMDA's responses in early object and action understanding, theory of mind, and especially causal reasoning tasks are very different from those of young children, perhaps showing that these domains require more real-world, self-initiated exploration and cannot simply be learned from patterns in language input.
We considered two hypotheses about the general relation between LaMDA's responses and those of children. The first hypothesis was that LaMDA would fall into place somewhere along the human developmental trajectory. By systematically using developmental studies we might determine, as it were, how old LaMDA is, with the assumption that the systems might do better on tasks that are mastered earlier by humans. The other hypothesis was that LaMDA might perform better in developmental domains that can more easily be learned from language alone while performing worse in domains that rely more on exploration and knowledge seeking.
AI models may have their own developmental timeline that is completely different from that of a human. In acknowledging and understanding the divergences between human and AI learning
trajectories and patterns, we may be better equipped to train more optimal models and glean an understanding of what may make human cognition uniquely human.
## 2 Experimental Design:
For our work, we use the Google LLM LaMDA as the underlying language model. LaMDA 137B was used as it is one of two versions that can be used for public research publication. 137B is not the latest version of LaMDA. Previous work has shown that LaMDA performs similarly to GPT-3 and other similarly-sized models in a variety of natural language understanding and generation tasks (Srivastava (2022)). Notably, LaMDA responses rely on text prediction with some fine-tuning and, unlike other LLMs such as GPT-4, do not use Reinforcement Learning from Human Feedback. RLHF is undoubtedly useful for applications, but it poses serious problems for attempts to understand the capacities of the models. Without detailed information on just how human coders responded to model outputs, it is difficult to make assessments. In particular, it seems plausible that mistaken responses that would indicate failure on some of the developmental tasks are simply pruned away in the course of RLHF. (Note that children famously do not respond to reinforcement signals in this way.)
All prompts were written based on the seminal experiments in Table 1 and pasted into LaMDA to obtain outputs. Each task was run ten times, scored, and assigned an average.
### Procedure and Study Design:
To determine which experiments to include, we first considered which problems are widely taken to be indices of children's developing capacity to understand the world. Upon a review of the literature, we selected four cognitive domains that capture the breadth of this capacity: Early object and Action understanding, Theory of Mind, Social Understanding, and Causal Reasoning.
In each domain, we identified a variety of representative tasks that are classic, well-replicated, and heavily cited in meta-analyses and review papers (Table 1). The methods of eleven selected experiments were adapted into text-based prompts formulated as a series of conversational turns with LaMDA acting as the participant. Developmental psychologists have worked for decades to ensure these experiments accurately capture the underlying cognitive capacities. Moreover, it is important to note that developmental psychologists have control conditions in all experiments. Children's responses can be misleading, and control conditions are often essential to understand the intricacies of cognitive processes. For example, responses to test questions in false belief and theory of mind tasks only lend appropriate insight if the children also understand the actual reality the beliefs refer to.
### Rating system: (Large Language Model Response Score "LRS"):
We adapted all the selected developmental experiments and studies on the list (Table 1) so that they could be employed as input into LaMDA as text-based prompts. We then analyzed the output, determining whether the response was similar to that of a child and whether the response was generally appropriate. We created a scoring metric called the Large Language Model Response Score or "LRS". The range on our scale went from 0-10. 0 was the lowest score LaMDA could receive, meaning LaMDA did not respond appropriately or as a child would. A score of 5 would represent partial similarity to a child, displaying somewhat desired reasoning and partially correct answers to the prompts. 10 was the highest score and meant that LaMDA was able to correctly answer any questions and produced the same responses as children for the entire prompt, including passing any control prompts. If children were given prompts to verbally respond to in the original study, the prompts given to LaMDA were copied and pasted exactly from the published study. If the original studies involved a looking time or visual preference method, we converted these into the form of a text question for LaMDA, asking for a preference. If the original study relied on a visual paradigm (i.e. a puppet show) we described the scene in language. For all studies we included the same checks and controls as the original studies, serving as a crucial part of our scoring metric. If LaMDA was not able to pass the control question, a score of 0 was given immediately, as in the studies with human children. A further question concerns the consistency of responses as LLMs often produce different responses on different trials. Accordingly, each prompt was executed 10 times, and an average response score was calculated across all 10 trials.
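For concreteness, the per-task averaging with the control-question gate can be tallied as in the short sketch below (the trial-record format and field names are our own illustration, not part of the study protocol):

```python
def lrs_score(trials):
    """Average LRS over repeated trials of one task.

    Each trial is a dict like {"passed_control": bool, "score": float in [0, 10]};
    failing the control question forces that trial's score to 0, per the rating rules.
    """
    scores = [t["score"] if t["passed_control"] else 0.0 for t in trials]
    return sum(scores) / len(scores)

# Example: ten trials, one of which failed its control prompt.
trials = [{"passed_control": True, "score": 8.0}] * 9 \
       + [{"passed_control": False, "score": 6.0}]
print(lrs_score(trials))  # 7.2
```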
### Experiment Permutations:
One concern was that these seminal experiments exist in LaMDA's training data as research publications. In our pilot work, we found that LaMDA would indeed cite previous papers in its responses. For example, when Alison Gopnik's seminal Blicket Detector work was released, a Blicket was intentionally designed to be a novel term that children would not have heard before, ensuring that children could not use past linguistic knowledge to complete the tasks. Now, it has become a common term in the developmental literature, and LaMDA has ample access to that literature. In order to prevent this and ensure a true test of LaMDA's understanding, we implemented small systematic permutations in each experiment that did not alter the essence of the tasks themselves. Names were changed (i.e. the Sally-Anne task became the Rose-Eliza task), along with objects and colors. Previous work has indicated that even small permutations may lead to an LLM's complete failure on a task that it excelled at in its original form (Ullman (2023)).
## 3 Results:
Through analysis of the LaMDA outputs, we were able to assign an average response score (LRS) to its performance on each experiment. Overall, we found LaMDA responses were most like human children in the Social Understanding domain, earning an average LRS of 8.47 across these tasks. LaMDA performed at chance in the domains of Early Object and Action Understanding and Theory of Mind, with an average LRS of 5.06 and 5.12 respectively. LaMDA achieved its worst performance in the domain of Causal Reasoning with an average LRS of 1.79. We will go through these results per domain (see Figure 1 and Table 2) and offer explanations for why LaMDA's performance varied across these specific domains.
### Domain: Social Understanding
In this work we find that LaMDA performs relatively well in experiments involving social understanding, receiving an average LRS of 8.47. The only experiment across all domains in which LaMDA performed at ceiling was Experiment 8: the Helper and Hinderer task (Wynn and Bloom. (2013)), which investigated the prosociality of agents, receiving an average LRS of 10. LaMDA is seemingly able to track altruistic behavior in a human-like way and universally reports a preference for helpful actors over hinderers.
LaMDA also performed above chance on both variations of Experiment 9 the prosociality and Intention task (Felix and Tomasello. (2006)), which investigated the prosociality of agents when a
Figure 1: LaMDA LRS Score per Domain
\begin{table}
\begin{tabular}{|p{42.7pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Exp \#** & **Paper, Title, Author, Year** & **Domain** & **Summary** \\ \hline
1 & S. [1985]. & Object /Action & Probes whether children understand that objects continue to exist while occluded. LaMDA was asked whether a car still existed when it drove behind a curtain. \\ \hline
2 & Wynn [1992] & Object /Action & Investigates infantsβ capacity to complete basic addition or subtraction in the real world. After being given evidence of multiple birds sequentially flying behind a curtain, LaMDA was asked to identify the final number of birds. \\ \hline
3 & Gergely [1995] & Object /Action & Explores when children consider an agentβs intentions when interpreting goal-directed behavior. We provided LaMDA evidence of an actor behaving irrationally to reach a goal and probed for whether it identified the behavior as irrational. \\ \hline
4 & Woodward [1999] & Object /Action & Studies when children attribute behavior to a certain goal. LaMDA was given evidence of an actor consistently reaching for one of two objects. When the positions of the items were switched, LaMDA was asked which item the actor would reach for, the goal object or the decoy. \\ \hline
5 & Wimmer [1983]; Baron-Cohen [1985] & Theory of Mind & Explores when children can attribute false beliefs to actors. In a vignette, LaMDA was introduced to two actors. βSallyβ moved βAnneβsβ toy. LaMDA was privy to the new location while Anne was not. LaMDA was asked both where Anne believes the toy is and its actual location. \\ \hline
6 & Ullman [2023] & Theory of Mind & This task explores LLMsβ responses to slight variations of the above task. \\ \hline
7 & Perner Josef and Wimmer. [1987] & Theory of Mind & Studies childrenβs ability to attribute false beliefs to actors when the participant is given more information. LaMDA was given evidence that a candy box actually contained pencils rather than candy, then was asked what an ignorant actor would believe was in the candy box. \\ \hline
8 & Wynn and Bloom. [2013] & Social & Explores childrenβs ability to incorporate actor intentionally into social evaluations. LaMDA was given social evidence in which Actor A helps Actor B towards a goal, and Actor C impedes Actor A or C. \\ \hline
9 & Felix and Tomasello. [2006] & Social & Asks whether children consider an agentβs goal when determining whether to help the agent. LaMDA was given evidence of an actor coming to an outcome intentionally vs unintentionally. It was then tested on whether it offered help in both conditions or only when the outcome was unintentional. \\ \hline
10 & Gopnik and Glymour. [2007] & Causal & Investigates childrenβs use of conditional interventions to determine causal structure. LaMDA was introduced to a series of gear mechanisms and asked to intervene in order to determine the causal structure. The two conditions included a simplistic relationship (A turns B) and a complex relationship (A and B turn each other). \\ \hline
11 & Gopnik and Sobel [2000] & Causal & Studied childrenβs ability to categorize objects using a novel causal mechanism and labels. LaMDA was given information about a novel machine and was tasked with determining which objects held causal power (i.e. made the machine play music). One condition categorized objects with causal power using novel labels, while the other asked LaMDA to apply the novel labels to the objects with causal power. \\ \hline
12 & Lucas [2014] & Causal & Investigated childrenβs abilities to make inferences using evidence of causal relationships. LaMDA was asked to extrapolate about complex causal systems using relational evidence. \\ \hline \end{tabular}
\end{table}
Table 1: Explanation of experiments chosen for this study
separate actor either drops something by accident or on purpose. In the two conditions, LaMDA received an LRS of 8.8 and a 6.6. In addition to preferring prosocial actors, LaMDA exhibited prosocial tendencies in that it often offered to help the actors in response to our prompts. However, LaMDA would sometimes offer to help even when the agent intentionally dropped something, and it had trouble identifying when an agent did not necessarily need help, while children were able to make this distinction and modify their altruistic behavior accordingly. LaMDA's overall success in this domain may imply that the core of social understanding is discovered through language and is less reliant on outside exploration than other domains of cognition. See Table 3.
### Domain: Early Object and Action Understanding
LaMDA performed at chance across the Early Object and Action Understanding domain tasks, receiving an average LRS of 5.08. LaMDA scored highest in Experiment 1: the Object Permanence task (S. [1985]), which investigated the ability to track an object as it moves behind a curtain, with an LRS of 8.0. It also scored well on Experiment 3: the Intentional Action task (Gergely [1995]), which investigated predictions about intentional, goal-directed action and reach-and-object preference, receiving an LRS of 7.3.
Object permanence is a core facet of understanding and may be considered the key ingredient upon which tasks 2-4 are built. This perhaps explains why LaMDA's responses are adequate in Task 1 but taper off as the experiments increase in complexity. LaMDA also produces the desired responses in tasks that predict that agents will prefer paths that minimize energy expenditure. This reasoning utilizes social understanding, one of LaMDA's strengths.
With the increase in the number of actors to track and the general complexity of tasks, LaMDA's performance worsens, as in Experiment 2: Tracking Addition and Subtraction in Objects (Wynn [1992]) which investigates tracking objects appearing and disappearing behind a curtain and receives an LRS of 2.0. LaMDA also struggled with Experiment 4: Understanding Goals (Woodward [1999]) which compared predictions about goal-directed behavior in human and non-human actors, receiving an LRS of 3.0. This performance is strikingly poor compared to infants as young as five months old, who are able to complete these tasks successfully. This suggests that human-like object and action understanding may rely on external visual information and that these capacities cannot be gleaned through language input alone. See Table 4.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Exp. & Author & Title & Age & Condition & Score \\ \hline
1 & Baillargeon (1985) & Object Permanence & 5 MOS & Original & 8.0 \\
2 & Wynn (1992) & Object/action & 5 MOS & Original & 2.0 \\
3 & Gergely (1995) & Rational & 12 MOS & Rational & 7.3 \\
4 & Woodward (1998) & Standard & 8-10 MOS & Standard & 3.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: LRS Scores for the Early Object and Action Understanding Domain:
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**DEVELOPMENTAL DOMAIN** & & **AVG LRS SCORE** \\ \hline EARLY OBJECT AND ACTION UNDERSTANDING & 5.08 \\ \hline THEORY OF MIND & & 5.12 \\ \hline SOCIAL UNDERSTANDING & 8.47 \\ \hline CAUSAL REASONING & 1.79 \\ \hline \hline \end{tabular}
\end{table}
Table 2: LRS Average Scores per Domain:
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Exp. & Author & Title & Age & Condition & Score \\ \hline
8 & Hamlin (2013) & Helper/Hinderer & 6-10 MOS & Successful Helper/Hinderer & 10.0 \\
9 & Warneken, (2006) & Altruism and Intention & 18 MOS & 1.Constraint: Out of Reach & 8.8 \\
9 & Warneken (2006) & Altruism and Intention & 4-5 YR & 2.Constraint: Physical Obstacle & 6.6 \\ \hline \hline \end{tabular}
\end{table}
Table 3: LRS Scores for the Social Understanding Domain:
### Domain: Theory of Mind
There is a question about whether LLMs are capable of Theory of Mind, with some researchers (Ullman [2023]) finding failures and others (Kosinski [2023]) interpreting success in a GPT-3 model. LaMDA seems to perform at chance on our tasks across this domain receiving an average LRS of 5.1.
LaMDA scores highest on Experiment 7: False Belief (Perner Josef and Wimmer. [1987]) which investigated whether participants are able to discard knowledge of reality and attribute false beliefs to others, receiving an LRS of 7.3 on this task.
LaMDA performed at chance on Experiment 5: the Sally-Anne task (Baron-Cohen [1985]), which further probes false belief in a multi-agent interaction, receiving an LRS of 4.7. These results suggest LaMDA is able to successfully use evidence to update its own beliefs but struggles to discard internal information and interpret the beliefs of others who have limited evidence.
Inspired by the work in (Ullman [2023]) and (Kosinski [2023]), which probed the LLMs understanding of Theory of Mind using GPT-3, we decided to conduct these same tasks using LaMDA to see how it compares. Experiment 6: Variations on Theory of Mind (Ullman [2023]) allowed us to truly dissect LaMDA's theory of mind abilities. Some of these variations forced LaMDA to use perception and common sense in solving the tasks, both of which proved difficult. LaMDA received an average LRS of 4.6 on these tasks. See Table 5.
### Domain: Causal Reasoning
LaMDA's poorest performance lies in the Causal Reasoning domain, with its domain LRS averaging a 1.79.
In Experiment 10: Causal Gear Task Gopnik and Glymour. [2007], in which participants are given evidence and asked to identify the causal relationship between a series of gears, LaMDA's performance varied greatly by condition. In the simplest condition (a machine works when gear A pushes gear B), its LRS was 6.4. When the given causal relationship becomes more complex and unusual (a machine works conjunctively when gears A and B push against each other), LaMDA's accuracy drops to an LRS of 2.6. This suggests that LaMDA is using text information that specifies the most common types of causal relationships. In contrast, children in this study simply relied on the pattern of data they observed and were equally willing to infer unusual or common causal relationships.
LaMDA also had significant difficulties with causal reasoning tasks using a Blicket Detector mechanism. In Experiment 11: the Blicket Induction Task Gopnik and Sobel [2000], which examines participants' ability to make causal inferences about a novel mechanism using category-based and linguistic labels, LaMDA scored an LRS of 0.8 and 1.3 in each condition.
Across four conditions of Experiment 12: the Disjunctive/Conjunctive Blicket Task Lucas [2014], which assesses LaMDA's proficiency in determining causal relationships using a variety of given evidence, its LRS ranged from a 0 at the lowest and 1.3 at the highest. Across tasks, LaMDA is largely unable to track the causal evidence given and make meaningful inferences about causal structures. In its attempt to search its learning data for appropriate responses, LaMDA often responded with outlandish internet links. When asked to identify which objects were Blickets (changed to "Zerpdas" following LaMDA citing existing Blicket papers in its responses), LaMDA confidently, yet incorrectly, identified the Zerpda as a bright blue wig. These failures may indicate that these types of causal learning require more real-world and self-initiated exploration and cannot simply be learned from language. Discovering novel causal structures requires seeking real-life evidence and constantly
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Exp. & Author & Title & Age & Condition & Score \\ \hline
5 & Wimmer (1983) & Sally Anne Task & 4-5 YR & Original & 4.7 \\
6 & Ullman (2023) & Theory of Mind & 4-5 YR & 1.Transparent & 5.7 \\
6 & Ullman (2023) & Theory of Mind & 4-5 YR & 2.Relationship Change & 5.5 \\
6 & Ullman (2023) & Theory of Mind & 4-5 YR & 3.Trusted Communication & 3.5 \\
6 & Ullman (2023) & Theory of Mind & 4-5 YR & 4.Additional Actor & 4.0 \\
7 & Perner (1987) & False Belief & 3 YR & Smarties/Pencils & 7.3 \\ \hline \hline \end{tabular}
\end{table}
Table 5: LRS Scores for the Theory of Mind Domain:
updating hypotheses about causal structure based on that evidence, rather than simply assuming the causal structures that are most frequently reported in language. See Table 6.
## 4 Discussion:
In this work we proposed a developmental lens and framework as a systematic way to evaluate LLMs. We proposed a battery of classical developmental tasks and converted them to text format for use in LaMDA. We created a rating system called Large Language Models Response Score (LRS) and were able to assign values to quantify LaMDA's ability to respond like a child across crucial developmental domains. We found that LaMDA responded like a child and provided appropriate answers in experiments involving social understanding, perhaps providing evidence that the core of these domains is discovered through language. However, LaMDA's responses were at chance in the Early Object and Action Understanding and Theory of Mind domains. LaMDA's responses diverged especially strongly from those of children in causal learning tasks, perhaps showing that this domain requires more real-world and self-initiated exploration and cannot simply be learned from language input.
We find that LaMDA does not seem to follow a human developmental trajectory. It performs more poorly on some tasks that infants solve at a very young age than on others that are solved much later. Instead, LaMDA's performance reflects differences in how much tasks rely on prior knowledge that may be encoded and transmitted through language, as in the preference for helpers, and how much they rely on the ability to draw inferences from novel patterns of evidence, as in causal learning tasks. However, whatever their origins are, these conceptual milestones will be crucial for genuinely functioning AI systems. Systems will have to understand and use knowledge of objects, actions and goals, minds and causality, in much the way that human children do.
We hope that our proposed core developmental domains and the associated battery of developmental tasks can be used by fellow researchers to study other AI models, assess crucial understanding of basic common sense concepts, and gauge their ability to reason using seminal developmental experiments. We also propose that our novel LLM Response Score metric can be used to evaluate other language models, such as GPT, and can be adapted to apply to other key experiments from the psychological literature.
## 5 Supplementary Material
All the prompts used in the experiment and output from LaMDA can be found in this document: [https://docs.google.com/document/d/1c-oRRBF-NXRjfrsqNknd3OwMInst8RPVhBwRgVtvXxo/edit?usp=sharing](https://docs.google.com/document/d/1c-oRRBF-NXRjfrsqNknd3OwMInst8RPVhBwRgVtvXxo/edit?usp=sharing)
|
2303.13902 | Blocking transition of interface traps in MoS$_2$-on-SiO$_2$ FETs | Electrical conductivity with gate-sweep in a few layer MoS$_2$-on-SiO$_2$
field-effect-transistor shows an abrupt reduction in hysteresis when cooled.
The hysteresis and time dependent conductivity of the MoS$_2$ channel are
modeled using the dynamics of interface traps' occupancy. The reduction in
hysteresis is found to be steepest at a blocking temperature near 225 K. This
is attributed to the interplay between thermal and barrier energies and fitted
using a distribution of the latter. Further, the charge stored in the blocked
traps is programmed at low temperatures by cooling under suitable gate voltage.
Thus the threshold gate-voltage in nearly non-hysteretic devices at 80 K
temperature is reversibly controlled over a wide range. | Santu Prasad Jana, Suraina Gupta, Anjan K. Gupta | 2023-03-24T10:29:42Z | http://arxiv.org/abs/2303.13902v1 | # Blocking transition of interface traps in MoS\({}_{2}\)-on-SiO\({}_{2}\) FETs
###### Abstract
Electrical conductivity with gate-sweep in a few layer MoS\({}_{2}\)-on-SiO\({}_{2}\) field-effect-transistor shows an abrupt reduction in hysteresis when cooled. The hysteresis and time dependent conductivity of the MoS\({}_{2}\) channel are modeled using the dynamics of interface traps' occupancy. The reduction in hysteresis is found to be steepest at a blocking temperature near 225 K. This is attributed to the interplay between thermal and barrier energies and fitted using a distribution of the latter. Further, the charge stored in the blocked traps is programmed at low temperatures by cooling under suitable gate voltage. Thus the threshold gate-voltage in nearly non-hysteretic devices at 80 K temperature is reversibly controlled over a wide range.
## I I: Introduction
Single and few layer transition metal chalcogenides [1; 2] offer much potential for device applications including transistors [3; 4] with high frequency capability [5], logic gates [6; 7] for integrated circuits [8] and optoelectronic [9; 10; 11; 12] devices. The MoS\({}_{2}\) single layer devices with direct band gap [1; 3] in the optical range have been of particular interest. Field effect transistors (FETs) based on MoS\({}_{2}\) show very promising behavior, with scalability, non-ideal behavior and degradation with time as the main hurdles. A pertinent culprit, with interesting physics, that contributes to the non-ideal behavior is interface traps. Such traps lead to reduced mobility and response time as well as increased noise and hysteresis in transfer characteristics. Thus a more comprehensive understanding of the traps is necessary.
The phenomenon of blocking is common in magnetic systems. In ferromagnetic nanoparticles exhibiting superparamagnetism [13], the blocking arises from the interplay between an anisotropy energy barrier, the thermal energy and the Zeeman energy. As a result, the thermally activated switching rate (\(\tau_{\rm s}(T)^{-1}\)) between two magnetic states is a sharply rising function of temperature. Thus, their response to an applied magnetic field, measured over a certain time (\(\tau_{\rm m}\)), shows hysteresis at low temperatures and non-hysteretic paramagnetic behavior at high temperatures. This crossover transition occurs at a blocking temperature \(T_{\rm B}\) at which \(\tau_{\rm s}(T_{B})\approx\tau_{\rm m}\). In contrast, the blocking of traps in MoS\({}_{2}\) FETs leads to hysteresis reduction with cooling. A similar behavior is also observed in graphene FETs [14], though the much sharper transfer characteristics in MoS\({}_{2}\) devices, with a threshold gate-voltage, permit a more quantitative analysis.
The threshold gate-voltage at which an FET shows a steep rise in conductance is controlled by both the traps' charge and the capacitive displacement charge across gate dielectric. A positive hysteresis in the transfer characteristics of MoS\({}_{2}\) FETs has been studied as a function of various parameters [15; 16; 17; 18] and attributed mainly to charge-traps. This arises from the traps that have a timescale comparable to the gate-voltage sweep-time. This also amounts to a relaxation in channel's conductance at varying time-scales. The fast traps do not lead to hysteresis but they shield the gate electric field restricting the density of mobile carriers in the channel. This broadens the threshold region and forbids the access to the ambipolar behavior in MoS\({}_{2}\) FETs even for gate voltages far exceeding the voltage equivalent to the energy gap. In addition, the electrostatic potential of the trap ions leads to reduced mobility of the channel carriers while the variation in the charge-state of traps gives rise to carrier density and mobility fluctuations.
In this paper, the transfer characteristics and its' time dependence in few layer MoS\({}_{2}\)-on-SiO\({}_{2}\) FETs as a function of temperature is presented together with a model on the effect of traps on gate-dependent channel conductance. A trap's charge state determines the channel's chemical potential which in turn dictates the traps' occupancy. This makes it a complex non-linear system with coupling between traps' occupancy mediated by the channel. Thus, even the traps at a single energy and with the same barrier lead to non-exponential relaxation. The hysteresis and its temperature dependence is modeled using some simplifications and analogy with superparamagnets. Finally, the traps' blocking is used to reversibly control the threshold voltage at 80 K temperature.
## II II: Experimental details
Few layer MoS\({}_{2}\) was transferred onto SiO\({}_{2}\) by a dry method [19] from a natural MoS\({}_{2}\) single crystal (from SPI, USA) using a commercial PDMS-film-based viscoelastic stamp. The latter is first fixed on a glass slide and an MoS\({}_{2}\) flake is transferred onto it using a scotch tape. The mechanism of this transfer process uses the viscoelastic response of the PDMS film, which behaves as an elastic solid on short time scales. The PDMS film is therefore pulled from the scotch tape at high speed, leading to strong adhesion of MoS\({}_{2}\) to the PDMS, as the viscoelastic solid makes a strong conformal contact with the flake [20]. The PDMS with the MoS\({}_{2}\) flake is aligned with a SiO\({}_{2}\)/Si substrate, fixed by carbon tape, on an XYZ micro-manipulator and under an optical microscope. The stamp is then removed at a sufficiently low speed so that the adhesion of the flake to the stamp is weak and the flake transfers easily to the SiO\({}_{2}\) surface. Raman spectra, see Fig. 1(c), were used to confirm the few-layer nature of MoS\({}_{2}\).
strate fixed by carbon tape on a XYZ micro-manipulator and under an optical microscope. The stamp is removed with sufficiently low speed so that the adhesion of the flake to stamp is week and the flake gets transferred to the SiO\({}_{2}\) surface easily. Raman spectra, see Fig. 1(c), were used to confirm few-layer nature of MoS\({}_{2}\).
The number of MoS\({}_{2}\) layers is determined by optical microscope contrast and verified by Raman Spectroscopy with 532 nm wavelength laser excitation. As seen in Fig.1(c) the separation between the E\({}_{\rm 2g}^{1}\) and A\({}_{\rm 1g}\) Raman peaks is 18.47, 20.20 and 24.1 cm\({}^{-1}\), which correspond to the single-layer, few-layer and bulk MoS\({}_{2}\), respectively [21; 22].
We make 50 nm thick gold film source-drain contacts using mechanical masking with a 15 \(\mu\)m diameter tungsten wire. Use of Au without a Cr/Ti adhesion layer promotes Ohmic contacts due to a very small difference in the contact potentials of Au and MoS\({}_{2}\)[23; 24]. Mechanical masking avoids the use of organic lithography resist, which can leave residue on MoS\({}_{2}\). The wire is carefully aligned under an optical microscope with the few-layer MoS\({}_{2}\) on the SiO\({}_{2}\) substrate. Fig. 1(a) shows an optical micrograph of a MoS\({}_{2}\) device with source-drain contacts. Two-probe conductance down to 80 K temperature was measured, with the configuration shown in Fig. 1(b), in a homemade vacuum cryostat with a heater for temperature control. A 10 k\(\Omega\) series resistance was connected with the gate voltage supply, which was controlled by a data acquisition card using a LabView program. The Ohmic contacts were confirmed by two-probe current-voltage characteristics as shown in Fig. 1(d). The cryostat was pumped by a turbo-molecular pump to less than \(10^{-4}\) mbar pressure. When the cryostat is dipped into liquid nitrogen for cooling, the vacuum is expected to be much better than this. The device was annealed at 400 K in vacuum to minimize adsorbates on the MoS\({}_{2}\) surface.
## III III: Modeling of channel transport with time dependent interface traps
In this section the temperature and time dependent channel transport in presence of interface traps is modeled. We first discuss the conduction in intrinsic 2D channel in presence of back-gate voltage. This is followed by details of the traps, their dynamics and their collective influence on the channel to model its time dependent transport and hysteresis. Finally we make some simplifications to model the temperature dependence of channel hysteresis and blocking transition of the traps.
### III-A: Intrinsic 2D semiconductor channel
Consider an intrinsic 2D semiconductor with dispersion \(E(k_{\rm x},k_{\rm y})=\hbar^{2}(k_{\rm x}^{2}+k_{\rm y}^{2})/2m^{*}\) on a gate-oxide with \(m^{*}\) as the effective mass. This leads to an energy independent density of states (DOS) \(g(E)=g_{\rm 2D}=g_{\rm s}g_{\rm v}m^{*}/2\pi\hbar^{2}\) with \(g_{\rm s}\) as spin- and \(g_{\rm v}\) as valley-degeneracy. The electron and hole densities, \(n\) and \(p\), respectively, will be given by \(n=\int_{E_{\rm c}}^{\infty}g(E)f(T,E-\mu_{\rm ch})dE\) and \(p=\int_{-\infty}^{E_{\rm v}}g(E)[1-f(T,E-\mu_{\rm ch})]dE\). Here \(\mu_{\rm ch}\) is the chemical potential of the channel and \(f(T,E)=[1+\exp{(E/k_{\rm B}T)}]^{-1}\) is the Fermi function. Eventually, the constant DOS in 2D leads to the expressions for \(n\) and \(p\) as
\[n(T,\mu_{\rm ch})= g_{\rm 2D}k_{\rm B}T\ln[1+\exp{\{\beta(\mu_{\rm ch}-E_{\rm c})\}}]\] \[p(T,\mu_{\rm ch})= g_{\rm 2D}k_{\rm B}T\ln[1+\exp{\{\beta(E_{\rm v}-\mu_{\rm ch})\}}]. \tag{1}\]
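For readers who want to evaluate these expressions numerically, the short sketch below (not part of the original work; it simply codes Eq. 1 with the single-layer \(g_{\rm 2D}\) and band-edge values quoted later in the text) computes the electron and hole sheet densities as a function of temperature and chemical potential.

```python
# Minimal sketch of Eq. 1: 2D electron and hole sheet densities.
# Assumptions: g_2D = 2.64e14 eV^-1 cm^-2 and E_c = -E_v = 1 eV, the values
# quoted later in the text for single-layer MoS2.
import numpy as np

KB = 8.617e-5          # Boltzmann constant in eV/K
G2D = 2.64e14          # 2D density of states in eV^-1 cm^-2
EC, EV = 1.0, -1.0     # band edges in eV

def carrier_densities(T, mu_ch):
    """Electron and hole sheet densities (cm^-2) from Eq. 1."""
    beta = 1.0 / (KB * T)
    n = G2D * KB * T * np.log1p(np.exp(beta * (mu_ch - EC)))
    p = G2D * KB * T * np.log1p(np.exp(beta * (EV - mu_ch)))
    return n, p

print(carrier_densities(300.0, 0.0))   # intrinsic: n = p, tiny at mid-gap
```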
The net charge density in the channel \(\sigma_{\rm ch}=e(p-n)\), in general, arises from dopants, gate electric field and interface trap charges. For an intrinsic 2D MoS\({}_{2}\) channel with zero gate electric field and no traps \(\sigma_{\rm ch}=0\) and thus \(n=p\) and so the chemical potential \(\mu_{\rm ch}^{0}=(E_{\rm c}+E_{\rm v})/2\). This also assumes identical DOS for the valence and conduction bands. When a gate voltage \(V_{g}\) is applied the channel potential changes to \(V_{\rm ch}\) and \(\mu_{\rm ch}=\mu_{\rm ch}^{0}+eV_{\rm ch}\), see Fig. 2. This leads to a non-zero \(\sigma_{\rm ch}\) given by
\[\sigma_{\rm ch}=-\gamma C_{\rm ox}\left(\frac{k_{\rm B}T}{e}\right)\ln\left[ \frac{f(T,E_{\rm v}-\mu_{\rm ch}^{0}-eV_{\rm ch})}{f(T,\mu_{\rm ch}^{0}+eV_{ \rm ch}-E_{\rm c})}\right]. \tag{2}\]
Here, the dimensionless \(\gamma=e^{2}g_{\rm 2D}/C_{\rm ox}\) is the ratio of channel's quantum capacitance in the degenerate limit and the per-unit-area gate-oxide capacitance. The latter is given by \(C_{\rm ox}=\kappa\epsilon_{0}/d\) with \(d=300\) nm as SiO\({}_{2}\) thickness, \(\kappa=4\) as its dielectric constant and \(\epsilon_{0}\) as the permittivity of free space. The non-linear relation between
Figure 1: (a) Optical image of few-layer MoS\({}_{2}\) with gold contacts. (b) The electrical schematic drawing of MoS\({}_{2}\) FET. (c) Raman spectra measured on exfoliated single layer, few-layer and bulk MoS\({}_{2}\). (d) \(I_{\rm ds}\) Vs \(V_{\rm ds}\) for a few-layer device at different gate voltage values.
\(\sigma_{\rm ch}\) and \(V_{\rm ch}\) in Eq. 2 amounts to a non-linear quantum capacitance [25] of the channel. Note that a positive \(V_{\rm ch}\) leads to a negative \(\sigma_{\rm ch}\) and thus an increase in electron density. A positive \(V_{\rm g}\) leads to a positive charge density at the gate electrode which will be equal and opposite to the combined charge densities of the channel carriers, dopant ions and trap ions.
As shown in Fig. 2(d), the overall applied \(V_{\rm g}\), between drain/source, kept at ground potential, and the gate electrode, is shared between the channel potential \(V_{\rm ch}\) and the voltage drop across the gate dielectric. For positive \(V_{\rm g}\) and with \(\mu_{\rm ch}\) as the fixed zero energy reference, the electron (negative charge) energy-bands of the channel shift downward by \(eV_{\rm ch}\) leading to an increase in electron density in the channel. Thus \(V_{\rm g}-V_{\rm ch}=-\sigma_{\rm ch}/C_{\rm ox}\), in the absence of dopants and traps. This and Eq. 2 lead to
\[V_{\rm g}=V_{\rm ch}+\gamma\left(\frac{k_{\rm B}T}{e}\right)\ln\left[\frac{f(T,E_{\rm v}-\mu_{\rm ch}^{0}-eV_{\rm ch})}{f(T,\mu_{\rm ch}^{0}+eV_{\rm ch}-E_{ \rm c})}\right]. \tag{3}\]
Assuming same mobility \(\mu\) and DOS for e and h, the channel's conductivity \(G=(n+p)e^{2}\mu\) is given by,
\[G=-e^{2}\mu g_{\rm 2D}k_{\rm B}T\times\] \[\ln\left[f(T,E_{\rm v}-\mu_{\rm ch}^{0}-eV_{\rm ch})f(T,\mu_{\rm ch }^{0}+eV_{\rm ch}-E_{\rm c})\right]. \tag{4}\]
For single-layer MoS\({}_{2}\) we use \(g_{\rm s}=2\), \(g_{\rm v}=1\) and \(m^{*}=0.57m_{\rm e}\) to get \(g_{\rm 2D}=2.64\times 10^{14}\) eV\({}^{-1}\)cm\({}^{-2}\) and with \(d=300\) nm we get \(\gamma=3570\). Further, for intrinsic MoS\({}_{2}\) channel we take \(\mu_{\rm ch}^{0}=0\), \(E_{\rm c}=1\) eV and \(E_{\rm v}=-1\) eV. The \(\sigma_{\rm ch}\), \(V_{\rm ch}\) and \(\ln(G)\), thus obtained, are plotted as a function of \(V_{\rm g}\) in Fig. 3 using Eqs. 2, 3 and 4, respectively, at \(T=300\) K. We see that the channel conductivity and carrier density stay close to zero for \(|eV_{\rm g}|<E_{\rm g}/2\) and rise abruptly beyond this range. Thus we expect the threshold voltages \(eV_{\rm th}\sim\pm E_{\rm g}/2\). This is far from what is seen in actual experiments. We next discuss traps' energies, relative to channel chemical potential, and the barrier for charge exchange with the channel.
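A minimal numerical sketch of this trap-free case is given below. It assumes the same parameter values as above (\(\gamma=3570\), \(\mu_{\rm ch}^{0}=0\), \(E_{\rm c}=1\) eV, \(E_{\rm v}=-1\) eV, with \(e\) set to 1 so that energies are in eV and potentials in volts) and inverts Eq. 3 numerically to obtain \(V_{\rm ch}\) and \(\ln(G)\) at a few gate voltages; it is meant as an illustration of the calculation behind Fig. 3, not as the original code.

```python
# Sketch: solve Eq. 3 for V_ch at each V_g and evaluate ln(G) (Eq. 4, up to a
# constant prefactor e^2*mu*g_2D*k_B*T) for the trap-free intrinsic channel.
import numpy as np
from scipy.optimize import brentq

KB = 8.617e-5                     # eV/K
T, GAMMA = 300.0, 3570.0          # temperature and gamma = e^2 g_2D / C_ox
MU0, EC, EV = 0.0, 1.0, -1.0      # intrinsic channel, energies in eV

def f(E):                         # Fermi function at temperature T
    return 1.0 / (1.0 + np.exp(E / (KB * T)))

def vg_of_vch(vch):               # right-hand side of Eq. 3 (e = 1)
    return vch + GAMMA * KB * T * np.log(f(EV - MU0 - vch) / f(MU0 + vch - EC))

def vch_of_vg(vg):                # invert Eq. 3 numerically
    return brentq(lambda v: vg_of_vch(v) - vg, -2.0, 2.0)

for vg in (-80.0, 0.0, 80.0):
    vch = vch_of_vg(vg)
    lnG = np.log(-np.log(f(EV - MU0 - vch) * f(MU0 + vch - EC)))  # up to a constant
    print(f"Vg = {vg:6.1f} V  ->  Vch = {vch:+.3f} V, ln(G) ~ {lnG:+.2f}")
```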
### III-B: Charge traps near 2D channel
Traps have some similarities to dopants. Near room temperature a donor dopant exists in a semiconductor as a positively charged ion and a band-electron binds to it with an energy \(E_{\rm d}\), just below \(E_{\rm c}\). A negatively charged acceptor dopant, on the other hand, has a bound-hole state with an energy \(E_{\rm a}\), just above \(E_{\rm v}\). An ionized donor dopant leads to a mobile electron in the conduction band while keeping the overall system charge neutral. As dictated by the Fermi distribution, a bound state at \(E_{\rm d}\) is unfavorable as compared to an electron at the chemical potential \(\mu_{\rm ch}\) of the band-system as \(\mu_{\rm ch}\) is close to the middle of the gap for low doping. Similarly, an acceptor dopant leads to a hole in the valence band keeping the overall system neutral and as compared to a hole at the chemical potential of the band system a hole-bound state at \(E_{\rm a}\) is unfavorable.
The interface traps differ from dopants in three major ways. First, they are weakly coupled to the channel with
Figure 2: (a) and (b) show the channel bands while (c) and (d) show the electrostatic potential profile between the channel and the gate. (a), (c) are for \(V_{\rm g}=0\) and (b), (d) for \(V_{\rm g}>0\). The channel potential (\(V_{\rm ch}\)) rises with \(V_{\rm g}\) and leads to a downward shift of the bands by \(eV_{\rm ch}\) relative to the channelβs chemical potential \(\mu_{\rm ch}\) which is fixed to the ground reference. The jump by \(V_{\rm ch}\) between drain (or source) and the channel, see (d), arises due to channelβs quantum capacitance when the channel acquires a charge density.
Figure 3: Variation with \(V_{g}\) of \(V_{ch}\) in Volts, channel charge density \(\sigma\) in \(\gamma C_{ox}(k_{B}T/e)\) units and \(\ln\left(G\right)\) for no traps case. One can see that for this case the \(eV_{th}\) values for channel conduction are close to the conduction and valence band energies.
an energy barrier between the trap- and channel-bound electron states. The barrier heights for different traps can be different. Small barrier heights will lead to a very small electron exchange time. Second, the energy for trap bound electron state need not be close to \(E_{\rm c}\) and it can be anywhere relative to the bands. Third, a band-hole or band-electron will not be able to bind to a trap, and particularly so for large barriers or small coupling. The channel carriers will feel a weak electrostatic potential of the trap ion affecting its mobility. Further, a trap, assumed to be isolated in the sense of not interacting with other traps, will only form localized states and not extended band-like states.
The relative energies of an electron bound to a trap versus that in the channel will be dictated by the electrochemical reduction potential of the two. A more positive reduction potential indicates an increased affinity for electron. For a donor trap \(D\) if the reduction potential for the reaction \(D^{+}+e^{-}\to D\) exceeds the channel's reduction potential, _i.e._\(Ch+e^{-}\to Ch^{-}\) then it is energetically favorable for donor trap to stay in neutral state. In the schematic band-energy diagram, see Fig. 4(a), this can be depicted as an electron-occupied neutral donor trap-level being lower than \(\mu_{\rm ch}\) by its excess reduction-potential relative to the channel. This also means that if \(\mu_{\rm ch}\) is decreased, see Fig. 4(b), it will become energetically favorable for the donor trap to transfer an electron to the channel and exist as \(D^{+}\)[26]. The actual electron transfer can be slow depending on the barrier height between the channel and trap-bound states. Similar argument can be put forward for an acceptor trap. If the reduction potential for \(A+e^{-}\to A^{-}\) exceeds that of \(Ch^{+}+e^{-}\to Ch\), then it is energetically favorable for the acceptor trap to be in \(A^{-}\) state. Fig. 4(c),(d) depict the two energetically favorable scenarios for \(\mu_{\rm ch}\) relative to trap energy. Over the limited accessible gate-voltage range the other higher ionization states of the traps may not be accessible.
In equilibrium, a donor trap with energy several \(k_{\rm B}T\) lower than \(\mu_{\rm ch}\) will remain un-ionized and will not contribute carriers to the channel. Thus, the traps that are far away in energy from the range of interest of \(\mu_{\rm ch}\) will not change their charge state. A net charge density, due to such far-energy traps, will give rise to an equal and opposite charge density in the channel and it can be incorporated in the model through a fixed \(\sigma_{0}\).
Further, the acceptor traps that can change their state in the relevant range of \(\mu_{\rm ch}\) can be, for the modeling purpose, considered as donor traps by incorporating an appropriate change in \(\sigma_{0}\). Suppose the areal density of acceptor and donor traps, respectively, at a given energy \(E\) is \(N_{\rm A}\) and \(N_{\rm D}\). When a fraction \(1-x\) and \(x\), respectively, of these traps are ionized the total charge density in the traps will be \(\sigma_{0}-e(1-x)N_{\rm A}+exN_{\rm D}\), _i.e._\(\sigma_{0}-eN_{\rm A}+ex(N_{\rm A}+N_{\rm D})\). This can be interpreted as a fixed charge density \(\sigma_{0}-eN_{\rm A}\) together with \(N_{\rm A}+N_{\rm D}\) donor traps of which \(x\) fraction is ionized. This will be indistinguishable, for modeling the channel conduction, from the actual state. Alternatively, we can simplify by turning the donors into acceptors with appropriate change in fixed charge; however, we adopt the former as a convention.
The interface traps, all donors, can thus be assumed to have certain energy dependent density of states. The other characteristic feature is the barrier for electron exchange between the trap and the channel. This determines the time scale over which the electron can transfer, between a trap state and the mobile channel states, by thermal activation or perhaps by tunneling.
A simple model of the dynamics of the trap charge is presented in the next section, while here we illustrate the equilibrium channel properties due to fast traps having a constant density of states \(g_{\rm ftr}\). These traps act faster than the gate-voltage sweep time scale. Thus the channel and traps exist in equilibrium for all \(V_{\rm g}\) values, where the (donor) traps with energy sufficiently below \(\mu_{\rm ch}\) will be in the neutral state while those above will be in the positively charged state. There is a fixed charge density \(\sigma_{0}\) in the traps as discussed earlier. In such equilibrium, when \(V_{\rm g}\) is increased from zero, \(\mu_{\rm ch}\) will change to \(\mu_{\rm ch}^{0}+eV_{\rm ch}\) such that the interface traps in the energy range \(eV_{\rm ch}\) will get neutralized from their positively ionized state, see Fig. 4(a) and (b). Therefore, the interface traps' charge density will change from \(\sigma_{0}\) to \(\sigma_{0}-e^{2}V_{\rm ch}g_{\rm ftr}\). This will contribute an equal and opposite charge at the gate electrode. Thus Eq. 3 will change to
\[V_{\rm g}= (1+\gamma_{\rm fr})V_{\rm ch}-\sigma_{0}/C_{\rm ox}\] \[+\gamma\left(\frac{k_{\rm B}T}{e}\right)\ln\left[\frac{f(T,E_{\rm v }-\mu_{\rm ch}^{0}-eV_{\rm ch})}{f(T,\mu_{\rm ch}^{0}+eV_{\rm ch}-E_{\rm c}) }\right], \tag{5}\]
Figure 4: Schematics to illustrate the correlation between the equilibrium charge state of the donor and acceptor traps for different chemical potential values of the channel. The energies of the two traps relative to \(\mu_{\rm ch}\) are dictated by the electrochemical reduction potentials as discussed in the text. When \(V_{\rm g}\) is reduced [from (a) to (b) or (c) to (d)] leading to raising of bands (relative to \(\mu_{\rm ch}\)) and trap level, both the donor trap and acceptor trap, at the energies shown, lose an electron for the new equilibrium state.
with \(\gamma_{\rm ftr}=e^{2}g_{\rm ftr}/C_{\rm ox}\), _i.e._ the ratio of the traps' quantum capacitance to the gate capacitance. The other two equations, _i.e._ Eqs. 2 and 4, remain the same. Fig. 5 shows the calculated variation of \(V_{\rm ch}\), \(\sigma_{\rm ch}\) and \(\ln(G)\) for an MoS\({}_{2}\) channel at \(T=300\) K as a function of \(V_{\rm g}\). This is for \(\gamma_{\rm ftr}=50\), corresponding to \(g_{\rm ftr}=3.7\times 10^{12}\) eV\({}^{-1}\)cm\({}^{-2}\), and \(\mu_{\rm ch}^{0}=0.7\) eV. Using Eq. 5 with \(V_{\rm ch}=0=V_{\rm g}\), the latter corresponds to a static charge density \(\sigma_{0}/e=g_{\rm 2D}k_{\rm B}T\ln[f(T,E_{\rm v}-\mu_{\rm ch}^{0})/f(T,\mu_{\rm ch}^{0}-E_{\rm c})]=6.3\times 10^{7}\) cm\({}^{-2}\).
A non-zero \(g_{\rm ftr}\) thus slows down the change in \(V_{\rm ch}\) with \(V_{\rm g}\) and also leads to an increase in the \(V_{\rm th}\) values. This could be a reason for not seeing the hole-doped transport regime over a large \(V_{\rm g}\) range accessible in MoS\({}_{2}\) FETs. In the non-degenerate limit, _i.e._\((\mu_{\rm ch}-E_{\rm v}),(E_{\rm c}-\mu_{\rm ch})\gg k_{\rm B}T\), we use Eq. 4 to get \(d\ln(G)/dV_{\rm ch}=e/k_{\rm B}T\) for electron doping and Eq. 5 to get \(dV_{\rm g}/dV_{\rm ch}=1+\gamma_{\rm ftr}\). This leads to the expression for the subthreshold swing (SS) as an experimental method to find \(\gamma_{\rm ftr}\), _i.e._
\[{\rm SS}=\left(\frac{d\log(G)}{dV_{\rm g}}\right)^{-1}=\frac{k_{\rm B}T\ln 1 0}{e}(1+\gamma_{\rm ftr}). \tag{6}\]
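As a quick illustration of Eq. 6, the following snippet converts between the subthreshold swing and \(\gamma_{\rm ftr}\); the 3 V/dec value used in the example is the one reported for this device in Sec. IV.

```python
# Sketch of Eq. 6: subthreshold swing <-> fast-trap capacitance ratio.
import numpy as np

KB_T_OVER_E = 0.02585          # k_B T / e at 300 K, in volts

def ss_from_gamma(gamma_ftr):
    """Subthreshold swing in V/decade, Eq. 6."""
    return KB_T_OVER_E * np.log(10.0) * (1.0 + gamma_ftr)

def gamma_from_ss(ss):
    """Invert Eq. 6 to estimate gamma_ftr from a measured swing."""
    return ss / (KB_T_OVER_E * np.log(10.0)) - 1.0

print(ss_from_gamma(0.0))      # ~0.06 V/dec, the trap-free limit
print(gamma_from_ss(3.0))      # ~49, consistent with gamma_ftr ~ 50
```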
### III-C: Trap dynamics
The fast traps primarily increase the magnitude of \(V_{\rm th}\) and the SS. The slow traps, particularly the ones having a time scale comparable to the \(V_{\rm g}\) sweep time, are responsible for the positive hysteresis. The traps that are extremely slow to respond over the \(V_{\rm g}\) sweep time will only lead to a shift in \(\sigma_{0}\). Fig. 6(a) depicts a schematic where an electron can either be in the channel at \(\mu_{\rm ch}\) or in the trap at energy \(E\). There is a barrier between these two states whose height \(\Delta_{1,2}\) will appear different from the two sides. This results in different transition rates \(\tau_{1,2}^{-1}\) from the two sides. The time evolution of the occupancy \(p\) of an electron being in the trap at energy \(E\) will be dictated by,
\[\frac{dp}{dt}=-\tau_{2}^{-1}p+\tau_{1}^{-1}(1-p)=\tau_{1}^{-1}-(\tau_{1}^{-1}+ \tau_{2}^{-1})p. \tag{7}\]
This leads to the solution
\[p(t,E)=\frac{\tau_{2}}{\tau_{1}+\tau_{2}}+\left[p(0,E)-\frac{\tau_{2}}{\tau_{ 1}+\tau_{2}}\right]e^{-(\tau_{1}^{-1}+\tau_{2}^{-1})t}. \tag{8}\]
Thus, at equilibrium, _i.e._\(t\to\infty\), \(p_{\rm eq}=\tau_{2}/(\tau_{1}+\tau_{2})\).
Assuming identical attempt rates \(\tau_{\rm a}\) from the two sides, we get \(\tau_{1,2}=\tau_{\rm a}\exp{(\Delta_{1,2}/k_{\rm B}T)}\). With the barrier height difference \(\Delta_{2}-\Delta_{1}=\mu_{\rm ch}-E\) we get \(\tau_{2}/\tau_{1}=\exp{[(\mu_{\rm ch}-E)/k_{\rm B}T]}\). Thus \(p_{\rm eq}\) can be written as,
\[p_{\rm eq}(E)=\tau_{2}/(\tau_{1}+\tau_{2})=[1+e^{(E-\mu_{\rm ch})/k_{\rm B}T}] ^{-1}, \tag{9}\]
which is the Fermi distribution \(f(T,E-\mu_{\rm ch})\). Note that this is not an exact result and it will depend on the attempt rates from the two sides and on the degeneracy. In general a dopant-state occupancy is also not given by the exact F-D distribution [27] due to the spin degeneracy of the dopant level.
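The single-trap relaxation of Eqs. 7-9 can be checked with a few lines of code; the sketch below (illustrative parameter values only, not taken from the fits in this work) shows the occupancy relaxing exponentially towards the Fermi-function value.

```python
# Sketch of Eqs. 7-9: relaxation of a single trap's occupancy towards the
# Fermi function. The 80 meV offset and tau2 = 1 (arbitrary unit) are assumed.
import numpy as np

KB = 8.617e-5   # eV/K

def occupancy(t, tau1, tau2, p0):
    """Eq. 8: occupancy at time t for filling rate 1/tau1 and emptying rate 1/tau2."""
    p_eq = tau2 / (tau1 + tau2)
    return p_eq + (p0 - p_eq) * np.exp(-t * (1.0 / tau1 + 1.0 / tau2))

T, mu_minus_E = 300.0, 0.08                       # chemical potential 80 meV above the trap
tau2 = 1.0                                        # arbitrary time unit
tau1 = tau2 * np.exp(-mu_minus_E / (KB * T))      # tau2/tau1 = exp[(mu-E)/kT]
print(occupancy(np.array([0.0, 0.1, 1.0, 10.0]), tau1, tau2, p0=0.0))
print(1.0 / (1.0 + np.exp(-mu_minus_E / (KB * T))))   # Fermi limit, Eq. 9
```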
The interface traps' areal charge density, dictated by traps' occupancy, will in turn determine the filling of the channel bands or \(\mu_{\rm ch}\) value. We work in an independent electron approximation where the electron filling only affects \(\mu_{\rm ch}\) and not the individual energies including channel-bands, traps, or the barrier \(\Delta_{2}\). With change in \(\mu_{\rm ch}\) the barrier \(\Delta_{1}\) seen by the band electrons at \(\mu_{\rm ch}\) will change but \(\Delta_{2}\) will remain the same, see Fig. 6(a). Thus
Figure 6: (a) Schematic of the barrier between the trap at energy \(E\) and the channel filled with electrons up to its chemical potential \(\mu_{\rm ch}\). (b) The trap levels relative to the bands. In equilibrium, the filled (orange) and empty (blue) states of the traps and the band are dictated by the Fermi function with chemical potential \(\mu_{\rm ch}\).
eliminating \(\tau_{1}\) in favor of \(\tau_{2}\), \(\mu_{\rm ch}\) and \(E\), one can write Eq. 7 and 8, respectively, as,
\[\dot{p}=\frac{f(T,E-\mu_{\rm ch})-p}{\tau_{2}[1-f(T,E-\mu_{\rm ch})]}, \tag{10}\]
and
\[p(t,E)=p(0,E)e^{-t/\tau_{2}f(T,\ \mu_{\rm ch}-E)}\\ +f(T,E-\mu_{\rm ch})[1-e^{-t/\tau_{2}f(T,\ \mu_{\rm ch}-E)}]. \tag{11}\]
Here we assumed a time independent \(\mu_{\rm ch}\) to solve for \(p(t,E)\). However, \(\mu_{\rm ch}\) will actually be determined by the time-dependent occupancy \(p\) of various traps. We discuss this coupling between \(p\) and \(\mu_{\rm ch}\) next.
### III-D: Time dependence of channel properties
When the charge stored in interface traps changes with time, the density of mobile carriers, and thus \(\mu_{\rm ch}\) will also change. The carriers in the channel are assumed to equilibrate over a time much smaller than \(\tau_{1,2}\). We denote the areal density of states of slow traps with a transition time \(\tau_{2}\) in range \(\tau\) to \(\tau+d\tau\) as \(g_{str}(\tau,E)d\tau\).
When \(V_{\rm g}\) is changed from zero, the displacement charge and the fast traps will react immediately leading to a new \(\mu_{\rm ch}\) and then the slow traps will start changing their occupancy. This will lead to a slow change in \(\mu_{\rm ch}\). At an instant \(t\) if the occupancy of the traps is \(p(t,\tau,E)\) then the change in charge density of slow traps from the \(t=0\) equilibrium state will be given by, \(\Delta\sigma_{\rm str}(t)=e\int\int[1-p(t,\tau,E)-1+p(0,\tau,E)]g_{\rm str}(\tau,E)d\tau dE\). By differentiating we get,
\[\dot{\sigma}_{\rm str}=-e\int\int\dot{p}(t,\tau,E)g_{\rm str}(\tau,E)d\tau dE. \tag{12}\]
With slow traps, the \(\sigma_{0}\) in Eq. 5, relating instantaneous \(V_{\rm g}\) and \(V_{\rm ch}\), gets replaced by \(\sigma_{0}+\sigma_{\rm str}\). Differentiating this modified relation with respect to time, one gets,
\[\dot{V}_{\rm g}=\left(1+\gamma_{\rm ftr}\right)\dot{V}_{\rm ch}- \dot{\sigma}_{\rm str}/C_{\rm ox}+\gamma\dot{V}_{\rm ch}\times\\ \left[2-f(T,E_{\rm v}-\mu_{\rm ch}^{0}-eV_{\rm ch})-f(T,\mu_{\rm ch}^ {0}+eV_{\rm ch}-E_{\rm c})\right]. \tag{13}\]
A simple case is the trap states existing only at certain energy \(E_{0}\) and with characteristic time \(\tau_{0}\), _i.e._\(g_{\rm str}(\tau,E)=n_{\rm str}\delta(\tau-\tau_{0})\delta(E-E_{0})\) with \(n_{\rm str}\) as the areal density. This leads to \(\dot{\sigma}_{\rm str}=-e\dot{p}n_{\rm str}\) with \(\dot{p}\) dictated by Eq. 10 with \(E=E_{0}-\mu_{\rm ch}^{0}\) and \(\tau_{2}=\tau_{0}\). Using this in Eq. 13, we get
\[\dot{V}_{\rm ch}\left[1+\gamma_{\rm ftr}+\gamma\{1-f(T,E_{\rm v}-\mu_{\rm ch}^{0}- eV_{\rm ch})+f(T,E_{\rm c}-\mu_{\rm ch}^{0}-eV_{\rm ch})\}\right]=\dot{V}_{\rm g }-\left(\frac{en_{\rm str}}{\tau_{0}C_{\rm ox}}\right)\frac{f(T,E_{0}-\mu_{ \rm ch}^{0}-eV_{\rm ch})-p}{1-f(T,E_{0}-\mu_{\rm ch}^{0}-eV_{\rm ch})}. \tag{14}\]
Let's consider a step change in \(V_{\rm g}\) from the \(V_{\rm g}=0\) equilibrium state to \(V_{\rm g1}\) at \(t=0\) and then \(\dot{V}_{\rm g}=0\) for \(t>0\). \(V_{\rm ch}\), \(p\) and the channel conductance \(G\) will evolve with time, with the latter being directly measurable. This evolution is dictated by two coupled first-order non-linear ordinary differential equations, _i.e._ Eqs. 10 and 14, that can be solved numerically. Figure 7 shows the time evolution at \(T=300\) K of \(V_{\rm ch}\), \(p\) and \(G\) for traps at a single energy \(E_{0}=0.82\) eV (from the middle of the gap) and when \(V_{\rm g}\) is changed from zero to \(21.4\) V. We assume \(\mu_{\rm ch}^{0}=0.7\) eV as arising from \(\sigma_{0}/e=6.3\times 10^{7}\) cm\({}^{-2}\). The other parameters used are: \(n_{\rm str}=1.48\times 10^{11}\) cm\({}^{-2}\), giving \((en_{\rm str}/C_{\rm ox})=2\) V, and \(\gamma_{\rm ftr}=50\). The jump in \(V_{\rm g}\) leads to a jump in \(V_{\rm ch}\) from zero to \(0.2\) V, and then it decreases continuously to about \(0.15\) V as the traps' charge increases. The channel Fermi energy \(\mu_{\rm ch}=\mu_{\rm ch}^{0}+eV_{\rm ch}\) thus jumps to \(0.9\) eV and then decreases to about \(0.85\) eV. This is just above the trap energy of \(E_{0}=0.82\) eV and thus leads to more than \(75\%\) filling of the traps from nearly zero, see the discontinuous line in Fig. 7(a). As seen in this figure, none of the \(V_{\rm ch}\), \(p\) or \(G\) time-evolutions can actually be described by an exponential. This is illustrated in Fig. 7(b) for \(G\), where the dashed line shows an exponential relaxation with a characteristic rate \(25\tau_{0}^{-1}\). This rate closely matches the initial relaxation rate, _i.e._\([\tau_{0}f(T,\mu_{\rm ch}-E_{0})]^{-1}\) in Eq. 11, which, with \(\mu_{\rm ch}-E_{0}=80\) meV, works out as \(23.2\tau_{0}^{-1}\) at room temperature.
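A sketch of this calculation is given below. It integrates Eqs. 10 and 14 with the parameters listed above (with \(e=1\) so that energies are in eV, potentials in volts and time in units of \(\tau_{0}\)), taking the post-step values \(V_{\rm ch}(0^{+})\approx 0.2\) V and \(p(0)\approx 0\) directly from the text rather than re-deriving them; it is meant only to illustrate the coupled, non-exponential dynamics and will not reproduce Fig. 7 exactly.

```python
# Sketch: coupled dynamics of Eqs. 10 and 14 after a step in V_g, with V_g
# then held fixed (V_g-dot = 0). Parameters as quoted in the text; the initial
# condition after the step is taken from the text rather than re-derived.
import numpy as np
from scipy.integrate import solve_ivp

KB = 8.617e-5
T, MU0, EC, EV, E0 = 300.0, 0.7, 1.0, -1.0, 0.82
GAMMA, GAMMA_FTR = 3570.0, 50.0
EN_STR_OVER_COX = 2.0            # e*n_str/C_ox in volts
TAU0 = 1.0                       # trap time constant (time unit)

def f(E):
    return 1.0 / (1.0 + np.exp(E / (KB * T)))

def rhs(t, y):
    vch, p = y
    ftrap = f(E0 - MU0 - vch)                          # trap-level Fermi factor
    pdot = (ftrap - p) / (TAU0 * (1.0 - ftrap))        # Eq. 10
    denom = 1.0 + GAMMA_FTR + GAMMA * (1.0 - f(EV - MU0 - vch) + f(EC - MU0 - vch))
    vchdot = -(EN_STR_OVER_COX / TAU0) * (ftrap - p) / (1.0 - ftrap) / denom  # Eq. 14
    return [vchdot, pdot]

sol = solve_ivp(rhs, (0.0, 1.0), [0.2, 0.0], dense_output=True, rtol=1e-8)
for t in (0.0, 0.05, 0.2, 1.0):
    vch, p = sol.sol(t)
    print(f"t = {t:4.2f} tau0:  V_ch = {vch:.3f} V, trap occupancy p = {p:.2f}")
```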
### III-E: Conductance hysteresis and blocking transition
On SiO\({}_{2}\) the few layer MoS\({}_{2}\) is experimentally observed to be n-doped with its \(\mu_{\rm ch}\) close to \(E_{\rm c}\). Thus \(V_{\rm th}\) for n-type conduction is usually found within \(V_{\rm g}=\pm 50\) V and p-type conduction is not observed. In the absence of slow traps one does not expect significant hysteresis in transfer characteristics. In such a case when \(V_{\rm g}\) is ramped forward from an extreme negative to extreme positive value the
n-type conduction will start at a threshold \(V_{\rm g}\) value, say \(V_{\rm th0}\) and it would stop at the same value when \(V_{\rm g}\) is ramped back.
Fig. 8(a) shows a measured conductance hysteresis loop at room temperature when \(V_{g}\) is changed from 0 to -80 V then to +80 V and finally back to zero, all at the same rate. Fig. 8(b) is the schematic of how the slow traps charge density \(\sigma_{\rm str}\) and \(\mu_{\rm ch}\) change during this \(V_{\rm g}\) cycle. When \(V_{\rm g}\) is ramped to \(-V_{0}\) from zero over certain time \(\tau_{\rm m}\), both the channel and slow traps accumulate positive charge leading to a rise in \(\sigma_{\rm str}\) and lowering of \(\mu_{\rm ch}\) relative to \(E_{\rm c}\). Now when \(V_{\rm g}\) is ramped forward towards \(+V_{0}\), the displacement charge in the channel changes fast while \(\sigma_{\rm str}\) turns around slowly. This leads to a sharp rise in \(\mu_{\rm ch}\) which crosses \(\mu_{\rm th}\) at some threshold \(V_{\rm g}=V_{\rm thf}<V_{\rm th0}\) when \(\sigma_{\rm str}=\sigma_{\rm f}\). By this point only some traps change back their charge state to negative and the remaining still contribute mobile electrons in the channel together with those due to rising \(V_{\rm g}\). At \(V_{\rm g}=+V_{0}\) the traps accumulate a negative charge density and some of it, say \(\sigma_{\rm b}\), will still remain when \(V_{\rm g}\) turns around to reach \(V_{\rm thb}>V_{\rm th0}\) at which the conduction stops.
At the conduction threshold, the channel carrier density, the chemical potential, and thus the quantum capacitance as well as the charge stored in fast traps will be the same and independent of the \(V_{\rm g}\) history. Therefore, the difference in charge density at the gate electrode for the two threshold \(V_{\rm g}\) values will be equal to the difference in the charge density of the traps, _i.e._\(\sigma_{\rm f}-\sigma_{\rm b}=C_{\rm ox}(V_{\rm thb}-V_{\rm thf})\) or \(\Delta\sigma_{\rm str}=C_{\rm ox}\Delta V_{\rm th}\). Our objective here is to understand and model the temperature dependence of \(\Delta V_{\rm th}\), which is proportional to \(\Delta\sigma_{\rm str}\). The physics of this is similar to the super-paramagnet hysteresis, which is briefly discussed in the Appendix with the parameters that are relevant for traps. In high electron mobility transistors based on semiconductor heterojunctions the hysteresis, similar to superparamagnets, is found only at low temperatures [28], presumably due to a better coupling of the traps to the channel.
When one ramps \(V_{\rm g}\) to \(+V_{0}\), a given trap's occupancy
Figure 8: (a) shows the measured gate dependent drain current for a few layer MoS\({}_{2}\) at \(V_{\rm ds}=\) 1 V as a function of \(V_{\rm g}\) and over a \(V_{\rm g}\) cycle from 0 to -80V, then to +80 and back to zero. (b) shows the schematic changes in slow-trap charge density \(\sigma_{\rm str}\) (solid red line), \(\mu_{\rm ch}\) (solid blue line) over a cyclic change in \(V_{\rm g}\) (solid black line) between \(\pm V_{0}\). The discontinuous red line depicts \(\sigma_{\rm str}\) at initial and final zero \(V_{\rm g}\) and the discontinuous blue line shows the \(E_{\rm c}\) relative to which \(\mu_{\rm ch}\) changes. The discontinuous horizontal black line just below \(E_{\rm c}\) shows the \(\mu_{\rm th}\) at which the channel starts conducting.
Figure 7: (a) shows the time evolution of channel voltage \(V_{\rm ch}\) and trap occupancy \(p\). The solid line in (b) shows the time evolution of conductance \(G\) for a step change in \(V_{\rm g}\) from zero to 21.4 V at \(t=0\) (see text for details). The dashed line in (b) depicts an exponential decay function with characteristic time of \(0.04\tau_{0}\).
will change according to Eq. 7 with a time dependent \(\mu_{\rm ch}=\mu_{\rm ch}^{0}+eV_{\rm ch}\) dictated by Eqs. 12 and 13. This makes the occupancy of different traps coupled and rather complex. \(\Delta V_{\rm th}\) is dictated by the difference in occupancy of the traps at the two threshold voltages. In the absence of the detailed knowledge about the traps' distribution, _i.e._\(g_{\rm str}(\tau,E)\), and associated barriers we make certain simplifying assumptions. The magnitude of hysteresis and large SS value imply that the overall, fast and slow, trap density is large. We assume that this makes the overall change in \(\mu_{\rm ch}\) much smaller than the energy gap \(E_{\rm g}\) as well as the energy barrier \(\Delta_{2}\). This also implies that only the traps in a narrow energy range, as compared to \(E_{\rm g}\) and \(\Delta_{2}\), change their state over experimental \(V_{\rm g}\) sweep range.
For the temperature dependence \(\Delta V_{\rm th}\), the exact details of \(V_{\rm g}\) cycle will not make a significant difference as long as the overall time scale of the cycle is the same. We thus consider a case where \(V_{\rm g}\) is first kept at zero for long enough time to achieve an equilibrium occupancy of traps and then it is abruptly changed to \(-V_{0}\) and held at this value for time \(\tau_{\rm m}\) and then it is ramped to zero over time \(\tau_{\rm m}\). In this way \(\mu_{\rm ch}\) will first decrease to certain lowest value \(\mu_{\rm ch-}\) and then rise passing through \(\mu_{\rm th}\) at certain \(V_{\rm g}=V_{\rm thf}\) where the channel starts conducting. We consider a similar excursion from \(V_{\rm g}=0\) equilibrium state, where the channel is insulating, to \(+V_{0}\) where it is kept for \(\tau_{\rm m}\). The channel will start conducting during this \(\tau_{\rm m}\) when \(\mu_{\rm ch}\) rises above \(\mu_{\rm th}\) and reaches a maximum value \(\mu_{\rm ch+}\). When \(V_{\rm g}\) is ramped back to zero over time \(\tau_{\rm m}\), \(\mu_{\rm ch}\) will now go below \(\mu_{\rm th}\) at certain \(V_{\rm g}=V_{\rm thb}\) where the channel stops conducting.
We write from Eq. 11 for the difference in occupancy of a trap at energy \(E\), _i.e._\(\Delta p=p_{+}-p_{-}\), corresponding to the two extreme gate voltages \(\pm V_{0}\) as,
\[\Delta p=f(T,E-\mu_{\rm ch+})\left[1-\exp\left(-\frac{\tau_{\rm m} }{\tau_{2}f(T,\mu_{\rm ch+}-E)}\right)\right]\] \[-f(T,E-\mu_{\rm ch-})\left[1-\exp\left(-\frac{\tau_{\rm m}}{\tau_{ 2}f(T,\mu_{\rm ch-}-E)}\right)\right]. \tag{15}\]
Here, we have assumed identical initial occupancy of the traps before \(V_{\rm g}\) is brought to the two extreme values. Note also that \(\tau_{2}=\tau_{\rm a}\exp(\Delta_{2}/k_{\rm B}T)\) is a steep function of temperature \(T\) in the range of interest.
As discussed earlier, the traps that actually change their charge state in response to \(V_{\rm g}\) will be in a narrow energy range. We believe that the potential barrier for an interface trap to change its charge state is actually much higher, _i.e._\(\Delta_{1},\Delta_{2}>>k_{\rm B}T,|\Delta_{1}-\Delta_{2}|\). So the temperature dependence of \(\tau_{2}f(T,\mu_{\rm ch}-E)\) is primarily dictated by \(\Delta_{2}\). In fact, the distribution in \(\Delta_{2}\) or \(\tau_{2}\) as compared to that in \(E\) of relevant slow traps dominates the behavior. Thus we absorb \(f(T,\mu_{\rm ch}-E)\) in \(\tau_{2}\) for the purpose of variation with temperature and write,
\[\Delta p(\tau_{\rm m})=[f(T,E-\mu_{\rm ch+})-f(T,E-\mu_{\rm ch-})]\times\] \[\left[1-\exp\left(-\frac{\tau_{\rm m}/\tau_{\rm a}}{\exp\left( \Delta_{2}/k_{\rm B}T\right)}\right)\right]. \tag{16}\]
The traps' occupancy that actually dictates \(\sigma_{\rm str}\) is the one reached when \(V_{\rm g}\) is ramped back towards zero over a time \(\sim\tau_{\rm m}\) from the two extremes such that \(\mu_{\rm ch}=\mu_{\rm th}\), see Fig. 8. This will change \(\Delta p\) by a factor \(\sim\exp\left(-\frac{\tau_{\rm m}/\tau_{\rm a}}{\exp\left(\Delta_{2}/k_{\rm B }T\right)}\right)\). This is similar to the super-paramagnets discussed in the Appendix. Different traps may have different energy barriers and, given that only traps in a narrow energy range near \(\mu_{\rm ch}\) change their charge state, the distribution of \(\Delta_{2}\) will dominate the temperature dependence of \(V_{\rm th}\). Combining the weakly temperature dependent prefactor \([f(T,E-\mu_{\rm ch+})-f(T,E-\mu_{\rm ch-})]\) with the slow traps' energy-barrier distribution function \(n_{\rm str}(\Delta_{2})\), we conclude
\[\Delta V_{\rm th}\propto\int n_{\rm str}(\Delta_{2})\left[1-\exp \left(-\frac{\tau_{\rm m}/\tau_{\rm a}}{\exp\left(\Delta_{2}/k_{\rm B}T\right) }\right)\right]\times\] \[\exp\left(-\frac{\tau_{\rm m}/\tau_{\rm a}}{\exp\left(\Delta_{2}/k _{\rm B}T\right)}\right)d\Delta_{2}. \tag{17}\]
In case of the same barrier value \(\Delta_{2}\) for all traps we expect a peak in \(\Delta V_{\rm th}\) at a blocking temperature \(T_{\rm B}=\Delta_{2}/[k_{\rm B}\ln(\tau_{\rm m}/\tau_{\rm a})]\). A distribution around a mean \(\Delta_{2}\) will increase the width of this peak. Another unknown parameter here is \(\tau_{\rm m}/\tau_{\rm a}\), _i.e._ the ratio of the measurement time, or \(V_{\rm g}\) sweep time, to the attempt time.
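The snippet below illustrates Eq. 17 for an assumed Gaussian barrier distribution (the mean and width used here are illustrative choices, not the fitted distribution of Sec. IV-B) and also evaluates the single-barrier blocking temperature \(T_{\rm B}\).

```python
# Sketch of Eq. 17: blocking-type temperature dependence of Delta V_th for an
# assumed Gaussian distribution of barrier heights (in kelvin, i.e. Delta_2/k_B).
import numpy as np

TAU_RATIO = 1e13                          # tau_m / tau_a
D2_MEAN, D2_SIG = 6500.0, 800.0           # assumed Gaussian barrier distribution (K)
d2 = np.linspace(2000.0, 12000.0, 2001)   # barrier grid in K

def delta_vth(T):
    """Hysteresis Delta V_th (arbitrary units) from Eq. 17 at temperature T (K)."""
    n = np.exp(-0.5 * ((d2 - D2_MEAN) / D2_SIG) ** 2)   # n_str(Delta_2), unnormalized
    x = TAU_RATIO * np.exp(-d2 / T)                      # tau_m / tau_2 for each barrier
    integrand = n * (1.0 - np.exp(-x)) * np.exp(-x)      # kernel of Eq. 17
    return np.sum(integrand) * (d2[1] - d2[0])

for T in (100.0, 180.0, 220.0, 260.0, 300.0):
    print(f"T = {T:5.1f} K  ->  Delta V_th ~ {delta_vth(T):.1f} (arb. units)")
print("single-barrier blocking temperature T_B =", D2_MEAN / np.log(TAU_RATIO), "K")
```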
## IV: Experiments on hysteresis, blocking and gate-cooling
In this section we discuss experimental measurements focusing on the slow traps in an FET device with a few-layer MoS\({}_{2}\) on SiO\({}_{2}\). This helps us understand the energy and barrier distribution associated with these traps. The observed temperature dependence of hysteresis, quantified by \(\Delta V_{\rm th}\), is presented next together with the blocking model discussed earlier. Finally, the reversible handle on \(V_{\rm th}\) through blocking of the traps in desired charge state by cooling under different gate voltages is discussed.
### IV-A: Hysteresis and time dependence at room temperature
The transfer characteristics, shown in Fig. 8(a), of a few layer MoS\({}_{2}\) FET at room temperature exhibit a large hysteresis. The on-state high conductance at \(V_{g}=+80\) V due to n-doping can be attributed to the electron rich sulfur vacancies and other n-type impurities present in natural MoS\({}_{2}\) crystals [29]. This also leads to the pinning
of \(E_{\rm c}\) of MoS\({}_{2}\) close to the Fermi energy of the contact metal (gold) and thus a negligible electron Schottky barrier at the MoS\({}_{2}\)-metal contacts [30; 31]. The blue line in Fig. 8(a) marks the subthreshold region for the backward \(V_{\rm g}\) sweep. The subthreshold swing (SS) from this line works out as 3 V/dec as opposed to 0.06 V/dec, _i.e._ the value expected for no traps, see Eq. 6. This measured SS gives \(\gamma_{\rm ftr}\approx 50\) and \(g_{\rm ftr}=3.7\times 10^{12}\) eV\({}^{-1}\)cm\({}^{-2}\).
Fig. 9(a) shows the measured \(I_{\rm ds}-V_{\rm g}\) curves for different \(V_{\rm g}\) sweep ranges varying from \(\pm 10\) V (\(\Delta V_{\rm g}=20\) V) to \(\pm 90\) V (\(\Delta V_{\rm g}=180\) V). There is negligible hysteresis for the \(\pm 10\) V sweep range as the \(V_{\rm th}\) values for both sweep directions are well within this sweep range and nearly equal. With increasing sweep range, \(V_{\rm thf}\) reduces and \(V_{\rm thb}\) increases, leading to a monotonic rise in \(\Delta V_{\rm th}\), see Fig. 9(b). Thus the slow traps, responsible for the hysteresis, are nearly uniformly distributed over the \(\mu_{\rm ch}\) range accessible up to the largest \(V_{\rm g}\) sweep range. From \(\Delta V_{\rm th}\) we can find the areal density of slow traps responsible for hysteresis for a given \(\Delta V_{\rm g}\) by using \(C_{\rm ox}\Delta V_{\rm th}/e\) with \(C_{\rm ox}/e=7.6\times 10^{10}\) cm\({}^{-2}\)V\({}^{-1}\). The typical resulting values of the slow-trap areal density, \(\sim 10^{12}\) cm\({}^{-2}\), are smaller than in usual three-dimensional (3D) semiconductors and similar to those in other 2D materials like graphene [32; 33].
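The conversion from \(\Delta V_{\rm th}\) to the slow-trap areal density used here is a one-liner; the \(\Delta V_{\rm th}\) values in the example below are illustrative, not the measured set.

```python
# Sketch: slow-trap areal density n_slow = C_ox * Delta_Vth / e, with
# C_ox/e = 7.6e10 cm^-2 V^-1 for 300 nm SiO2, for a few illustrative Delta_Vth.
COX_OVER_E = 7.6e10          # cm^-2 V^-1
for dvth in (5.0, 15.0, 30.0):
    print(f"Delta V_th = {dvth:4.1f} V  ->  n_slow ~ {COX_OVER_E * dvth:.1e} cm^-2")
```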
A careful look at Fig. 9(b) shows an asymmetry between \(V_{\rm thf}\) and \(V_{\rm thb}\), with the former changing more with \(\Delta V_{\rm g}\) than the latter. This can be expected even for a uniform distribution of traps, as the magnitude of the change in \(\mu_{\rm ch}\) for positive \(V_{\rm g}\) is less than that for negative \(V_{\rm g}\). This is due to the rapid increase in the channel's quantum capacitance when \(\mu_{\rm ch}\) approaches \(E_{\rm c}\). This amounts to the activation of traps in a narrower energy range for a positive \(V_{\rm g}\) than for a negative \(V_{\rm g}\) of the same magnitude. A continuous rise in the rate at which \(V_{\rm thb}\) changes with \(\Delta V_{\rm g}\), up to the highest \(\Delta V_{\rm g}\), implies an increase in the slow traps' DOS near \(E_{\rm c}\). Also, towards large \(\Delta V_{\rm g}\) values \(V_{\rm thf}\) seems to saturate, indicating a reduction in the slow traps' density of states when \(\mu_{\rm ch}\) moves away from \(E_{\rm c}\) and into the gap. From the monotonic rise in \(\Delta V_{\rm th}\) with \(\Delta V_{\rm g}\) we conclude that the slow traps are somewhat uniformly distributed, although from the details of the \(V_{\rm thf}\) and \(V_{\rm thb}\) variation they seem to be concentrated over a limited energy range close to \(E_{\rm c}\).
Figure 10(a) shows the measured \(I_{\rm ds}\) as a function of time when \(V_{\rm g}\) is abruptly changed from \(-80\) to \(+80\) V. There is a fast initial relaxation followed
Figure 10: (a) The measured time dependence of \(I_{ds}\) when \(V_{\rm g}\) is changed abruptly from -80 to +80 V. The inset shows the zoomed-in initial part of the relaxation. (b) \(I_{\rm ds}\) vs Gate voltage curves for different sweep rates of back gate voltage at a fixed V\({}_{\rm ds}=1\) V. The solid squares in the inset show \(\Delta V_{\rm th}\) as a function of overall sweep time with the red line showing a fit to a sum of two exponential relaxations.
Figure 9: (a) \(I_{\rm ds}\) Vs \(V_{\rm g}\) at \(V_{\rm ds}=1\) V for different sweep ranges of \(V_{\rm g}\) from \(\pm 10\) to \(\pm 90\) V. All these curves are acquired at the same \(V_{\rm g}\) sweep rate. The inset shows the zoomed-in portion for \(\pm 10\) and \(\pm 20\) V range \(V_{\rm g}\) sweeps. (b) Variation of \(V_{\rm thf}\), \(V_{\rm thb}\) and \(\Delta V_{\rm th}\) with sweep range \(\Delta V_{\rm g}\) as extracted from (a).
by a slow stretched exponential tail indicating multiple time scales. This relaxation would be rather complex to fit to a microscopic model, as discussed earlier, in the absence of the knowledge about the distribution of trap energies and activation barriers. A fitting with multi-exponential or stretched exponential does work and it has indeed been used [15] to conclude a distribution in barrier energies. However, due to the coupling between the dynamics of different trap's occupancy and \(\mu_{\rm ch}\), even traps at single energy and with the same barrier can lead to non-exponential relaxation, with a long tail that can resemble a stretched-exponential, see Fig. 7(b).
As a consequence of this slow relaxation of traps, the hysteresis has a significant dependence on the \(V_{\rm g}\) sweep rate for a fixed sweep range. Fig. 10(b) shows the conductance hysteresis loops acquired at different sweep rates from 0.26 to 24.6 V/s. A high sweep rate also gives a higher peak conductance as a smaller number of traps acquire negative charge, leading to more electrons in the channel. In fact, for some of the very fast sweep rates, a saturation or a downturn in channel conductivity is seen with \(V_{\rm g}\) due to a delayed response of the traps which depletes electrons from the channel. As discussed earlier, the rate of filling of an empty trap state at a given energy will increase with \(V_{\rm g}\) as \(\mu_{\rm ch}\) rises with \(V_{\rm g}\). The inset of Fig. 10(b) shows the variation of \(\Delta V_{\rm th}\) as a function of the \(V_{\rm g}\) sweep time. It fits well to a double exponential function, \(\Delta V_{\rm th}=\alpha-\beta e^{-r_{1}\Delta t}-\gamma e^{-r_{2}\Delta t}\) with \(r_{1}^{-1}=35\) s, \(r_{2}^{-1}=292.5\) s and \(\alpha\), \(\beta\) and \(\gamma\) as constants.
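A sketch of such a double-exponential fit is shown below; the data points are synthetic placeholders generated from the quoted time constants and assumed amplitudes, and in practice one would pass the measured (sweep time, \(\Delta V_{\rm th}\)) pairs to the fitting routine.

```python
# Sketch: double-exponential fit of Delta V_th versus sweep time.
# The data below are synthetic placeholders, not the measured values.
import numpy as np
from scipy.optimize import curve_fit

def model(dt, alpha, beta, gamma, r1, r2):
    return alpha - beta * np.exp(-r1 * dt) - gamma * np.exp(-r2 * dt)

dt = np.linspace(5.0, 600.0, 25)                     # sweep times in seconds
y = model(dt, 30.0, 10.0, 15.0, 1/35.0, 1/292.5)     # assumed alpha, beta, gamma
y += np.random.default_rng(0).normal(0.0, 0.3, dt.size)

popt, _ = curve_fit(model, dt, y, p0=[30, 10, 15, 0.03, 0.003])
print("fitted 1/r1 = %.1f s, 1/r2 = %.1f s" % (1 / popt[3], 1 / popt[4]))
```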
### IV-B: Blocking transition of interface traps
Fig. 11(a) shows \(I_{\rm ds}-V_{\rm g}\) curves at several temperatures between 300 and 80 K over a \(\pm 80\) V \(V_{\rm g}\) sweep range at a 2.6 V/s sweep rate. For these measurements, the device was first kept at room temperature at \(V_{\rm g}=0\) for 2-3 hours in order to equilibrate the traps and then cooled and stabilized at each different temperature keeping \(V_{\rm g}=0\). The hysteresis can be seen to reduce with cooling, though the rate of reduction is not monotonic as seen in Fig. 11(b). \(\Delta V_{\rm th}\) reduces slowly near room temperature and the rate of reduction, _i.e._\(d\Delta V_{\rm th}/dT\), peaks near 225 K and then the rate as well as \(\Delta V_{\rm th}\) diminish as 80 K is approached.
When compared to superparamagnets, as discussed in the Appendix with parameters relevant to traps, one expects to see a peak in \(\Delta V_{\rm th}\). In case of actual superparamagnets, where the attempt rate as well as barrier height are much smaller, the temperature dependence of M-H curves shows an opposite behavior as hysteresis disappears with increasing temperature. In that case, due to small barrier height the barrier can be made to vanish at accessible magnetic fields. However, in case of traps in MoS\({}_{2}\) devices the barrier is large and the accessible \(V_{g}\) range only permits a small \(\mu_{ch}\) variation which is insufficient to make the barrier vanish. Thus, at low temperatures, the traps do not change their state and one does not see hysteresis. Further, the hysteresis does not vanish at high temperatures, in case of traps, as there is a distribution in barrier height \(\Delta_{2}\) which may continue to very high values for some of the traps.
The continuous line in Fig. 11(b) shows the temperature dependence of \(\Delta V_{\rm th}\) found using Eq. 17 and a \(\Delta_{2}\) distribution depicted in the inset. Here we have used a fixed \(\tau_{\rm m}/\tau_{\rm a}=10^{13}\), though a change in this value by up to two orders of magnitude only slightly affects the \(n(\Delta_{2})\) required to fit the measured \(\Delta V_{\rm th}(T)\). Traps with \(\Delta_{2}\) higher than 8000 K do not contribute to the hysteresis at temperatures of 300 K or below. One might see a decline in \(\Delta V_{\rm th}\) at still higher temperatures; however, we find that the \(I_{\rm ds}-V_{\rm g}\) curves then no longer exhibit such sharp transitions at \(V_{\rm thf}\) and \(V_{\rm thb}\). This could be due to the activation of a larger number of traps, and some of the slow
Figure 11: (a) Temperature dependence of \(\log(I_{\rm ds})\) Vs \(V_{\rm g}\) curves at \(V_{\rm ds}=1\) V between 80 and 300 K. (b) The solid circles show \(\Delta V_{\rm th}\) as a function of temperature. The solid line shows the calculated variation of \(\Delta V_{\rm th}\) using Eq. 17 with the barrier distribution function \(n(\Delta_{2})\) depicted by the solid line in the inset. This \(n(\Delta_{2})\) is the sum of three Gaussian distributions shown by discontinuous lines in the inset. The traps with \(\Delta_{2}\) beyond 8000 K do not contribute to \(\Delta V_{\rm th}\) variation over the studied temperature range.
traps may turn into fast ones at higher temperatures. Other extrinsic effects, such as traps' diffusion, may also come into play. Eventually, the very high \(V_{\rm g}\) needed to access \(V_{\rm thf}\) and \(V_{\rm thb}\), particularly at high temperatures, also leads to the breakdown of the dielectric oxide and permanent device damage.
### IV-C: Gate cooling and reversible control of \(V_{\rm th}\)
Figure 12(a) shows \(I_{\rm ds}\) Vs \(V_{\rm g}\) measured at 80 K after cooling the device from 350 K in the presence of different gate voltages, labeled as \(V_{\rm gc}\), between -80 and 90 V. The device was first warmed to 350 K in vacuum and kept at the desired \(V_{\rm gc}\) for an hour before cooling it down to 80 K. As expected there is negligible hysteresis at 80 K, but more striking is the reversible change in \(V_{\rm th}\) over a wide range from -40 to +40 V. At negative \(V_{\rm gc}\) the traps get blocked in a positively charged state. This trap charge electron-dopes the channel and thus a negative \(V_{\rm g}\) is needed to deplete it. Similarly, a positive \(V_{\rm gc}\) leads to traps blocked with negative charge that depletes the electrons from the channel, and thus a positive \(V_{\rm g}\) is needed to make it conduct. In this way the traps act as a controllable virtual gate.
Figure 12(b) shows the variation of \(V_{\rm th}\) with \(V_{\rm gc}\). The \(V_{\rm th}\) value measured at 80 K can be converted into the corresponding charge density \(\sigma_{\rm str}\) associated with the blocked slow traps. The axis labels on the right show this \(\sigma_{\rm str}/e=C_{\rm ox}V_{\rm th}/e\). Another fact from this figure is the nearly linear relation between \(V_{\rm th}\) and \(V_{\rm gc}\) with a slope close to 1/2. This indicates that about half of the charge induced by \(V_{\rm gc}\) gets stored in the blocked slow traps while the remaining half is taken up by the fast traps and channel carriers. This is striking as the change in \(\mu_{\rm ch}\) (or \(V_{\rm ch}\)) with \(V_{\rm g}\) near the conduction threshold is quite non-linear, see Fig. 5.
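The same conversion can be written as a short sketch; the (\(V_{\rm gc}\), \(V_{\rm th}\)) pairs below are illustrative values in the spirit of Fig. 12(b), not the measured data set.

```python
# Sketch: blocked slow-trap density from the gate-cooling threshold voltage,
# sigma_str/e = C_ox * V_th / e. The (V_gc, V_th) pairs are illustrative only.
COX_OVER_E = 7.6e10                       # cm^-2 V^-1
for vgc, vth in ((-80.0, -40.0), (0.0, 0.0), (90.0, 40.0)):
    print(f"V_gc = {vgc:6.1f} V, V_th = {vth:6.1f} V  ->  "
          f"blocked trap density ~ {COX_OVER_E * abs(vth):.1e} cm^-2")
```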
## V: Discussion and conclusions
When the thermally grown SiO\({}_{2}\) surface is stored in ambient air, the siloxane-terminated surface of the substrate reacts with water and gradually reverts to Si-OH; the rehydrated substrate can then act as an electron trap center [34; 35]. Further, a monolayer or submonolayer of hydrogen-bonded water stays on SiO\({}_{2}\) and cannot be removed by pumping in vacuum even over long periods. This is also a possible source of interface traps. There can also be traps or dopants within the MoS\({}_{2}\) channel, arising from donor-like S monovacancies or from more complex defects involving S vacancies. The slow traps, having barriers of large height and presumably large width, permit electron exchange only through thermal activation rather than through quantum tunneling. Other extrinsic effects are also possible, in which the change in interface charge happens through electrochemical reactions at the interface involving hydrogen and oxygen species. This may bring the diffusion barriers for these species into the picture, which can also influence the time scale of charge transfer.
Transport measurements are sensitive to the traps' distribution only over a narrow energy range near the conduction threshold. With a significant electron doping in MoS\({}_{2}\) on SiO\({}_{2}\), our results are consistent with a distribution of traps near \(E_{\rm c}\). The inability to access hole-doped conduction even down to \(V_{\rm g}=-100\) V may indicate a significant density of traps, both slow and fast, if one goes only by channel conduction. However, the contacts may also play a role, and other investigations are required to conclude on this aspect. In fact, our attempts to access the hole-doped regime by combining gate-cooling at \(V_{\rm gc}=90\) V with a large negative \(V_{\rm g}\) down to -100 V at 80 K also did not succeed.
In conclusion, a temperature dependent study of few-layer MoS\({}_{2}\) FET transfer characteristics shows hysteresis with a large difference \(\Delta V_{\rm th}\) between the backward- and forward-sweep threshold gate-voltages. This is modeled
Figure 12: (a) Effect of cooling the device from 350 K to 80 K under different applied gate voltages \(V_{\rm gc}\) from -80V to 90V. All the curves measured at 80 K and for \(V_{\rm ds}=1\) V show negligible hysteresis. (b) Variation of \(V_{\rm th}\) with \(V_{\rm gc}\). The axis labels on the right in (b) show the corresponding blocked slow trap density.
using the hysteresis in the interface-trap charge density. The model also describes the complex coupled dynamics of the channel carrier density and the traps' charge density, showing that even traps with a single energy and barrier can lead to non-exponential relaxations. The observed temperature dependence of \(\Delta V_{\rm th}\) is attributed to the blocking of traps and fitted to a distribution of energy barriers for charge exchange between the traps and the channel. Finally, the blocking helps to obtain nearly non-hysteretic behavior at 80 K with a threshold voltage programmable by the gate-cooling voltage.
## Acknowledgements
Authors acknowledge discussions on blocking transition with Ranjit Thapa and funding from SERB-DST of the Government of India.
## Appendix: Blocking in superparamagnets
A superparamagnet consists of non-interacting nanosized ferromagnetic crystals in the single-domain limit with their magnetic reversal described by the Stoner-Wohlfarth model [36]. Such crystals with uniaxial anisotropy exhibit two opposite magnetic moment \(\pm m_{\rm s}\) states of equal energy, in the absence of external field, that are separated by an anisotropy energy-barrier, say \(\Delta_{0}\). The external magnetic field \(B\) along the \(+z\)-direction makes the spin-up state favorable to the spin-down state by an energy \(2m_{\rm s}B\). Thus the barrier seen by the spin-up state is increased to \(\Delta_{0}+m_{\rm s}B\) and for the spin-down state it reduces to \(\Delta_{0}-m_{\rm s}B\). We further assume the same time scale \(\tau_{0}\) of magnetic dynamics in the two minima and the dominance of thermal activation over quantum tunneling. The rate of transition at temperature \(T\) from the spin-up to the spin-down state will then be given by \(\tau_{\uparrow\downarrow}^{-1}=\tau_{0}^{-1}\exp[-(\Delta_{0}+m_{\rm s}B)/k_{\rm B}T]\) and the reverse transition rate will be \(\tau_{\downarrow\uparrow}^{-1}=\tau_{0}^{-1}\exp[-(\Delta_{0}-m_{\rm s}B)/k_{\rm B}T]\). The average magnetic moment is given by \(m=\langle m\rangle=(p_{\uparrow}-p_{\downarrow})m_{\rm s}\) with \(p_{\uparrow}\) and \(p_{\downarrow}\) as the probabilities of being in the respective spin-states. With \(p_{\uparrow}+p_{\downarrow}=1\) we get \(\langle m\rangle=(2p_{\uparrow}-1)m_{\rm s}\). The time dependence of \(p_{\uparrow}\) is dictated by,
\[\frac{dp_{\uparrow}}{dt}=-\tau_{\uparrow\downarrow}^{-1}p_{\uparrow}+\tau_{ \downarrow\uparrow}^{-1}p_{\downarrow}=-(\tau_{\uparrow\downarrow}^{-1}+\tau_ {\downarrow\uparrow}^{-1})p_{\uparrow}+\tau_{\downarrow\uparrow}^{-1}.\]
This leads to the equation of motion for \(m\) as,
\[\frac{dm}{dt}=-(\tau_{\uparrow\downarrow}^{-1}+\tau_{\downarrow\uparrow}^{-1} )m+(\tau_{\downarrow\uparrow}^{-1}-\tau_{\uparrow\downarrow}^{-1}).\]
Substituting for \(\tau_{\uparrow\downarrow}^{-1}\) and \(\tau_{\downarrow\uparrow}^{-1}\), we get
\[\frac{dm}{dt}=2\tau_{0}^{-1}\exp\left(-\frac{\Delta_{0}}{k_{\rm B }T}\right)\cosh\left(\frac{m_{\rm s}B}{k_{\rm B}T}\right)\times\] \[\left[\tanh\left(\frac{m_{\rm s}B}{k_{\rm B}T}\right)-\langle m \rangle\right]. \tag{18}\]
Thus at equilibrium, _i.e._ when \(dm/dt=0\), we get \(m_{\rm eq}=\tanh(m_{\rm s}B/k_{\rm B}T)\) expected for this two state system. However, the rate at which the equilibrium is attained is dictated by \(2\tau_{0}^{-1}\exp(-\Delta_{0}/k_{\rm B}T)\cosh(m_{\rm s}B/k_{\rm B}T)\), which for \(|m_{\rm s}B|>>k_{\rm B}T\) will become \(\tau_{0}^{-1}\exp[-(\Delta_{0}\pm m_{\rm s}B)/k_{\rm B}T]\). Typical \(\tau_{0}\) values for magnetic systems are of \(\sim 10^{-9}\) s order and thus within the measurement time scale \(\tau_{\rm m}\sim 1\) s, the equilibrium is attained either for \(m_{\rm s}|B|\gtrsim\Delta_{0}\) or for \(T\gtrsim T_{\rm B}=\Delta_{0}/[k_{\rm B}\ln(\tau_{\rm m}/\tau_{0})]\). The former corresponds to the vanishing of the barrier between two states due to applied field and the latter defines the blocking temperature \(T_{\rm B}\) with hysteresis for \(T<T_{\rm B}\) and no hysteresis for \(T>T_{\rm B}\).
In order to model the temperature dependence of hysteresis with parameters relevant to the charge traps, we consider a superparamagnetic-like system but with a large \(\tau_{0}^{-1}\) and a large barrier \(\Delta_{0}\) such that \(\Delta_{0}>>m_{\rm s}B>>k_{\rm B}T\). Therefore, the barrier does not vanish at any typical applied \(B\) and in fact the barrier always dominates the energetics. We can solve Eq. 18 numerically for a time-dependent \(B\) changed at a constant rate and in a cycle. Starting from \(t=0\), where \(B=0\) and \(m=0\), \(B\) is ramped up to \(+B_{0}\) over a time \(\tau_{\rm m}/2\). It is then ramped down to \(-B_{0}\) and then again up to \(+B_{0}\), all at the same rate. We assume \(\Delta_{0}/k_{\rm B}=6500\) K, \(m_{\rm s}B_{0}/k_{\rm B}=500\) K
Figure 13: Calculated average magnetic moment (\(m\)) Vs applied field (\(B\)) at different temperatures. The field is swept in a cycle between \(\pm 500k_{\rm B}/m_{\rm s}\) at constant rate with total time \(2\tau_{\rm m}\) (see text for details). The dots in the inset show the variation of the difference in two \(m\) values at zero field, _i.e._\(\Delta m\), with temperature illustrating how the hysteresis peaks near 217 K. The continuous line in inset is the plot following from Eq. 19 with \(a=0.76\) and \(b=0.26\).
and \(\tau_{\rm m}/\tau_{0}=10^{13}\). This leads to \(m\) Vs \(B\) as plotted in Fig. 13. At low temperatures (say, 185 K) we see that both the response of \(m\) to the magnetic field and the hysteresis are negligible. As temperature rises, the response and hysteresis both increase but at high temperatures (say 250 K), the response is large but hysteresis vanishes. We use the difference \(\Delta m\) in \(m\) values at \(B=0\) during reverse and forward field change of the same cycle as a measure of the hysteresis. As seen in the inset of Fig. 13, this \(\Delta m\) exhibits a peak at the blocking temperature \(T_{\rm B}=\Delta_{0}/[k_{\rm B}\ln(\tau_{\rm m}/\tau_{0})]=217\) K.
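A sketch of this calculation is given below: it integrates Eq. 18 with forward Euler over the field cycle described above (with \(m\) normalized to \(m_{\rm s}\) and fields expressed through \(m_{\rm s}B/k_{\rm B}\) in kelvin) and reports the zero-field moment difference \(\Delta m\) used as the hysteresis measure.

```python
# Sketch: forward-Euler integration of Eq. 18 over the field cycle
# 0 -> +B0 -> -B0 -> +B0 at a constant sweep rate; parameters as in the text.
import numpy as np

D0, MSB0, TAU_M = 6500.0, 500.0, 1e13    # Delta_0/k_B, m_s*B0/k_B (K), tau_m in units of tau_0

def loop(T, nstep=50000):
    n1, n2 = nstep // 5, 2 * nstep // 5                      # path lengths: B0, 2*B0, 2*B0
    msb = MSB0 * np.concatenate([np.linspace(0, 1, n1),
                                 np.linspace(1, -1, n2),
                                 np.linspace(-1, 1, n2)])
    dt = 2.5 * TAU_M / (n1 + 2 * n2)                          # constant sweep rate throughout
    m, trace = 0.0, []
    for x in msb:
        rate = 2.0 * np.exp(-D0 / T) * np.cosh(x / T)         # prefactor of Eq. 18 (1/tau_0 units)
        m += dt * rate * (np.tanh(x / T) - m)                 # forward Euler step
        trace.append(m)
    m_down = trace[n1 + n2 // 2]                              # m at B = 0 on the decreasing branch
    m_up = trace[n1 + n2 + n2 // 2]                           # m at B = 0 on the increasing branch
    return m_down - m_up

for T in (185.0, 217.0, 250.0):
    print(f"T = {T:5.1f} K  ->  Delta m(B=0) ~ {loop(T):.2f}")
```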
The solid line in Fig. 13 inset, given by
\[\frac{\Delta m}{m_{\rm s}} = 2\left[1-\exp\left\{-\frac{\tau_{\rm m}}{2\tau_{0}}\exp\left(- \frac{\Delta_{0}-am_{\rm s}B_{0}}{k_{\rm B}T}\right)\right\}\right] \tag{19}\] \[\times\exp\left\{-\frac{\tau_{\rm m}}{2\tau_{0}}\exp\left(-\frac {\Delta_{0}+bm_{\rm s}B_{0}}{k_{\rm B}T}\right)\right\},\]
follows from an analytical model behavior with \(a\) and \(b\) as constants between zero and one. The forward (higher to lower energy state) relaxation rate at field \(B_{0}\) is \(\tau_{\rm f}^{-1}=\tau_{0}^{-1}\exp[-(\Delta_{0}-m_{\rm s}B_{0})/k_{\rm B}T]\) while the reverse rate is \(\tau_{\rm r}^{-1}=\tau_{0}^{-1}\exp[-(\Delta_{0}+m_{\rm s}B_{0})/k_{\rm B}T]\). The zero-field rate will be \(\tau_{\rm z}^{-1}=\tau_{0}^{-1}\exp[-\Delta_{0}/k_{\rm B}T]\). As per Eq. 18, when \(B\) is changed abruptly to \(B_{0}\) from zero, \(m\) will change from zero to \(m_{1}=\tanh(m_{\rm s}B_{0}/k_{\rm B}T)[1-\exp(-\tau_{\rm m}/2\tau_{\rm f})]\) in time \(\tau_{\rm m}/2\). We can take \(\tanh(m_{\rm s}B_{0}/k_{\rm B}T)=1\) for large \(B_{0}\). Now if \(B\) is abruptly made zero, \(m\) will become \(m_{2}=m_{1}\exp(-\tau_{\rm m}/2\tau_{\rm z})\) after time \(\tau_{\rm m}/2\). For a continuous ramp the actual relaxation rate will be in between \(\tau_{\rm f}^{-1}\) and \(\tau_{\rm z}^{-1}\) for the forward ramp and in between \(\tau_{\rm z}^{-1}\) and \(\tau_{\rm r}^{-1}\) for the return ramp, leading to \(0<a,b<1\). A similar excursion in \(B\) from zero to \(-B_{0}\) will lead to \(-m_{2}\) and thus \(\Delta m=2m_{2}\).
Spatio-spectral metrics in electron energy loss spectroscopy as a tool to resolve nearly degenerate plasmon modes in dimer plasmonic antennas

Michal HorΓ‘k, Andrea KoneΔnΓ‘, TomΓ‘Ε‘ Ε ikola, Vlastimil KΕΓ‘pek
###### Abstract
Electron energy loss spectroscopy (EELS) is often utilized to characterize localized surface plasmon modes supported by plasmonic antennas. However, the spectral resolution of this technique is only mediocre, and it can be rather difficult to resolve modes close in energy, such as coupled modes of dimer antennas. Here we address this issue for a case study of the dimer plasmonic antenna composed of two gold discs. We analyze four nearly degenerate coupled plasmon modes of the dimer: longitudinal and transverse bonding and antibonding dipole modes. With a traditional approach, which takes into account the spectral response of the antennas recorded at specific points, the modes cannot be experimentally identified with EELS. Therefore, we employ the spectral and spatial sensitivity of EELS simultaneously. We propose several metrics that can be utilized to resolve the modes. First, we utilize electrodynamic simulations to verify that the metrics indeed represent the spectral positions of the plasmon modes. Next, we apply the metrics to experimental data, demonstrating their ability to resolve three of the above-mentioned modes (with transverse bonding and antibonding modes still unresolved), identify them unequivocally, and determine their energies. In this respect, the spatio-spectral metrics increase the information extracted from electron energy loss spectroscopy applied to plasmonic antennas.
## I Introduction
Localized surface plasmons (LSP) emerge in metallic nanostructures, also known as plasmonic antennas (PAs), due to the coupling of surface charge in the metal and the related electromagnetic wave. They are manifested as broad spectral features in the optical response of PAs, observed e.g. by reflection and transmission spectroscopy, dark-field microscopy, scanning near-field optical microscopy, or electron beam spectroscopy. Scanning transmission electron microscopy (STEM) combined with electron energy loss spectroscopy (EELS) is an experimental technique allowing to characterize LSP in both the spatial and spectral domains. [1] Its spatial resolution (better than a nanometer) is excellent compared to the typical dimensions of PAs (tens to hundreds of nanometers) and decay lengths of LSP fields (hundreds of nanometers). However, the spectral resolution (around 100 meV, or around 10 meV for state-of-the-art instrumentation [2]) is rather low compared to linewidths of typical LSP resonances in the visible spectral range (around 100 meV).
In EELS, a monochromatized electron beam is transmitted through the sample. With a certain probability, an electron excites a LSP, losing characteristic energy. The transmitted electrons are subsequently characterized with a spectrometer and the loss probability is evaluated for each energy within the spectral region of interest. When the electron beam is scanned over the sample, the loss probability forms a three-dimensional EELS data cube (also referred to as the EELS spectrum image) with two spatial and one spectral dimensions. We note that it might be necessary to separate the contribution of LSP from the total spectrum, including bulk material losses and (nearly) elastically scattered electrons contributing to the zero-loss peak. When referring to the loss probability in the following, we will consider only the LSP contribution.
The loss probability \(\Gamma(\omega)\) is directly related to the electric component of the LSP field at the frequency \(\omega\) projected to the trajectory of the electron. More specifically, [3]
\[\Gamma(\omega)=\frac{e}{\pi\hbar\omega}\int\mathrm{d}t\,\mathrm{Re}\left\{ \exp(-\mathrm{i}\omega t)\mathbf{v}\cdot\mathbf{E}^{\mathrm{ind}}[\mathbf{r}_{ \mathrm{e}}(t),\omega]\right\} \tag{1}\]
where \(\mathbf{E}^{\mathrm{ind}}[\mathbf{r}_{\mathrm{e}}(t),\omega]\) is the field induced by the electron moving with the velocity \(\mathbf{v}\) at the position of the electron \(\mathbf{r}_{\mathrm{e}}(t)\). For the electron with the trajectory perpendicular to the sample (along the axis \(z\)) it is an out-of-plane component of the field (\(E_{z}\)) that is relevant for the interaction. In such a case, the electron trajectory is fully described by its \(x\) and \(y\) coordinates and the loss probability can be expressed as a three-dimensional data cube \(\Gamma(x,y,E)\), where the frequency of the field is replaced with the transferred energy \(E=\hbar\omega\) for convenience.
The loss probability has a close, although not always straightforward, relation to other quantities of interest, such as the (projected) photonic density of states [4; 5] or extinction cross-section. [6] For the quantitative reconstruction of the electric field of LSP modes, electron beam tomography utilizes an electron beam rotated over the sample to reconstruct the spatial distribution of the electric field. [7; 8; 9] Finally, with help of Babinet's principle it is possible to probe the magnetic component of the LSP field. [10; 11]
The most significant drawback of EELS is constituted by its mediocre spectral resolution, which makes it difficult to resolve spectrally overlapping LSP modes whose central energies are close to each other. A prototypical system supporting such overlapping LSP modes is a weakly coupled dimer PA consisting of two identical particles. The complexity of the loss spectrum is further increased when the components of the dimer themselves support overlapping modes, as is the case for disc-shaped particles. The lowest-order dipole mode in disc-shaped PAs is doubly degenerate. A nice illustration of this degeneracy is provided by the splitting of the mode observed when the symmetry of the disc is lowered e.g. by its gradual morphing into a triangle [12] or a crescent. [13] Upon the formation of the dimer, the two pairs of dipole modes hybridize into a new set of modes: longitudinal dipole bonding (LDB), longitudinal dipole antibonding (LDA), transverse dipole bonding (TDB), and transverse dipole antibonding (TDA) modes. [14; 11] The current oscillation for these modes is schematically shown in Fig. 1(c,d,e). Due to the interaction between the dipoles in the individual discs, the energies of the hybridized modes differ. Since the interaction between the longitudinal modes is stronger than between the transverse modes, a typical energy-ordering of the modes starting from the lowest energy one is LDB, TDB, TDA, and LDA.
Dimer PAs have been thoroughly studied both due to fundamental interest [14; 15; 16; 17; 18; 19; 11] and due to their intriguing applications in sensing, [17] enhanced optical response, [20] or strong light-matter coupling. [21] From our perspective, it is worth revisiting the ability of EELS to resolve the dipole modes in a dimer PA. Song _et al._[19] reported a detailed study systematically varying the gap between two gold discs and demonstrated the experimental resolving of the LDB and LDA modes only for strongly coupled discs with the gap of 10 nm or less. Koh _et al._[15] reported three nearly degenerate modes in a bowtie PA (a dimer of triangular prisms) with the gap between the prisms estimated to be about 25 nm. In this case, the modes are not experimentally resolvable, their identification is based on the simulations and even then it is rather illustrative. We note that their terminology for the modes differs from ours, with their dipolar bright, quadrupolar dark, and dipolar dark modes corresponding to ours LDB, a mixture of LD and TD, and LDA modes, respectively. Bitton _et al._[21] reported a silver bowtie PA with the gap of around 20 nm, and the LDB and LDA modes differing in energy by 0.3 eV. In this case, the modes were rather resolvable experimentally (also due to TD modes shifted to higher energies due to the low vertex angle of the bowtie) but their identification still required simulations. This overview, though not exhaustive, clearly demonstrates the lack of a methodology allowing to identify nearly degenerate LSP modes and to determine their energy solely from the experimental data.
In this manuscript, we address this issue by proposing several EELS metrics spanning both spatial and spectral degrees of freedom of a three-dimensional EELS data cube (containing the loss probability as a function of two spatial coordinates in the sample plane and one spectral coordinate). In a case study of a disc-shaped dimer PA, we test the performance of the metrics in the identification of the dipole modes. We demonstrate that the metrics allow to resolve and identify the LDB, TD (representing unresolved TDB and TDA), and LDA modes purely from the experimental data even for the gap size of 30 nm.
## II Methods
### Fabrication of plasmonic antenna
A gold disc-shaped dimer PA was fabricated on the substrate consisting of a silicon nitride membrane with the thickness of 30 nm and the lateral dimensions of \(250\times 250\)\(\upmu\)m\({}^{2}\). First, a gold layer with the thickness of 30 nm has been deposited by magnetron sputtering. The employed growth protocol produces a large-grain polycrystalline gold layer with optical properties comparable to monocrystalline gold in terms of Q factors of localized plasmon resonances. [22] The dimer was fabricated by focused ion beam milling (using Ga\({}^{+}\) ions at 30 keV) of the gold film in a dual beam system FEI Helios. The diameter of individual discs has been set to 275 nm and the edge-to-edge distance of the discs in the dimer has been set to 30 nm. The distance of the dimers from the boundary of the metal-free square has been at least 500 nm, which is a sufficient separation to prevent the interaction between the dimers or between the dimer and the surrounding metallic frame. [13] Annular dark-field STEM image of the dimer is shown in Fig. 4(a).
Prior to the EELS measurements, the sample was cleaned in an argon-oxygen plasma cleaner for 20 seconds to prevent carbon contamination. [23]
### Electron energy loss spectroscopy
EELS measurements were carried out in a scanning transmission electron microscope FEI Titan equipped with a GIF Quantum spectrometer. The microscope was operated at 120 kV in the scanning monochromated mode with the convergence semi-angle set to 10 mrad and the collection semi-angle set to 11.4 mrad. These settings are optimized for EELS characterization of PAs. [24] The dispersion of the spectrometer was set to 0.01 eV per channel and the full-width at half-maximum of the zero-loss peak was found in the range from 0.10 to 0.12 eV. The probe current was adjusted to around 100 pA. The acquisition time of every spectrum was set to 0.5 ms to use the full intensity range of the CCD camera in the spectrometer and avoid its overexposure. The spatial resolution of the EELS data cube is determined by the pixel
size, which was set to 5 nm. Such settings allowed the acquisition of one EELS data cube with a stable electron beam in a reasonable time.
The raw data containing electron counts recorded by the CCD shall be divided by the total number of electrons impinging the sample to obtain the loss probability, taking into account also the sensitivity of the detector. Since this approach is impractical, we utilize instead the electron counts of the zero-loss peak (with the energy integration window from \(-1\) eV to \(+1\) eV) to represent the total number of electrons. The division is then performed pixel-wise, i.e., independently for each position of the electron beam. In this way, we obtain the quantity proportional to the loss probability per channel, which is further divided by the energy interval of the channel (0.01 eV) to obtain the usual loss probability per electronvolt.
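For illustration, this pixel-wise normalization can be sketched in a few lines of numpy; the array shapes, the synthetic counts, and the exact energy axis below are assumptions made only for the sketch, not the actual acquisition pipeline.

```python
import numpy as np

# Hypothetical raw data cube: counts[iy, ix, ie], with an energy axis in eV.
ny, nx, ne = 64, 64, 400
energy = -1.5 + 0.01 * np.arange(ne)          # 0.01 eV per channel (assumed range)
d_e = 0.01
counts = np.random.poisson(10.0, size=(ny, nx, ne)).astype(float)

# Zero-loss peak counts per pixel, integrated from -1 eV to +1 eV,
# used as a proxy for the total number of electrons at that beam position.
zlp_mask = (energy >= -1.0) & (energy <= 1.0)
zlp = counts[:, :, zlp_mask].sum(axis=2)

# Loss probability per electronvolt: counts per channel / ZLP counts / channel width.
gamma = counts / zlp[:, :, None] / d_e
```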
Further processing depends on the utilization of the EELS data cube. For EEL maps [Fig. 3] and metrics [Figs. 5 and 6] we integrate the data over the energy window of 0.1 eV around the target energy to suppress the noise. The zero-loss peak and background are not subtracted. They are both assumed to be constant over the mapped area, the zero-loss peak due to the normalization, and the background due to the homogeneity of the membrane. The EEL spectra [Fig. 2] are integrated over tens of pixels to reduce the noise. Next, experimentally determined background (including zero-loss peak) is subtracted to obtain a pure contribution of LSP. The background is measured for the electron beam impinging a bare membrane far from any plasmonic antennas.
### Electromagnetic simulations
Electron energy loss spectra have been calculated with the boundary element method (BEM) using a software package MNPBEM. [25; 26] In all simulations, the dimers are represented by two gold discs of the height of 30 nm on top of a 30-nm-thick silicon nitride membrane. The diameters of the discs (275 nm) and their edge-to-edge distance (30 nm) were matching the values set in the fabrication process. The dielectric function of gold was taken from Ref. [27] and the dielectric constant of the silicon nitride membrane was set equal to 4, which is a standard approximation in the considered spectral region. [12]
## III Results and discussion
To facilitate the discussion of the spatial distribution of the loss probability we introduce a cartesian coordinate system with the axes \(x\) and \(y\) parallel with the transverse and longitudinal directions of the disc-shaped dimer, respectively, and the origin located in the middle of the gap between both discs. Further, we utilize two polar coordinate systems originating in the centers of the discs, with the zero angle corresponding to the center of the dimer, and increasing angle toward the right side of the discs (clockwise for the bottom disc and counter-clockwise for the top disc). All coordinates are represented in Fig. 1(a,b).
Fig. 1(c,d,e) shows intuitive schemes of the LDB, TD, and LDA modes. The modes are represented by two oscillating dipoles, one in each disc, oriented along the longitudinal (\(y\)) or transverse (\(x\)) direction. Current oscillates along the arrows, and charge accumulates near the boundaries of discs in the areas marked with \(+\) and \(-\) signs. According to Gauss law, this charge is a source of the electric field, which acts at the electron beam and results in the energy loss of the probing electrons. Consequently, the loss probability is large for the electron beam passing through the areas of accumulated charge. The only exception is the gap region of the LDB mode, where the opposite charges accumulated at the opposite sides of the gap between the discs effectively cancel each other due to the long-range character of Coulomb interaction with the probing electron, and the loss probability exhibits low values there. The regions of the large loss probability are displayed by red patches in Fig. 1(c,d,e). These regions are characteristic for each of the LDB, TD, and LDA modes (but not for the pair of TDB and TDA modes). The LDB mode features two loss-probability maxima along the longitudinal direction at the outer sides of the discs, the TD modes feature four maxima at the transverse sides of the discs, and the LDA mode
Figure 1: (a,b) Cartesian (a) and angular (b) coordinates utilized in the definition of spatio-spectral metrics. (c,d,e) Schematic representation of (c) longitudinal dipole bonding (LDB), (d) transverse dipole (TD), and (e) longitudinal dipole antibonding (LDA) modes. Arrows represent charge oscillations, \(+\) and \(-\) signs represent the areas of charge accumulation, and red patches represent areas of pronounced loss probability for the respective modes.
exhibits three maxima along the longitudinal direction (two at the outer sides of the discs and one in the gap region between the inner sides).
Loss probability \(\Gamma(x,y,E)\), a three-dimensional function of the electron beam position \(x,y\) and the loss energy \(E\), is typically visualized in the spectral domain as a spectrum for a specific beam position \(x_{0}\), \(y_{0}\),
\[\Gamma_{x_{0},y_{0}}(E)=\Gamma(x=x_{0},y=y_{0},E), \tag{2}\]
or in the spatial domain as a map for a specific loss energy \(E_{0}\),
\[\Gamma_{E_{0}}(x,y)=\Gamma(x,y,E=E_{0}). \tag{3}\]
The loss probability spectra obtained for our disc-shaped dimer and three distinct beam positions are shown in Fig. 2. To suppress the noise, the experimental spectra are integrated over multiple (several tens) pixels. The maps for several selected energies (later identified as the energies of LDB, TD, and LDA modes) are shown in Fig. 3. To suppress the noise, the experimental maps are integrated over multiple energy slices in the range 0.1 eV around the central energy (corresponding to 10 energy slices with our energy step of 0.01 eV).
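Assuming the normalized cube `gamma` and energy axis `energy` from the earlier sketch, the extraction of spectra [Eq. (2)] and energy-integrated maps [Eq. (3)] may be sketched as follows; the beam positions and the window width are illustrative assumptions.

```python
import numpy as np
# gamma, energy: the loss-probability cube and energy axis from the previous sketch

def eels_map(gamma, energy, e0, window=0.1):
    """Loss map at energy e0, averaged over an energy window of width `window` (Eq. 3)."""
    sel = np.abs(energy - e0) <= window / 2.0
    return gamma[:, :, sel].mean(axis=2)

def eels_spectrum(gamma, pixels):
    """Loss spectrum averaged over a list of (iy, ix) beam positions (Eq. 2)."""
    return np.mean([gamma[iy, ix, :] for iy, ix in pixels], axis=0)

map_at_1eV = eels_map(gamma, energy, e0=1.0)
spectrum_gap = eels_spectrum(gamma, [(32, 31), (32, 32), (32, 33)])
```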
The theoretical loss spectra [Fig. 2(a)] exhibit three well-separated peaks at the energies of 1.0 eV, 1.2 eV, and 1.4 eV. The peaks can be assigned to specific modes by a simple inspection of spatial maps. The peak at 1.0 eV with two longitudinal maxima [Fig. 3(c)] corresponds to the LDB mode, the peak at 1.2 eV with transverse maxima [Fig. 3(e)] corresponds to the TD modes, and the peak at 1.4 eV with two longitudinal maxima and one central maximum [Fig. 3(g)] corresponds to the LDA mode. The experimental spectrum [Fig. 2(b)] exhibits three quite resolvable features at the energies of 0.8 eV, 1.0 eV, and 1.2 eV. However, the related spatial maps do not clearly show features characteristic of the considered modes and cannot be utilized for unambiguous peak-to-mode assignment. A natural question then arises as to whether any information on the modes is present and can be extracted from the experimental loss probability \(\Gamma(x,y,E)\). We will show that the answer is positive when inspecting the loss probability in both spatial and spectral domains simultaneously.
One of the characteristic features of the considered modes is the spatial spread of the loss probability in the \(x\) and \(y\) (transverse and longitudinal, respectively) directions. This feature can be quantified by the so-called _absolute central moments_, similar to standard statistical moments, defined as follows. We first introduce the loss image weight \(w(x,y,E)\), a quantity proportional to the loss probability but normalized to unity when integrated over a certain area of interest: \(\int w(x,y,E)\,\mathrm{d}x\,\mathrm{d}y=1\). Clearly,
\[w(x,y,E)=\frac{\Gamma(x,y,E)}{\int\Gamma(x,y,E)\,\mathrm{d}x\, \mathrm{d}y}, \tag{4}\]
where the integration goes over the area of interest. For simulations, the area of interest is a full simulation area. For experimental data, though, we selected two rings concentric with the plasmonic discs with a width of 100 nm where the signal-to-noise ratio is high [see Fig. 4(b,c)]. Next, we define the central coordinate of the loss image as
\[x_{\mathrm{C}} =\int xw(x,y,E)\,\mathrm{d}x\,\mathrm{d}y, \tag{5}\] \[y_{\mathrm{C}} =\int yw(x,y,E)\,\mathrm{d}x\,\mathrm{d}y, \tag{6}\]
and the (energy-dependent) absolute central moments are defined as
\[M_{x}(E) =\int|x-x_{\mathrm{C}}|w(x,y,E)\,\mathrm{d}x\,\mathrm{d}y, \tag{7}\] \[M_{y}(E) =\int|y-y_{\mathrm{C}}|w(x,y,E)\,\mathrm{d}x\,\mathrm{d}y. \tag{8}\]
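A minimal numpy sketch of Eqs. (4)-(8) on a discretized energy slice is given below; the pixel coordinate axes and the ring-shaped area of interest are assumptions following the description above, not part of the original processing code.

```python
import numpy as np

def absolute_central_moments(gamma_e, x, y, mask=None):
    """Absolute central moments M_x, M_y (Eqs. 4-8) of one energy slice gamma_e[iy, ix].

    x, y: 1D coordinate axes (e.g. in nm); mask: boolean area of interest
    (e.g. the two rings around the discs), or None for the full frame."""
    X, Y = np.meshgrid(x, y)
    g = np.where(mask, gamma_e, 0.0) if mask is not None else gamma_e
    w = g / g.sum()                      # loss image weight, Eq. (4)
    xc = (X * w).sum()                   # Eq. (5)
    yc = (Y * w).sum()                   # Eq. (6)
    m_x = (np.abs(X - xc) * w).sum()     # Eq. (7)
    m_y = (np.abs(Y - yc) * w).sum()     # Eq. (8)
    return m_x, m_y

# The spectral dependence M_x(E), M_y(E) is obtained by applying the function
# to every energy slice of the data cube.
```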
Before inspecting the actual values of the moments, we shall qualitatively discuss their expected properties. The longitudinal spread of the loss function shall be larger for the LDB mode than for the LDA mode due to the presence of a strong central maximum for the latter with near
Figure 2: Loss probability spectra of the disc-shaped dimer PA obtained from the numerical simulation (a) and the experiment (b). The insets show the electron beam positions for which the spectra were recorded. The color of the spot representing the electron beam position is identical to the color of the corresponding spectrum: pink for the outer longitudinal sides, orange for the transverse sides, and green for the gap.
zero longitudinal (i.e., \(y\)) coordinate. Therefore, the longitudinal absolute central moment \(M_{y}(E)\) shall be larger at the energy of the LDB mode compared to that at the energy of the LDA mode. Similarly, the longitudinal spread and the moment \(M_{y}(E)\) of the TD modes shall be smaller than that of the LDB mode since the transverse maxima of the former are related with a lower longitudinal coordinate than the longitudinal maxima of the latter. We note that such a qualitative comparison is not possible for the TD and LDA modes. In the transverse direction, the TD mode shall feature a rather large transverse spread, accompanied by the large values of the transverse central moment \(M_{x}(E)\), presumably larger than for the LDB and LDA modes.
The spectral dependence of moments \(M_{x}(E)\), \(M_{y}(E)\) (normalized so that the maximum value in the inspected spectral region equals to unity) is shown in Fig. 5. The theoretical spectra are shown in the left panel. We will first discuss the longitudinal moment \(M_{y}(E)\). As expected, its large value at the energy 1.0 eV corresponding to the LDB mode gradually decreases over the energy 1.2 eV corresponding to the TD modes to the energy 1.4 eV corresponding to the LDA mode. Strikingly, the maximum and minimum of the longitudinal moment corresponds exactly to the energies of the LDB and LDA modes, respectively. This suggests that the energies of the maximum and minimum can be used as the operational definition for the LDB and LDA modes energy if no more straightforward measure is available (such as in the case of the experiment). Similarly, the maximum of the transverse central moment at the energy of 1.2 eV agrees with the energy of the TD modes determined from the loss spectra (cmp. Fig. 2 and related discussion) and can be used as the operational definition for the TD mode energy.
The experimental spectra of the moments correspond very well to the theoretical spectra, only with the reduced
Figure 4: (a) ADF STEM image of the disc-shaped dimer PA, with white/dark color representing gold/substrate. The red circles determine the boundaries of the discs utilized in the processing of the experimental loss probability. (b) A map of the loss probability at the energy of 1.0 eV. The black circles determine the boundaries of the discs and the gray circles mark the area with the high signal-to-noise ratio utilized for the calculation of the spatio-spectral metrics. (c) Same as panel b showing only the area with a high signal-to-noise ratio.
Figure 3: Loss probability maps of the disc-shaped dimer PA obtained from the numerical simulation (top) and the experiment (bottom) for several loss energies.
variance (3-5 times) over the examined spectral range, which is attributed to the experimental noise and/or background. This seemingly trivial statement represents an important finding. In contrast to theoretical maps of the loss probability, the experimental maps do not exhibit the apparent formation of the modes-identifying features such as the longitudinal, transverse, and central maxima. Still, the information allowing to identify the modes and to determine their energy can be extracted with judiciously defined metrics. The experimental energies of the LDB, TD, and LDA modes read 0.9 eV, 1.0 eV, and 1.2 eV, respectively (with a precision of 0.05 eV set by the half-width of the energy window used for the spectral integration of the experimental data), and correspond reasonably well to the positions of the peaks in Fig. 2(b) at 0.8 eV, 1.1 eV, and 1.3 eV. We suppose that the energies based on the absolute central moments are more relevant, since they take into account the spatial distribution of the loss probability, while the energies of the peaks in Fig. 2 are more prone to the experimental noise and possible incomplete background subtraction.
The spatial distribution of the loss probability can be inspected also in polar coordinates. A non-trivial question arises how to set the origin of the polar coordinate system. The examined modes of the dimer are formed through the hybridization of the modes of individual discs. Therefore, it is meaningful to inspect the mode of a specific disc in its local coordinate system related to that disc. We use the following approach. We utilize the same area of interest as in the case of absolute central moments [i.e., two rings concentric with the plasmonic discs with a width of 100 nm; see Fig. 4(b,c))]. We also obtain the loss image weight as the loss intensity normalized to the unit integral over the area of interest. The loss image weight in each ring of the area of interest is processed separately in a local coordinate system of the relevant disc shown in Fig. 1(b) (the pixels from the intersection of the rings are processed twice). The loss image weight \(w(x,y,E)\) is transformed to the (local) polar coordinates \(r\), \(\phi\),
\[w(x,y,E)\to w_{\rm U|L}(r,\phi,E), \tag{9}\]
and then integrated over the radial coordinate, leaving only angular dependence,
\[W_{\rm U|L}(\phi,E)=\int w_{\rm U|L}(r,\phi,E)\,\mathrm{d}r. \tag{10}\]
The indices \(U\) and \(L\) refer to the rings around the upper and lower disc, respectively, and the related local coordinate system. Finally, we define the zeros and positive directions of the angular coordinates in a way that respects the symmetry of the dimer. In this way, the angles 0, \(\pi/2\), \(\pi\), and \(3\pi/2\) correspond for both local coordinate systems to the center, right-hand (transverse) side, outer (longitudinal) side, and left-hand (transverse) side [see Fig. 1(b) and a related discussion]. This also allows us to inspect the total angular distributions of the loss image weight
\[W(\phi,E)=W_{\rm U}(\phi,E)+W_{\rm L}(\phi,E), \tag{11}\]
instead of individual components. For brevity, we will refer to the quantity \(W(\phi,E)\) as _angular loss weight_ in the following. Now, the \(W(\phi,E)\) shall feature maxima at the angle \(\pi\) for the LDB mode, the angles \(\pi/2\) and \(3\pi/2\) for the TD modes, and the angles 0, \(\pi\), and \(2\pi\) for the LDA mode.
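A possible discretized implementation of Eqs. (9)-(11) is sketched below. The disc centers, ring radii, and angular binning are assumptions, and the mirrored angular orientation of the two local frames is handled only schematically by measuring both local angles from the dimer center.

```python
import numpy as np

def angular_loss_weight(gamma_e, x, y, centers, r_in, r_out, nbins=36):
    """Angular loss weight W(phi) of one energy slice (Eqs. 9-11), summed over the
    two rings; centers = [(xc_upper, yc_upper), (xc_lower, yc_lower)]."""
    X, Y = np.meshgrid(x, y)
    hist = np.zeros(nbins)
    for xc, yc in centers:
        r = np.hypot(X - xc, Y - yc)
        ring = (r >= r_in) & (r <= r_out)
        # local polar angle, with zero pointing toward the dimer center (the origin)
        phi0 = np.arctan2(-yc, -xc)
        phi = np.mod(np.arctan2(Y - yc, X - xc) - phi0, 2.0 * np.pi)
        hist += np.histogram(phi[ring], bins=nbins, range=(0.0, 2.0 * np.pi),
                             weights=gamma_e[ring])[0]
    return hist / hist.sum()   # normalized like the loss image weight of Eq. (4)
```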
The angular loss weight calculated and experimentally retrieved for the dimer under study is shown in
Figure 5: Spectral dependence of the longitudinal absolute central moment \(M_{y}(E)\) (blue) and the transverse absolute central moment \(M_{x}(E)\) (red). The insets show the schemes of the LDB, TD, and LDA modes, and their position on the energy scale qualitatively corresponds to the actual mode energies. The colors of the patches representing the loss probability maxima correspond to their significance for the longitudinal (blue) and transverse (red) spectral features of the moments.
Fig. 6(a,b). The calculated angular loss weight evolves with the energy as expected, featuring a single peak at the angle of \(\pi\) corresponding to the LDB mode at 0.9 - 1.0 eV, two peaks at the angles of \(\pi/2\) and \(3\pi/2\) corresponding to the TD modes at 1.2 eV, and two peaks at the angles 0 (and also \(2\pi\)) and \(\pi\) corresponding to the LDA mode at 1.4 - 1.5 eV. The energies correspond well to the energies determined from the spectra (Fig. 2) and absolute central moments (Fig. 5). The experimental weight angular distribution is qualitatively similar, with the signatures of the LDB, TD, and LDA modes at the energies of 0.8 eV, 1.0 - 1.1 eV, and 1.2 - 1.3 eV, respectively. However, the distribution profiles are distorted. Most prominently, they feature a strong peak around 4 rad for all energies, originating in the irregular shape of the dimer [a protrusion at the left upper corner of the dimer with a strong loss intensity observable in Fig. 2(a-h)]. To compensate for such irregularities, we proposed a _renormalized_ total angular distribution of the loss weight
\[\tilde{W}(\phi,E)=W(\phi,E)/\sum_{E}W(\phi,E), \tag{12}\]
i.e., we divide the _raw_ weight angular distribution by the sum of the weight angular distributions over a range of energies. In this way, the renormalized weight \(\tilde{W}(\phi,E)\) has a sum over the energy independent of the angle, and all angular variations are related purely to the properties of a specific mode. The renormalized total angular distribution of the loss weight is shown in Fig. 6(c,d). For calculated data we observe only insignificant differences between raw and renormalized weights, supporting the feasibility of the renormalization. For experimental data, the renormalized distributions are more symmetric and have peaks closer to the expected angles. The energies of the modes estimated from the renormalized weights are identical to those estimated from raw weights.
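In a discrete setting, this renormalization amounts to a per-angle division by the energy-summed weight, e.g. (assuming the angular weights from the previous sketch have been stacked into an array over a grid of energies):

```python
# W_all[k, j]: raw angular loss weight at energy index k and angle bin j
W_renorm = W_all / W_all.sum(axis=0, keepdims=True)    # Eq. (12)
```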
The spatio-spectral metrics introduced in the
Figure 6: (a,b) Angular loss weight obtained from the calculated (a) and experimental (b) data, taken at the energy indicated by the numbers (in eV units). The solid lines correspond to the energies of 0.8 eV, 1.0 eV, 1.2 eV, and 1.4 eV. The dashed lines correspond to the energies of 0.9 eV, 1.1 eV, 1.3 eV, and 1.5 eV. (c,d,) Renormalized angular loss weight obtained from the calculated (c) and experimental (d) data.
manuscript predict the mode energies in agreement with each other as well as in agreement with EEL spectra, as summarized in Table 1. The metrics offer several benefits over the traditional EEL spectra. First, they are tailored to specific modes and describe their features in more detail than the spectra. Second, they utilize a larger part of the EELS data cube. In consequence, spatio-spectral metrics allow reliable identification of the nearly degenerate modes and a determination of their energy. With EEL spectra, this is possible using calculated data but not using noisy experimental data. Both metrics, the absolute central moment and the angular weight distribution, have their advantages and disadvantages. The absolute central moment is well suited for systems with naturally defined longitudinal and transverse directions, such as dimers or single rods. However, its predictive power will be limited for systems lacking such directions, e.g., triangular prisms or single spheres. On the other hand, the angular weight distribution is more general (as long as the natural central point of the system exists).
In addition to LSP modes, spatio-spectral metrics are applicable to other local excitations with non-trivial spectral and spatial dependence, such as the optical modes in dielectric nanoparticles [28] and localized vibrational modes supported by nanoparticles. [29; 30] With some adaptation, the metrics can be applied also to data with additional dimensions, as in the case of spectral, spatial and angular dependence of localized surface plasmon modes characterized by cathodoluminescence. [31; 32]
## IV Conclusion
We have addressed an issue of resolving nearly degenerate localized surface plasmons using electron energy loss spectroscopy. More specifically, we have studied four plasmon dipole modes supported by a dimer of plasmonic discs. An inspection of the experimental loss probability individually in the spectral domain (i.e., loss spectra) and the spatial domain (i.e., loss maps) allowed neither unequivocal identification of the modes nor the reliable determination of their energies. Consequently, we have proposed two spatio-spectral metrics defined over both spatial and spectral domains of the loss probability. With their help, the identification of the modes and their energy was made possible. As a side benefit, the metrics require only the rudimentary processing of raw data with no need for background subtraction. We are convinced that simultaneous inspection of the loss probability in both spatial and spectral domains opens the way for a more detailed and reliable data analysis and thus significantly enhances the capability of electron energy loss spectroscopy.
###### Acknowledgements.
We acknowledge the support by the Czech Science Foundation (grant No. 22-04859S), and by the Ministry of Education, Youth and Sports of the Czech Republic (project CzechNanoLab, No. LM2018110).
|
2305.01050 | SafeWebUH at SemEval-2023 Task 11: Learning Annotator Disagreement in
Derogatory Text: Comparison of Direct Training vs Aggregation | Subjectivity and difference of opinion are key social phenomena, and it is
crucial to take these into account in the annotation and detection process of
derogatory textual content. In this paper, we use four datasets provided by
SemEval-2023 Task 11 and fine-tune a BERT model to capture the disagreement in
the annotation. We find individual annotator modeling and aggregation lowers
the Cross-Entropy score by an average of 0.21, compared to the direct training
on the soft labels. Our findings further demonstrate that annotator metadata
contributes to the average 0.029 reduction in the Cross-Entropy score. | Sadat Shahriar, Thamar Solorio | 2023-05-01T19:30:32Z | http://arxiv.org/abs/2305.01050v1 | # SafeWebUH at SemEval-2023 Task 11: Learning Annotator Disagreement
###### Abstract
Subjectivity and difference of opinion are key social phenomena, and it is crucial to take these into account in the annotation and detection process of derogatory textual content. In this paper, we use four datasets provided by SemEval-2023 Task 11 and fine-tune a BERT model to capture the disagreement in the annotation. We find individual annotator modeling and aggregation lowers the Cross-Entropy score by an average of 0.21, compared to the direct training on the soft labels. Our findings further demonstrate that annotator metadata contributes to the average 0.029 reduction in the Cross-Entropy score.
## 1 Introduction
While the web space is inundated with derogatory textual content, the subjectivity of their interpretation frequently necessitates a system capable of capturing reader disagreements. The Learning-With-Disagreement (Le-Wi-Di) task involves learning annotators' disagreements based on how they categorize a text Leonardelli et al. (2023). Recent research has found that almost every annotation task contains a wide range of disagreements Dumitrache et al. (2019); Pavlick and Kwiatkowski (2019). The subjective and biased nature of the raters, among other elements of natural language comprehension, make it crucial to learn disagreements through annotations Uma et al. (2021). In this study, we compare two strategies of disagreement learning: Disagreement Targeted Learning of soft labels, and annotator-specific learning with Post Aggregation, using BERT model. Furthermore, we utilize annotator-specific metadata, to capture annotators' disagreements in disparaging content.
Since the advent of social media, which has flooded the web with massive amounts of content, the amount of offensive text, such as hate speech, misogyny, sexism, and abusive content, has also increased significantly. Several studies have been carried out to battle this problem: for example, Burnap and Williams studied online hate-speech in tweets triggered by the murder of Lee Rigby, a London-based drummer Burnap and Williams (2015). Xu et al. formulated cyber-bullying in social media as an NLP task Xu et al. (2012). Similar works are conducted in Warner and Hirschberg (2012); Silva et al. (2016); Gitari et al. (2015). However, tasks related to the detection of social phenomena, like offensiveness and toxicity, are often subjective in nature Kocon et al. (2021). A recent survey among American adults stated that according to half of the participants, "it is hard to know what others might find offensive", and the majority of them acknowledged there were disagreements in what is perceived as sexist or racist (pew, Accessed: 2022-12-03).
The four datasets in the Le-Wi-di task come with the annotator-specific labels, with aggregated hard labels (majority voting) and soft labels (average of the labels). Although a system for modeling disagreements should be trained to estimate soft labels, it is not clear, whether direct training on the soft label or aggregating on the annotator labels is a better approach. Hence, our first research question **(Q1)**: Can annotator-specific classification models, and post hoc aggregation outperform the direct approach of regression on soft labels in disagreement modeling? Additionally, we explore the annotator metadata which explains how an annotator labeled other related text, and we pose the question **(Q2)**: Can annotator metadata improve the disagreement modeling? To address these questions, we compare BERT-based disagreement-targeted learning (regression) and post-aggregation learning (classification) and explore different strategies for incorporating annotator metadata to model the disagreement. However, due to the inconsistency of
the annotators and lack of metadata, we limit our comparisons to two datasets only.
Our work has several important implications. To begin, our model's ability to capture conflicts makes it applicable to the modeling of controversial social phenomena and public opinions. Hence, it can be used to model ambiguity in textual content. Furthermore, our explorations of incorporating annotator metadata can help in understanding readers' perceptions and outlooks in different contexts. Finally, enhancing transparency and accountability among the raters can serve as a means of quality control in the multi-rater annotation process. The code for implementing our work is available here: [https://github.com/sadat1971/LeWi-Di-SemEval-23](https://github.com/sadat1971/LeWi-Di-SemEval-23)
## 2 Dataset and Task Description
SemEval'23 Task 11 has four datasets that deal with derogatory text. While three of the datasets are in English, _ArMIS_ is in Arabic. Along with soft and hard labels, each dataset contains some metadata. They are described below in brief.
The _MultiDomain Agreement_ (MD) dataset comes with tweets from three domains: BLM, Election and COVID-19 Leonardelli et al. (2021). A total of 819 annotators were used to label all the tweets using AMT. A random combination of five annotators was chosen to label each tweet for offensiveness. The train set contains 6,592 tweets, the dev set from the practice phase has 1,104 tweets, and the test set from the evaluation phase contains 3,057 tweets.
The _HS-Brexit_ dataset contains tweets related to Brexit, and annotation from six annotators (a target group of three Muslim immigrants in the UK and a control group of three) Akhtar et al. (2021). Each of them labeled a tweet for hate speech, which is the target class of the task. They also annotated tweets for being offensive and aggressive. The train, dev, and test set have 784, 168, and 168 tweets respectively.
Misogyny and Sexism are labeled in the _ArMIS_ dataset, rated by three annotators (Moderate Female, Liberal Female, and Conservative Male) Almanea and Poesio (2022). There are 657, 141, and 145 tweets in the train, dev, and test sets, respectively.
The _ConvAbuse_ dataset captures dialogues between a user and two conversational agents, and at least two annotators annotated the conversation for abusiveness Cercas Curry et al. (2021). The dataset also provides labels for a conversation being sexist, explicit, implicit, intellectual, racist, transphobic, homophobic, and the target of the abuse. The train, dev, and test set have 2,398, 812, and 840 tweets respectively.
## 3 System Description
For the textual data, we use a pretrained language representation model, called BERT Devlin et al. (2019). Since BERT is trained on English data only, to handle the ArMIS task we use Arabic-BERT Safaya et al. (2020). Figure 1 shows the system description. To address **Q1**, we compare two techniques, Post-Aggregation and Disagreement Targeted Learning, and we also investigate the effect of metadata to address **Q2**. The performance is measured by F1-score and Cross-Entropy (CE) score.
### Post-Aggregation
In the _Post-Aggregation_ (Post-Agg) approach, separate models are trained to learn the annotation pattern of each annotator. First, the BERT model is fine-tuned to learn the target class, and the softmax score \(S\) is obtained for all annotators. Next, we process the metadata to extract important information. For the HS-Brexit dataset, in addition to labeling for hate speech, each annotator also labeled tweets as offensive and aggressive, which is available with the dataset. We compute the probability of a tweet being labeled as hate speech, given how it is labeled by an annotator as offensive and aggressive, which we denote as \(P\). For each tweet, the soft label \(\hat{SL}\) is then computed as,
\[\hat{SL}(w)=\frac{1}{N}\sum_{i=1}^{N}\frac{S_{i}+w*P_{i}}{1+w} \tag{1}\]
where N is the number of annotators. Since both \(S_{i}\) and \(P_{i}\) are predicted soft labels, we find their weighted average and select \(w\), where the minimum CE score and maximum F1-score are obtained based on the dev set.
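A sketch of Eq. (1) is given below, assuming `S` and `P` hold the per-annotator softmax scores and metadata-conditioned probabilities for a single tweet; the search grid for \(w\) is an assumption.

```python
import numpy as np

def post_agg_soft_label(S, P, w):
    """Eq. (1): per-tweet soft label from the N annotator-specific models.

    S: softmax scores of the annotator-specific BERT models,
    P: metadata-conditioned hate-speech probabilities for the same annotators."""
    S, P = np.asarray(S), np.asarray(P)
    return np.mean((S + w * P) / (1.0 + w))

# w is selected on the dev set by minimizing CE / maximizing F1 (assumed grid)
candidate_w = np.linspace(0.0, 2.0, 21)
```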
### Disagreement Targeted Learning
While the Post-Agg approach considers learning from each annotator, the Disagreement Targeted Learning (Dis-Learning) approach learns only from the aggregated labels. First, a BERT model is fine-tuned using a downstream regression task of estimating the soft label, and the predicted variable,
\(SL_{BERT}\) is obtained. Next, we measure the average rating of each metadata field for all annotators across the entire dataset. For example, in the HS-Brexit dataset, if two annotators label a tweet as offensive and four as not offensive, the average metadata (offensiveness score) for that tweet will be \(2/6=0.33\). Next, we train a linear regression model to predict the soft label based only on the available average metadata rating.
\[SL_{meta}=b_{0}+b_{1}*M_{1}+b_{2}*M_{2} \tag{2}\]
\(b_{0},b_{1},b_{2}\) are trained from the linear regression model, and \(M_{1}\) and \(M_{2}\) are two metadata scores. For HS-Brexit, we use average offensive and aggressive measures. For the ConvAbuse dataset, out of twelve metadata labels, we pick the top two, _explicit_ and _target system_ which yielded the best correlation coefficient with the soft label values. Finally, we find \(SL\) by averaging \(SL_{BERT}\) and \(SL_{meta}\).
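A possible sketch of this second stage with scikit-learn is shown below; the variable names are hypothetical, and the BERT regression head producing \(SL_{BERT}\) is omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def dis_learning_soft_label(M_train, sl_train, M_test, sl_bert_test):
    """Fit Eq. (2) on the average metadata scores (two columns, e.g. offensive
    and aggressive rates) and average the result with the BERT regression output."""
    reg = LinearRegression().fit(M_train, sl_train)   # learns b0, b1, b2
    sl_meta = reg.predict(M_test)
    return (np.asarray(sl_bert_test) + sl_meta) / 2.0  # final soft label SL
```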
## 4 Experimental Set-up
All of our models use the "bert-base-uncased" version of BERT ("bert-base-arabic" in ArMIS). We deploy a two-layered fully-connected network for fine-tuning in both regression and classification tasks. We choose the hyper-parameters from all the combinations by three-fold cross-validation in the practice phase, and on the released validation set in the evaluation phase. The hidden size and dropout rate are chosen from {32, 64, 128, 256} and {.1,.3,.5}. The learning rate is chosen from {5e-4, 1e-5, 5e-5, 1e-6}. We keep the batch size small due to the GPU limitations and choose from {8, 16}. Since BERT models quickly overfit on the data, we kept the epoch size between 2 and 4. However, for "arabic-bert-base", the performance was unstable, and we train up to 10 epochs. For all cases, AdamW is used as the optimizer [16]. For all our experiments, Pytorch version 1.11.0 is used [10].
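For illustration, a fine-tuning head of the kind described above might look as follows in PyTorch/Transformers; the layer names, the use of the pooled output, and the default hyper-parameters shown are assumptions, not the exact implementation.

```python
import torch.nn as nn
from transformers import AutoModel

class BertSoftHead(nn.Module):
    """BERT encoder with a two-layer fully connected head; n_out=1 for the soft-label
    regression (Dis-Learning), n_out=2 for the per-annotator classification (Post-Agg)."""
    def __init__(self, name="bert-base-uncased", hidden=128, dropout=0.3, n_out=1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(self.encoder.config.hidden_size, hidden),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, n_out),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.pooler_output)
```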
To evaluate the result of capturing disagreement, we use the Cross-Entropy score provided by the competition. If the target soft label is \(T\), and predicted soft label is \(P\), for a dataset of size \(D\), the Cross-Entropy (CE) is computed as:
\[CE=-\frac{1}{D}\sum_{i=1}^{D}T_{i}*log(P_{i}+1e-9) \tag{3}\]
We further report the F1-score (micro) on the hard label to evaluate the model performance on the majority-voted final prediction task.
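A sketch of both evaluation measures is given below: the CE of Eq. (3) and the micro F1 computed on thresholded hard labels (the 0.5 threshold is an assumption).

```python
import numpy as np
from sklearn.metrics import f1_score

def cross_entropy(T, P, eps=1e-9):
    """Eq. (3): CE between target soft labels T and predicted soft labels P."""
    T, P = np.asarray(T, float), np.asarray(P, float)
    return -np.mean(T * np.log(P + eps))

def micro_f1(T, P, threshold=0.5):
    """Micro F1 on the majority-voted hard labels."""
    return f1_score(np.asarray(T) >= threshold, np.asarray(P) >= threshold,
                    average="micro")
```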
## 5 Result and Discussion
Table 1 shows that for the HS-Brexit dataset, the Post-Agg approach does not improve the F1-score over the Dis-Learning approach. However, the Post-Agg approach is able to reduce the CE score by 0.1400 compared to the Dis-Learning approach. The reduction is even larger when metadata is used (by 0.1958). Similarly, for the ArMIS dataset, the Dis-Learning approach has a higher F1-score than the Post-Agg approach, while the CE score is lower in the Post-Agg approach (by 0.3070).
We further investigate why the Post-Agg approach works better at capturing disagreement. Since the Dis-Learning approach does not take into
Figure 1: The text is fed to a pretrained BERT model, and fine-tuned for the downstream task. For the Post-Aggregation approach, the downstream classification task of the BERT model is to predict the label for each annotator. The softmax value from each annotator model with the metadata is ensembled to produce the annotator-specific soft-label and hard-label prediction. For the Disagreement Targeted Learning approach, the BERT downstream task is to directly learn the soft labels. The ensemble mechanism is performed by regression approach to learn the final prediction.
account individual annotators, it mainly approximates the "intensity" of a text being derogatory. Conversely, the Post-Agg approach considers each annotator separately and learns their annotation pattern, which is aggregated afterward. Consequently, Dis-Learning has to depend only on textual data, making its job harder than Post-Agg. However, in a realistic case, the annotators may not be consistent (as in the MD and ConvAbuse datasets), or a large number of models would need to be trained, rendering the Post-Agg technique infeasible. Therefore, the Post-Agg approach is better suited for modeling disagreement if a small number of annotators are consistent across the dataset. Hence, **Q1** is addressed.
Next, the results reveal that performance is enhanced when annotator metadata is utilized as opposed to when it is not (Table 1). Using the metadata reduced the CE score for the HS-Brexit dataset by 0.0852 and 0.0294 for the Post-Agg and Dis-Learning approaches, respectively. Similarly, for the ConvAbuse dataset, annotator metadata helps lower the CE score by 0.1676. The metadata contains useful annotation patterns of the annotators, which ameliorates the learning process. Notably, we have not used the metadata from MD and ArMIS, since they do not contain the related annotation information from the annotators. Therefore, **Q2** is addressed.
In the MD, HS-Brexit, ArMIS, and ConvAbuse datasets, our results were ranked 7th, 9th, 11th, and 12th, respectively. Overall, we ranked 9th in the CE score category and 8th in the F1-score category.
Error AnalysisFinally, we focus on the error analysis of this study. We find that both our approaches often make mistakes in prediction for the texts that do not use slang or curse words but are still voted by the majority as offensive. For example, three of the five annotators annotated the following sentence as offensive (soft label 0.60): _#TonyBobulinski #MAGA2020 #MAGA #ChangeYourVoteToTrump #BidenCrimeFamily #BidenHarris2020 #BidenCares #LaptopFromHell Joe is going down. <url>_. However, our model predicts the soft label as 0.15. Similarly, tweets that contain curse words but do not necessarily exhibit offensiveness, are sometimes mistaken by our model as hate speech. For example, the tweet: _Atoounding Words from the prolific and talented - <user> #BlackLivesMatter #flucktrump <url>_ is labeled as non-offensive by three annotators out of five, however, our model predicts the soft label as 0.85, due to the presence of profane language in one of the hashtags.
## 6 Related Works
Though the majority of AI learning still operates under the assumption that a single interpretation exists for each item, research is growing to build learning methods that do not rely on this assumption [17]. Rater disagreement is a familiar phenomenon in Natural Language Processing [14, 15]. The disagreement may take place because of annotator error or interface problems [10], explicit or implicit ambiguity [14], item difficulty [11], and subjectivity [1]. Tasks ranging from simpler ones such as POS tagging [10] to subjective ones like sentiment analysis and semantic role assignment also involve rater disagreement [12, 13]. Hence, researchers have argued for taking disagreement into account during the labeling process and retaining the implicit ambiguity [15, 16].
\begin{table}
\begin{tabular}{l|c c|c c||c c c|c c} \cline{2-10} & \multicolumn{2}{c}{**Post-Agg**} & \multicolumn{2}{c||}{**Post-Agg-meta**} & \multicolumn{2}{c}{**Dis-Learning**} & \multicolumn{2}{c}{**Dis-Learning-meta**} \\ \hline
**Dataset** & F1 & CE & F1 & CE & F1 & CE & F1 & CE \\ \hline \hline
**MD** & β & β & β & β & **0.8266*** & **0.5076*** & β & β \\
**HS-Brexit** & 0.8810 & 0.1686 & **0.9167** & **0.0834** & 0.8869 & 0.3086 & 0.9107* & 0.2792* \\
**ArMIS** & 0.7211 & **0.2683** & β & β & **0.7586*** & 0.5753* & β & β \\
**ConvAbuse** & β & β & β & β & 0.9321* & 0.2364* & **0.9667** & **0.0688** \\ \hline \end{tabular}
\end{table}
Table 1: Comparing the performances of disagreement modeling approaches in all four datasets. Because the MD and ConvAbuse datasets lack consistent annotators, their Post-Agg results are not reported. Also, MD and ArMIS lack annotator-specific metadata, and thus, their metadata-incorporated performance is not reported as well. The best performance in each dataset is denoted in bold numbers, whereas the performance submitted in the Le-Wi-Di task is indicated with asterisks (*).
To this end, we explore the Learning-With-Disagreement task for derogatory text.
The previous version of this competition was launched in 2021, where the organizers used natural language and image-classification tasks to address disagreement in the labeling Uma et al. (2021). The winning team used the Sharpness-Aware Minimization (SAM) technique and a special NN layer called the softmax Crowd-layer, with BERT as the baseline model Osei-Brefo et al. (2021). While the SAM architecture was mainly used for CIFAR-10 (image classification), the Crowd-layer architecture aims to map the labels to each individual annotator. Since the current competition only involves text, we fine-tune a BERT model and use the annotator metadata to capture the disagreement.
## 7 Conclusion
Because of the proliferation of social media content, the internet has become a breeding ground for derogatory text. However, due to the differences in human perception and opinion, often there is no unanimous consensus among the annotators about the text being derogatory or not. Hence, it is imperative to store the soft labels and capture annotator disagreement in the modeling process. Our work compares the direct training on the soft label with the annotator-specific model and post-aggregation. We find that with the presence of consistent annotators, it might be helpful to take the latter approach. In addition, integrating annotator metadata has been proved to be beneficial in our experiments. Our work has a wide variety of potential future research directions, such as:
* We only modeled with one Transformer-based approach, BERT. In the future, we plan to use RoBERTa, ELECTRA, and XLMNet.
* We find a strong correlation between hate speech and offensiveness. Therefore, we plan to investigate how cross-dataset performance works. Such experiments will also help to make our model more generalizable.
* Because language evolves in response to social context and other phenomena, it is critical to include Continual Learning (CL) techniques and investigate the distribution shift in the annotation process. In the future, we intend to incorporate CL into our work.
|
2304.01379 | Extinction time of an epidemic with infection age dependent infectivity | This paper studies the distribution function of the time of extinction of a
subcritical epidemic, when a large enough proportion of the population has been
immunized and/or the infectivity of the infectious individuals has been
reduced, so that the effective reproduction number is less than one. We do that
for a SIR/SEIR model, where infectious individuals have an infection age
dependent infectivity, as in the model introduced in the 1927 seminal paper of
Kermack and McKendrick. Our main conclusion is that simplifying the model as an
ODE SIR model, as is largely done in the epidemics literature, introduces a
bias toward shorter extinction time. | Anicet Mougabe-Peurkor, Ibrahima Drame, Modeste N'zi, Etienne Pardoux | 2023-04-03T21:11:38Z | http://arxiv.org/abs/2304.01379v1 | # Extinction time of an epidemic with infection age dependent infectivity
###### Abstract
This paper studies the distribution function of the time of extinction of a subcritical epidemic, when a large enough proportion of the population has been immunized and/or the infectivity of the infectious individuals has been reduced, so that the effective reproduction number is less than one. We do that for a SIR/SEIR model, where infectious individuals have an infection age dependent infectivity, as in the model introduced in the 1927 seminal paper of Kermack and McKendrick [9]. Our main conclusion is that simplifying the model as an ODE SIR model, as is largely done in the epidemics literature, introduces a bias toward shorter extinction time.
**Keywords**: Epidemic model; Branching process; Extinction time; Infection age dependent infectivity; ODE SIR model; Effective reproduction number.
## 1 Introduction
Consider an epidemic which is declining: the number \(M\) of infected individuals is moderate, and decreases, while the total size \(N\) of the population is much larger. In such a phase, the approximation by the deterministic model is no longer valid. Rather, as the initial phase of an epidemic, the final phase can be well approximated by a branching process, in this case a subcritical branching process. The extinction time is thus random. It is of interest to have some information on the distribution function of this extinction time. Indeed, if the subcriticality is due in part to some rules imposed to the population, like mask wearing in public transport, classrooms, workplace, theaters etc., it is important to evaluate how long such rules must be maintained.
Our epidemic model is a SIR/SEIR model, i.e. we assume that after having been infected and having recovered, an individual remains immune to the disease for ever. This is not quite realistic. However, if the duration of the studied period is not too long, then the number of individuals who lose their
immunity during that period can be neglected. On the other hand, the stochastic SIR/SEIR model upon which we base our analysis is non Markov. Following the ideas of Kermack-McKendrick [9] and Forien, Pang, Pardoux [5], we consider a model where the infectivity of each infectious individual is infection age dependent (and random, the realizations corresponding to various individuals being i.i.d.). We characterize the distribution function of the extinction time of the approximating non Markov branching process with a single ancestor as the unique solution of a Volterra-type integral equation, for which we give a converging numerical approximation. The derivation of the equation is based upon a methodology introduced by Crump and Mode [3]. From this result, we deduce in Theorem 3.5 a formula for the time we have to wait after \(t_{0}\) for the epidemic to go extinct, if at time \(t_{0}\) we have \(M\) infected individuals in a population of size \(N\), with \(M<<N\).
With the help of a numerical scheme, we compute an approximation of the distribution function of the time of extinction, and compare the result with the distribution function of the extinction time of a Markov branching process which approximates the classical Markov SIR model (whose law of large numbers limit is the most standard SIR ODE model), which is known explicitly. This comparison is done between two models which have both the same effective reproduction number \(R_{eff}\) (the mean number of "descendants" one infectious individual has at this stage of the epidemic), and the same rate \(\rho\) of continuous time exponential decrease. Our conclusion is that the usual ODE SIR model leads to an underestimation of the extinction time.
Our work was inspired by the recent work of Griette et al. [7], where the authors neglect the new infections during the final phase. Note that this approximation is justified by the data, in the case of the end of the Covid epidemic in Wuhan in 2020. Our work does not make such a simplifying assumption, and allows a very general law for the varying infectivity and a completely arbitrary law for the duration of the infectious period.
The paper is organized as follows. We present our varying infectivity SIR model in Section 2, together with its branching process approximation. In Section 3, we study the distribution function of the extinction time of the branching process. In section 4, we present several examples of SIR/SEIR models, including the classical ODE SIR model, ODE SEIR model, and we specify the type of varying infectivity which we have in mind. In section 5, we compare the time of extinction of the branching approximations to our varying infectivity model, and to the ODE SIR model. In section 6, we discuss the results obtained in that comparison. Finally in section 7 (the Appendix), we establish the convergence of a numerical approximation scheme of the equation established in section 3.
**Notations** In what follows we shall use the following notations: \(\mathbb{Z}=\{...,-2,-1,0,1,2,...\}\), \(\mathbb{R}=(-\infty,\infty)\), \(\mathbb{R}_{+}=[0,\infty)\) and \(\mathbb{R}_{-}=(-\infty,0]\). For \(x\in\mathbb{R}_{+}\), \([x]\) denotes the integer part of \(x\) and \(\lceil x\rceil\) (resp. \(\lfloor x\rfloor\)) denotes the ceiling function (resp. the floor function). For \(x\in\mathbb{R}\), \(x^{+}\) (resp. \(x^{-}\)) denotes the positive part of \(x\) (resp. the negative part of \(x\)). For \((a,b)\in\mathbb{R}^{2}\), \(a<b\), \(\mathcal{U}([a,b])\) denotes the uniform distribution on \([a,b]\). \(D([0,\infty))\) denotes the space of functions from \([0,\infty)\) into \(\mathbb{R}\) which are right continuous and have left limits at any \(t>0\). We shall always equip the space \(D([0,\infty))\) with the Skorohod topology, for the definition of which we refer the reader to Billingsley [2].
## 2 The SIR model with Varying Infectivity
### The epidemic model
Let \(\{\lambda_{j}(t),\;t\geq 0\}\), \(j\in\mathbb{Z}\backslash\{0\}\) be a collection of mutually independent non-negative functions, which are such that the \(\{\lambda_{j}\}_{j\geq 1}\) are identically distributed, as well as the \(\{\lambda_{j}\}_{j\leq-1}\). We assume that these functions belong \(a.s.\) to \(D([0,\infty))\). We consider a SIR model which is such that the \(j\)-th initially infected individual has the infectivity \(\lambda_{-j}(t)\) at time \(t\), while the \(j\)-th individual infected after time \(0\) has at time \(t\) the infectivity \(\lambda_{j}(t-\tau_{j})\), if \(0<\tau_{1}<\cdots<\tau_{\ell}<\cdots\) denote the successive times of infection in the population. The quantity \(t-\tau_{j}\) is the age of infection of individual \(j\) at time \(t\). Note that we assume that \(\lambda_{j}\) vanishes on \(\mathbb{R}_{-}\). The newly infected individual is chosen uniformly at random in the population, and if that individual is susceptible, then it jumps from the S to the I compartment at its time of infection, while nothing happens if the individual is not susceptible. Examples of functions \(\lambda_{j}(t)\) will be given below. That function can be first zero during the exposed period, then the individual becomes infectious, and at age of infection \(\eta_{j}=\sup\{t,\;\lambda_{j}(t)>0\}\), the individual recovers (i.e. jumps into the R compartment) and is immune for ever. Clearly an important quantity is the total force of infection in the population at time \(t\): \(\bar{\mathfrak{F}}^{N}(t)\), which is the sum of all the infectivities of the infected individuals at that time. Here \(N\) is the total number of individuals in the population. The sum of the numbers of individuals in the three compartments is constant in time: \(S^{N}(t)+I^{N}(t)+R^{N}(t)=N\) for all \(t\geq 0\). For \(X=S,\bar{\mathfrak{F}},I,\) or \(R\), we define the renormalized quantity \(\bar{X}^{N}(t)=X^{N}(t)/N\). The main result of [5] is that as \(N\to\infty\), \((\bar{S}^{N}(t),\bar{\mathfrak{F}}^{N}(t),\bar{I}^{N}(t),\bar{R}^{N}(t))\to(\bar{S}(t),\bar{\mathfrak{F}}(t),\bar{I}(t),\bar{R}(t))\), where the limit is the unique solution of the following system of integral equations, which already appears in the seminal paper of Kermack and McKendrick [9]:
\[\begin{cases}\bar{S}(t)&=\bar{S}(0)-\int_{0}^{t}\bar{S}(s)\bar{\mathfrak{F}}(s)ds,\\ \bar{\mathfrak{F}}(t)&=\bar{I}(0)\bar{\lambda}^{0}(t)+\int_{0}^{t}\bar{\lambda}(t-s)\bar{S}(s)\bar{\mathfrak{F}}(s)ds,\\ \bar{I}(t)&=\bar{I}(0)F_{0}^{c}(t)+\int_{0}^{t}F^{c}(t-s)\bar{S}(s)\bar{\mathfrak{F}}(s)ds,\\ \bar{R}(t)&=\bar{R}(0)+\bar{I}(0)F_{0}(t)+\int_{0}^{t}F(t-s)\bar{S}(s)\bar{\mathfrak{F}}(s)ds,\end{cases} \tag{2.1}\]
where \(\bar{\lambda}^{0}(t)=\mathbb{E}[\lambda_{-1}(t)]\) and \(\bar{\lambda}(t)=\mathbb{E}[\lambda_{1}(t)]\), \(F_{0}\) (resp. \(F\)) is the distribution function of \(\eta_{-1}\) (resp. of \(\eta_{1}\)) and \(F_{0}^{c}(t)=1-F_{0}(t)\), \(F^{c}(t)=1-F(t)\). This convergence holds true provided that \(\lambda\in D\) a.s. and for some \(\lambda^{*}>0\), \(0\leq\lambda_{j}(t)\leq\lambda^{*}\) a.s. for all \(j\in\mathbb{Z}\) and \(t\geq 0\), see [6]. The original proof in [5] puts more restrictions on \(\lambda\).
### The branching process approximation
Suppose that at time \(t_{0}\) only a moderate number \(M\ll N\) of individuals in the population is infected, and the mean number \(R_{eff}=\bar{S}(t_{0})\int_{0}^{\infty}\bar{\lambda}(t)dt\) of individuals which an infected individual infects satisfies \(R_{eff}<1\). Then the epidemic is declining. It can be well approximated by the following non Markovian continuous time branching process. We will study its extinction time in the next section, and deduce a good approximation of the time we have to wait after \(t_{0}\) for the epidemic to go extinct. Note that we approximate the proportion \(\bar{S}(t)\) by \(\bar{S}(t_{0})\), for any \(t\geq t_{0}\).
We consider the branching process \(Z(t)\) with (to start with) a single ancestor at time \(0\). The \(j\)-th individual in the population, independently from all other individuals, lives for a duration \(\eta_{j}\), and during its life time gives birth to children (one at a time) at rate \(\bar{S}(t_{0})\lambda_{j}(t)\), where \((\eta_{j},\lambda_{j})\) are as
above. This is clearly a non Markov continuous time branching process, which belongs to the class of Crump-Mode-Jagers branching processes.
## 3 The extinction time of the branching process associated to the varying infectivity model
From now on, \(t\geq 0\) stands for \(t-t_{0}\) (\(t\geq t_{0}\)). We define \(\hat{\lambda}(t):=\bar{S}(t_{0})\lambda(t)\). Let \(Z(t)\) denote the number of descendants at time \(t\) of an individual born (i.e. infected) at time \(0\), in the continuous time branching process which describes the number of infected individuals at time \(t\). This ancestor infects susceptible individuals during the time interval \([0,\eta]\), at the random and varying rate \(\hat{\lambda}(t)\). His descendants have the same behaviour, each one independently from all the others.
In this paper, we make the following assumption on the infectivity function.
**Assumption (H)** We shall assume that there exists a constant \(\lambda^{*}>0\) such that
\[\lambda(t)\leq\lambda^{*}\mbox{ almost surely, for all }t\geq 0.\]
Let \(T_{ext}=\inf\{t>0:Z(t)=0\}\) denote the extinction time of the epidemic, \(G(s,t)=\mathbb{E}\left(s^{Z(t)}\right)\), \(|s|\leq 1\), denote the probability generating function of \(Z(t)\) and \(F(t)=G(0,t)\) the distribution function of the extinction time.
### Distribution function of the extinction time
In this subsection, we will characterize the distribution function of the extinction time of \(Z\) as the unique solution of an integral equation. To this end, we imitate the computations done in the proof of Theorem 4.1 in [4]. We first start by determining the generating function \(G(s,t)\) of \(Z\) in order next to deduce the distribution function of the extinction time.
Denote by \(Z_{0}(t)\) the number of descendants of the ancestor at time \(t\), and for \(j\geq 1\), by \(Z_{j}(t)\) the number of descendants of the \(j\)-th direct descendant of the ancestor, at time \(t\) after its birth. Then \(\{Z_{j}(\cdot),j\geq 0\}\) is a sequence of independent and identically distributed (i.i.d.) random processes which have the law of \(Z\). In order to simplify our notations, we will write \(\hat{\lambda}_{0}\) (resp. \(\eta_{0}\)) for the value of \(\hat{\lambda}\) (resp. \(\eta\)) associated with \(Z_{0}\). Formula (3.1) from [3] reads
\[Z_{0}(t)=\mathds{1}_{\eta_{0}>t}+\sum_{j=1}^{Q_{0}(t)}Z_{j}(t-t^{j}), \tag{3.1}\]
where \(Q_{0}(t)\) is the number of direct descendants of the ancestor born on the time interval \((0,t]\). Moreover, \(Q_{0}(t)\) is a counting process which, conditionally upon \(\hat{\lambda}_{0}(\cdot)\), is a nonhomogeneous Poisson process with varying intensity \(\hat{\lambda}_{0}(t)\), and \(0<t^{1}<t^{2}<\cdots\) are the successive jump times of the process \(Q_{0}(t)\).
We have
**Proposition 3.1**: _The probability generating function \(G\) satisfies the following integral equation_
\[G(s,t)=\mathbb{E}\left[s^{\mathds{1}_{\eta>t}}\exp\left\{\int_{0}^{t}\left(G( s,t-u)-1\right)\hat{\lambda}(u)du\right\}\right].\]
**Proof.** Since \(Z\) has the same law as \(Z_{0}\), we first compute \(\mathbb{E}\left[s^{Z_{0}(t)}|\hat{\lambda}_{0}\right]\) in order to deduce the value of \(G\). From (3.1), we deduce that
\[\mathbb{E}\left[s^{Z_{0}(t)}|\hat{\lambda}_{0}\right]= \sum_{k=0}^{\infty}s^{\mathds{1}\eta_{0>t}}\mathbb{P}(Q_{0}(t)=k |\hat{\lambda}_{0})\mathbb{E}\left\{\prod_{j=1}^{k}s^{Z_{j}(t-t^{j})}\Big{|}Q_{ 0}(t)=k,\hat{\lambda}_{0}\right\}\] \[= \sum_{k=0}^{\infty}s^{\mathds{1}\eta_{0>t}}\mathbb{P}(Q_{0}(t)=k |\hat{\lambda}_{0})\mathbb{E}\left\{\prod_{j=1}^{k}G(s,t-t^{j})\Big{|}Q_{0}(t)= k,\hat{\lambda}_{0}\right\}\] \[= \sum_{k=0}^{\infty}s^{\mathds{1}\eta_{0>t}}\mathbb{P}(Q_{0}(t)= k|\hat{\lambda}_{0})\frac{k!}{\Big{(}\int_{0}^{t}\hat{\lambda}_{0}(v)dv\Big{)}^{k}}\times\] \[\int_{0}^{t}\int_{0}^{u_{k}}...\int_{0}^{u_{2}}\prod_{j=1}^{k}G(s,t-u_{j})\hat{\lambda}_{0}(u_{1})...\hat{\lambda}_{0}(u_{k})du_{1}...du_{k}\] \[= s^{\mathds{1}\eta_{0>t}}\exp\Bigg{(}-\int_{0}^{t}\hat{\lambda}_{ 0}(v)dv\Bigg{)}\times\] \[\sum_{k=0}^{\infty}\int_{0}^{t}\int_{0}^{u_{k}}...\int_{0}^{u_{2} }\prod_{j=1}^{k}G(s,t-u_{j})\hat{\lambda}_{0}(u_{1})...\hat{\lambda}_{0}(u_{k })du_{1}...du_{k}\] \[= s^{\mathds{1}\eta_{0>t}}\exp\Bigg{(}-\int_{0}^{t}\hat{\lambda}_{ 0}(v)dv\Bigg{)}\sum_{k=0}^{\infty}\frac{1}{k!}\Bigg{(}\int_{0}^{t}G(s,t-u) \hat{\lambda}_{0}(u)du\Bigg{)}^{k}\] \[= s^{\mathds{1}\eta_{0>t}}\exp\Bigg{\{}\int_{0}^{t}\Big{(}G(s,t-u) -1\Big{)}\hat{\lambda}_{0}(u)du\Bigg{\}}.\]
The third equality exploits the well known result on the law of the times of the jumps of a Poisson process on a given interval, given the number of those jumps (see Exercise 6.5.4 in [10], which treats the case of a constant rate, the general case follows via an obvious time change), and the fourth equality the conditional law of \(Q_{0}(t)\), given \(\hat{\lambda}_{0}\). We thus obtain that
\[G(s,t)=\mathbb{E}\left[s^{\mathds{1}\eta_{0>t}}\exp\Bigg{\{}\int_{0}^{t}\Big{(} G(s,t-u)-1\Big{)}\hat{\lambda}_{0}(u)du\Bigg{\}}\right].\]
Since \((\hat{\lambda}_{0},\eta_{0})\) has the same law as \((\hat{\lambda},\eta)\), we can drop the subindices \(0\) in the last formula, yielding the formula of the statement. \(\blacksquare\)
The term \(s^{\mathds{1}\eta_{0>t}}\) can be written as follows: \(s^{\mathds{1}\eta_{0>t}}=\mathds{1}_{\eta\leq t}+s\mathds{1}_{\eta>t}\). From this, we deduce readily the following Corollary for \(F(t)=G(0,t)\).
**Corollary 3.2**: _The distribution function \(F\) of the extinction time of the branching process with one unique ancestor born at time \(0\) satisfies the following integral equation:_
\[F(t)=\mathbb{E}\left[\mathds{1}_{\eta\leq t}\exp\Bigg{\{}\int_{0}^{t}\Big{(}F (t-u)-1\Big{)}\hat{\lambda}(u)du\Bigg{\}}\right]. \tag{3.2}\]
The fact that (3.2) characterizes \(F\) follows from the following crucial result.
**Proposition 3.3**: _Equation (3.2) has a unique \([0,1]\)-valued solution._
**Proof.** The distribution function of the extinction time solves this equation. Let us show that this equation has at most one \([0,1]\)-valued solution. To this end, suppose that the equation has two solutions \(F^{1}\) and \(F^{2}\) which are upper bounded by \(1\). We have
\[F^{1}(t)-F^{2}(t)=\mathbb{E}\Bigg{[}\mathds{1}_{\eta\leq t}\Bigg{(}\exp\Bigg{\{} \int_{0}^{t}\Big{(}F^{1}(t-u)-1\Big{)}\hat{\lambda}(u)du\Bigg{\}}-\exp\Bigg{\{} \int_{0}^{t}\Big{(}F^{2}(t-u)-1\Big{)}\hat{\lambda}(u)du\Bigg{\}}\Bigg{)}\Bigg{]}.\]
From the fact that \(|e^{-x}-e^{-y}|\leq|x-y|\), \(\forall x,y>0\), we deduce that
\[\Big{|}F^{1}(t)-F^{2}(t)\Big{|} \leq\mathbb{E}\left[\int_{0}^{t}\hat{\lambda}(u)\Big{|}F^{1}(t-u )-F^{2}(t-u)\Big{|}du\right]\] \[\leq\hat{\lambda}^{*}\int_{0}^{t}\Big{|}F^{1}(u)-F^{2}(u)\Big{|}du,\]
where we have used assumption **(H)** and the notation \(\hat{\lambda}^{*}=\bar{S}(t_{0})\lambda^{*}\). The desired result follows by combining this with Gronwall's lemma. \(\blacksquare\)
### Epidemic starting at time \(\chi<0\).
Now let us consider the case where the ancestor has been infected at a random time \(\chi<0\). Then the total progeny at time \(t\) of this ancestor can be written as follows:
\[Z_{0}(t)=\mathds{1}_{\eta_{0}>t+\chi}+\sum_{j=1}^{Q_{0}(t)}Z_{j}(t-t^{j}).\]
From an easy adaptation of the argument used in the proof of Proposition 3.1, we deduce the
**Proposition 3.4**: _The distribution function \(F_{\chi}\) of the extinction time of the epidemic starting with a unique ancestor at time \(0\), who was born at time \(\chi(<0)\) satisfies :_
\[F_{\chi}(t)=\mathbb{E}\left[\mathds{1}_{\eta\leq t+\chi}\exp\left\{\int_{0}^{t }\Big{(}F_{\chi}(t-u)-1\Big{)}\hat{\lambda}(u-\chi)du\right\}\right].\]
### Epidemic with multiple infected at the initial time.
In the first two subsections, we have considered an epidemic that starts with a single infected individual. In this subsection, we consider an epidemic that starts with \(M\in\mathbb{N}\) infected individuals at the initial time. The goal is to determine the distribution of the extinction time. To this end, let \((\hat{\lambda}_{i},\eta_{i})_{1\leq i\leq M}\) be a sequence of pairs of random variables where \(\hat{\lambda}_{i}\) (resp. \(\eta_{i}\)) denote the infectivity (resp. the lifetime) of the ancestor \(i\). Let \((u_{i})_{1\leq i\leq M}\) be a sequence of independent and identically distributed (i.i.d) random variables with law \(\mathcal{U}([0,1])\). Note that the sequence \(\Big{(}\hat{\lambda}_{i},\eta_{i},u_{i}\Big{)}_{1\leq i\leq M}\) is i.i.d and for each \(i\), we assume that \((\hat{\lambda}_{i},\eta_{i})\) and \(u_{i}\) are independent. We assume that the individual \(i\) was infected at time \(\chi_{i}=-u_{i}\eta_{i}\), which we believe is the most natural model.
From Proposition 3.4, we know that the distribution of the extinction time of the epidemic starting with the ancestor \(i\), is given by
\[\tilde{F}(t)=\mathbb{E}\left[\mathds{1}_{\tilde{\eta}\leq t}\exp\left\{\int_{0}^ {t}\left(\tilde{F}(t-r)-1\right)\tilde{\lambda}(r)dr\right\}\right], \tag{3.3}\]
with \(\tilde{\eta}_{i}=\eta_{i}(1-u_{i})\) and \(\tilde{\lambda}_{i}(t)=\hat{\lambda}_{i}(t-\chi_{i})\). Since the dynamics of reproduction remains the same for all infected individuals resulting from each ancestor, the branching property yields the main result of this section.
**Theorem 3.5**: _The distribution function of the time we have to wait in order to see the extinction of the epidemic, if at time \(t_{0}\) we have \(M\) infected individuals, is well approximated by the following:_
\[H(t)=\left(\tilde{F}(t)\right)^{M}.\]
## 4 Several examples of random function \(\lambda(t)\)
Our varying infectivity model is in fact a SIR/SEIR model, in the sense that it allows an exposed period just after infection, during which \(\lambda(t)=0\). However, we do not introduce the \(E\) compartment (\(E\) for _Exposed_, the status of an infected individual who is, just after being infected, in a latent period, not yet infectious); the \(I\) compartment includes all infected individuals, whether latent or infectious. In most commonly used models \(\lambda(t)\) is piecewise constant, the jump times being random, following most classically an exponential distribution, so that the stochastic model is Markovian and its law of large numbers limit is a system of ordinary differential equations (in contrast with the integral equation (2.1)).
We now review two classical examples of piecewise constant \(\lambda(t)\), which correspond respectively to the SIR and the SEIR model and finally present the example of varying infectivity \(\lambda(t)\) which we shall use in the next section for our comparison with the more classical SIR ODE model.
### The classical SIR model
The simplest commonly used example of the infectivity \(\lambda(t)\) is \(\lambda(t)=\lambda\mathds{1}_{t\leq\eta}\), where \(\lambda\) is a positive constant and \(\eta\) is the random duration of the infectious period. In that case equation (3.2) takes the form
\[F(t)=\int_{0}^{t}\exp\Bigg{\{}\lambda\int_{0}^{r}\Big{(}F(t-u)-1\Big{)}du \Bigg{\}}\mathbb{P}_{\eta}(dr).\]
In the particular case of a deterministic \(\eta\) (\(i.e.\)\(\mathbb{P}_{\eta}=\delta_{a}\), with \(a\in\mathbb{R}_{+}\)), we have
\[F(t)=\mathds{1}_{t\geq a}\exp\Bigg{\{}\lambda\int_{0}^{a}\Big{(}F(t-u)-1\Big{)} du\Bigg{\}}\quad\text{with}\quad F(0)=0\quad\text{and}\quad F(a)=\exp(-\lambda a).\]
The most commonly used model corresponds to \(\eta\) following an exponential distribution with parameter \(\mu\). In this case, the system of integral equations (2.1) simplifies as follows :
\[\left\{\begin{array}{l}\frac{dS(t)}{dt}=-\lambda\overline{S}(t)I(t),\\ \\ \frac{dI(t)}{dt}=\left(\lambda\overline{S}(t)-\mu\right)I(t),\\ \\ \frac{dR(t)}{dt}=\mu I(t).\end{array}\right.\]
If we linearize the second equation for \(t\geq t_{0}\) by replacing \(\overline{S}(t)\) by \(\overline{S}(t_{0})\), we obtain
\[I(t)=I(t_{0})\exp\left[\left(\lambda\overline{S}(t_{0})-\mu\right)(t-t_{0}) \right].\]
From this, it is easy to see that
\[\rho=\lambda\overline{S}(t_{0})-\mu. \tag{4.1}\]
The fact that the above derivation is correct, although the deterministic model is not valid for \(t\geq t_{0}\), is explained in [5]. Note also that solving equation (4.6) below gives the same result, as the reader can easily verify.
Let us now compute \(R_{eff}\). An infected individual has infectious contacts at rate \(\lambda\overline{S}(t_{0})\). This means that the expected number of infectious contacts equals
\[R_{eff}=\lambda\overline{S}(t_{0})\times\mathbb{E}[\eta]=\frac{\lambda \overline{S}(t_{0})}{\mu}. \tag{4.2}\]
The approximating branching process is the continuous time Markov branching process \((X(t))_{t\geq 0}\) which describes the number of descendants alive at time \(t\) of a unique ancestor born at time zero. Every individual in this population, independently of the others, lives for an exponential time with parameter \(\mu\), and during its lifetime it gives birth at rate \(\lambda\overline{S}(t_{0})\). Its descendants reproduce according to the same procedure. We consider the subcritical case \(\mu>\lambda\overline{S}(t_{0})\). Let \(G(s,t)=\mathbb{E}\left(s^{X(t)}\right)\), \(|s|\leq 1\), be the probability generating function of \(X(t)\). On page 109 of Athreya and Ney [1], or in formula (5) of Iwasa, Nowak, and Michor [8], we find the explicit form:
\[G(s,t)=\frac{\mu(s-1)-e^{-\rho t}(\lambda\overline{S}(t_{0})s-\mu)}{\lambda \overline{S}(t_{0})(s-1)-e^{-\rho t}(\lambda\overline{S}(t_{0})s-\mu)}.\]
where \(\rho\) was defined in (4.1). Let us define \(T_{ext}=\inf\{t>0:X(t)=0\}\). We notice that \(F(t)=G(0,t)=\mathbb{P}(X_{t}=0)=\mathbb{P}(T_{ext}\leq t)\) is the distribution function of the extinction time. From the expression for \(G(s,t)\), we deduce the value of \(F(t)\).
**Proposition 4.1**: _When starting with a single ancestor at time \(0\), the distribution function of the extinction time is given as :_
\[F(t)=\frac{1-e^{\rho t}}{1-R_{eff}\times e^{\rho t}},\]
_where \(R_{eff}\) was defined in (4.2)._
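For readers who wish to evaluate this distribution numerically, the following is a minimal Python/NumPy sketch of the formula of Proposition 4.1; the parameter values and the sanity checks are purely illustrative and not part of the original analysis.

```python
import numpy as np

def extinction_cdf_sir(t, R_eff, rho):
    """Extinction-time CDF of Proposition 4.1 (subcritical case: R_eff < 1, rho < 0)."""
    e = np.exp(rho * np.asarray(t, dtype=float))
    return (1.0 - e) / (1.0 - R_eff * e)

t = np.linspace(0.0, 400.0, 4001)
F = extinction_cdf_sir(t, R_eff=0.66, rho=-0.0683)
print(F[0], F[-1])                    # 0.0 and ~1.0, as expected for a CDF
print(t[np.searchsorted(F, 0.5)])     # approximate median extinction time
```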
### The classical SEIR model
In this model, upon infection an individual is first exposed (compartment \(E\)) during a period \(\xi\), during which the individual is not infectious; then he becomes infectious and stays so for a duration \(\eta\), during which he infects susceptibles at rate \(\lambda\), and then finally recovers. In that case \(\lambda(t)=\lambda\mathbf{1}_{\xi\leq t<\xi+\eta}\) and equation (3.2) takes the form
\[F(t)=\int_{0}^{t}\int_{0}^{t-r}\exp\left\{\lambda\int_{s}^{s+r}\Big{(}F(t-u)-1 \Big{)}du\right\}\mathbb{P}_{(\xi,\eta)}(ds,dr).\]
When \(\xi\) and \(\eta\) are deterministic, that is to say \(\mathbb{P}_{(\xi,\eta)}(ds,dr)=\delta_{a}(ds)\delta_{b}(dr)\), with \((a,b)\in\mathbb{R}_{+}^{2}\), we have
\[F(t)=\mathbb{1}_{t\geq a+b}\exp\left\{\lambda\int_{a}^{a+b}\Big{(}F(t-u)-1 \Big{)}du\right\},\quad\text{with}\quad F(u)=0,\quad\text{for all }u\in[0,a].\]
In case \(\xi\) and \(\eta\) are independent and follow exponential distributions with parameters resp. \(\gamma\) and \(\mu\), the deterministic model obeys the ODE
\[\left\{\begin{array}{l}\frac{d\overline{S}(t)}{dt}=-\lambda\overline{S}(t) \overline{I}(t),\\ \\ \frac{d\overline{E}(t)}{dt}=\lambda\overline{S}(t)\overline{I}(t)-\gamma \overline{E}(t),\\ \\ \frac{d\overline{I}(t)}{dt}=\gamma\overline{E}(t)-\mu\overline{I}(t),\\ \\ \frac{d\overline{R}(t)}{dt}=\mu\overline{I}(t).\end{array}\right.\]
In this model, again \(R_{eff}=\frac{\lambda\overline{S}(t_{0})}{\mu}\). Solving the equation (4.6) below for \(\rho\), we find
\[\rho=\frac{1}{2}\left[\sqrt{(\gamma-\mu)^{2}+4\gamma\overline{S}(t_{0})\lambda }-(\mu+\gamma)\right]. \tag{4.3}\]
### Our varying infectivity model
We again define \(\hat{\lambda}(t)=\overline{S}(t_{0})\lambda(t)\). The infectivity \(\hat{\lambda}(t)\) is first zero (corresponding to the latency period), then increases gradually for some days, and then decreases towards \(0\), which it hits when the individual has recovered (see Figure 1).
In the computations of section 5 below, we use a piecewise linear \(\hat{\lambda}(t)\), which allows the function to depend upon a small number of parameters, see Figure 2.
Here \(\tau\) is the duration of the exposed period, \(\eta\) that of the infectious period. We have arbitrarily fixed the length of the period of increase to 1.5 days, and taken the maximum value \(a\) to be a deterministic quantity at our disposal. In other words, in this case, we have
\[\hat{\lambda}(t)=\begin{cases}0&\text{if }t<\tau\\ \frac{a}{1.5}(t-\tau)&\text{if }\tau\leq t<\tau+1.5\\ a\frac{\tau+\eta-t}{\eta-1.5}&\text{if }\tau+1.5\leq t<\tau+\eta\\ 0&\text{if }\tau+\eta<t\end{cases} \tag{4.4}\]
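The profile (4.4) translates directly into code. The following NumPy sketch is purely illustrative (the parameter values in the check are arbitrary); the final line verifies numerically that the area under the curve equals \(a\eta/2\), a fact used in the computation of \(R_{eff}\) below.

```python
import numpy as np

def lambda_hat(t, tau, eta, a):
    """Piecewise-linear infectivity of (4.4): zero on [0, tau) (exposed period),
    linear increase over 1.5 days up to the peak value a, then linear decrease
    back to zero at time tau + eta."""
    t = np.asarray(t, dtype=float)
    rise = (t >= tau) & (t < tau + 1.5)
    fall = (t >= tau + 1.5) & (t < tau + eta)
    return np.where(rise, a * (t - tau) / 1.5,
           np.where(fall, a * (tau + eta - t) / (eta - 1.5), 0.0))

# Sanity check: the area under the curve is a*eta/2 (see the computation of R_eff below).
t = np.linspace(0.0, 30.0, 300001)
print(np.trapz(lambda_hat(t, tau=2.0, eta=10.0, a=0.16), t))   # ~ 0.8 = 0.16*10/2
```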
Let \(\mathcal{J}\) be the joint law of \(\tau\) and \(\eta\). From Corollary 3.2, we deduce that
\[F(t)=\mathbb{E}\Bigg{[}\mathds{1}_{\zeta\leq t}\exp\Bigg{\{}\frac{a}{1.5}\int_{\tau}^{\tau+1.5}(F(t-u)-1)(u-\tau)du+\frac{a}{\eta-1.5}\int_{\tau+1.5}^{\tau+\eta}(F(t-u)-1)(\tau+\eta-u)du\Bigg{\}}\Bigg{]},\]

with \(\zeta=\tau+\eta\).

Figure 1: Example of trajectory of \(\hat{\lambda}(t)\).

Figure 2: Trajectory of \(\hat{\lambda}(t)\) used for the comparisons below.

Thus, we obtain
\[F(t)= \int_{0}^{t}\int_{0}^{t}\mathds{1}_{s+r\leq t}\exp\left\{\frac{a}{1.5 }\int_{s}^{s+1.5}(F(t-u)-1)(u-s)du\right.\] \[+\frac{a}{r-1.5}\int_{s+1.5}^{s+r}(F(t-u)-1)(s+r-u)du\right\} \mathcal{J}(ds,dr).\]
The effective reproduction number is defined by
\[R_{eff}=\mathbb{E}\left[\int_{0}^{\infty}\hat{\lambda}(t)dt\right] \tag{4.5}\]
and the rate of decrease \(\rho\) of the number of infected individuals is the unique solution of
\[\mathbb{E}\left[\int_{0}^{\infty}e^{-\rho t}\hat{\lambda}(t)dt\right]=1, \tag{4.6}\]
see Theorem 2.3 in [5].
## 5 Comparison between our Varying infectivity model and an ODE SIR model
In this section, we compare the distribution function of the extinction time in our varying infectivity model, with that of an ODE SIR model with the same \(R_{eff}\), which is the effective reproduction number at time \(t_{0}\), and the same rate of decrease \(\rho\) of the number of infected individuals.
In the following, we assume that the random variables \(\tau\) and \(\eta\) defined in (4.4) are independent, \(\tau\sim\mathcal{U}\left(1.5,2.5\right)\) and \(\eta\sim\mathcal{U}\left(7,13\right)\).
### Approximation of the distribution function of the extinction time in the varying infectivity model
Since it is not possible to obtain an explicit solution of (3.2), we will use the approximation made in Section 7. In other words, we will consider the following approximate function (whose convergence is established in Section 7 below):
\[F_{n}\left(\frac{k}{n}\right)=\mathbb{E}\left[\mathds{1}_{\tau+\eta\leq\frac{ k}{n}}\exp\left\{\sum_{\ell=1}^{k}\left(F_{n}\left(\frac{k-\ell}{n}\right)-1 \right)\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\hat{\lambda}(u)du\right\} \right].\]
Let us define \(\xi_{n,\ell}=\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\hat{\lambda}(u)du\). It is easy to see that \(\xi_{n,\ell}\approx\frac{\hat{\lambda}\left(\frac{\ell}{n}\right)}{n}\). Combining this with (4.4), we deduce that
\[\xi_{n,\ell}\approx\frac{a}{1.5}\left(\frac{\ell}{n}-\tau\right)\mathds{1}_{ \tau\leq\frac{\ell}{n}<\tau+1.5}+a\left(\frac{\tau+\eta-\frac{\ell}{n}}{\eta- 1.5}\right)\mathds{1}_{\tau+1.5\leq\frac{\ell}{n}<\tau+\eta}.\]
Now, using the fact that the random variables \(\tau\) and \(\eta\) are independent, \(\tau\sim\mathcal{U}\left(1.5;2.5\right)\) and \(\eta\sim\mathcal{U}\left(7;13\right)\), we deduce that
\[F_{n}\left(\frac{k}{n}\right) \approx\frac{1}{6}\int_{1.5}^{2.5}\int_{7}^{13}\mathds{1}_{x+y \leq\frac{k}{n}}\exp\left\{\sum_{\ell=1}^{k}\left(F_{n}\left(\frac{k-\ell}{n} \right)-1\right)\frac{a}{1.5}\left(\frac{\ell}{n}-x\right)\mathds{1}_{x\leq \frac{\ell}{n}<x+1.5}\right\}\] \[\quad\times\exp\left\{\sum_{\ell=1}^{k}\left(F_{n}\left(\frac{k- \ell}{n}\right)-1\right)a\frac{x+y-\frac{\ell}{n}}{y-1.5}\mathds{1}_{x+1.5\leq \frac{\ell}{n}<x+y}\right\}dxdy\] \[\approx\frac{1}{6}\frac{1}{n^{2}}\sum_{j=7n}^{13n}\sum_{i=1.5n}^ {2.5n}\mathds{1}_{i+j\leq k}\exp\left\{\sum_{\ell=i}^{i+1.5n}\left(F_{n}\left( \frac{k-\ell}{n}\right)-1\right)\left(\ell-i\right)\frac{a}{1.5n^{2}}\right\}\] \[\quad\times\exp\left\{\sum_{\ell=i+1.5n}^{i+j}\left(F_{n}\left( \frac{k-\ell}{n}\right)-1\right)\frac{i+j-\ell}{j-1.5n}\frac{a}{n}\right\}. \tag{5.1}\]
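The recursion above is straightforward to implement. Below is an illustrative Python sketch in which the double integral over \((\tau,\eta)\) in (5.1) is replaced by a Monte Carlo average over samples of \((\tau,\eta)\); the grid size, sample size and the compact helper reproducing (4.4) are our own choices, kept small purely for illustration.

```python
import numpy as np

def lambda_hat(t, tau, eta, a):        # infectivity profile (4.4), as sketched earlier
    return np.where((t >= tau) & (t < tau + 1.5), a * (t - tau) / 1.5,
           np.where((t >= tau + 1.5) & (t < tau + eta),
                    a * (tau + eta - t) / (eta - 1.5), 0.0))

def extinction_cdf_vi(a, n=5, T=120, n_mc=500, seed=0):
    """Approximate F_n(k/n), k = 1..nT, by the recursion (5.1), the expectation
    over (tau, eta) being estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    tau = rng.uniform(1.5, 2.5, n_mc)
    eta = rng.uniform(7.0, 13.0, n_mc)
    K = n * T
    grid = np.arange(1, K + 1) / n
    # xi[m, l-1] ~ integral of lambda_hat over ((l-1)/n, l/n] for sample m
    xi = lambda_hat(grid[None, :], tau[:, None], eta[:, None], a) / n
    F = np.zeros(K + 1)                          # F[0] = F_n(0) = 0
    for k in range(1, K + 1):
        weights = F[k - 1::-1] - 1.0             # (F_n((k-l)/n) - 1), l = 1..k
        F[k] = np.mean((tau + eta <= k / n) * np.exp(xi[:, :k] @ weights))
    return grid, F[1:]

t, F = extinction_cdf_vi(a=0.66 / 5)             # a = R_eff / 5, here R_eff = 0.66
print(F[-1])                                      # should be close to 1 at T = 120
```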
### Computation of \(R_{eff}\)
Recall (4.5). We first compute the random quantity \(\int_{0}^{\infty}\hat{\lambda}(t)dt\). This is the area under the curve \(\hat{\lambda}(t)\), \(i.e.\) the area of the union of two triangles, and \(\int_{0}^{\infty}\hat{\lambda}(t)dt=\frac{a\eta}{2}\).
Therefore, we have
\[R_{eff}=\frac{a}{2}\mathbb{E}[\eta]=\frac{a}{2}\times 10=5a.\]
### Resolution of equation (4.6)
From (4.4), we have
\[\mathbb{E}\left[\int_{0}^{\infty}e^{-\rho t}\hat{\lambda}(t)dt\right]=a\left( A_{\rho}+B_{\rho}\right),\quad\text{with}\]
\[A_{\rho}=\mathbb{E}\left(\int_{\tau}^{\tau+1.5}e^{-\rho t}\frac{t-\tau}{1.5} dt\right)\quad\text{and}\quad B_{\rho}=\mathbb{E}\left(\int_{\tau+1.5}^{\tau+ \eta}e^{-\rho t}\frac{\tau+\eta-t}{\eta-1.5}dt\right).\]
Using the fact that \(\tau\) and \(\eta\) are independent, \(\tau\sim\mathcal{U}\left(1.5;2.5\right)\), \(\eta\sim\mathcal{U}\left(7;13\right)\), it is easy to check that
\[A_{\rho}=\frac{1}{\rho}\left(e^{-1.5\rho}-e^{-2.5\rho}\right)\left[\frac{1}{1. 5\rho^{2}}-e^{-1.5\rho}\left(\frac{1}{\rho}+\frac{1}{1.5\rho^{2}}\right) \right],\quad\text{and}\]
\[B_{\rho}=\frac{1}{\rho}\left(e^{-1.5\rho}-e^{-2.5\rho}\right)\left\{e^{-1.5 \rho}\left(\frac{1}{\rho}-\frac{1}{6\rho^{2}}\log\left(\frac{11.5}{5.5}\right) \right)+\frac{1}{\rho^{2}}\mathbb{E}\left[\frac{e^{-\rho\eta}}{(\eta-1.5)} \right]\right\}.\]
Note that the mapping \(\rho\mapsto\mathbb{E}\int_{0}^{\infty}e^{-\rho t}\hat{\lambda}(t)dt\) is decreasing. Consequently, it is easy to compute an approximate solution of equation (4.6).
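As a concrete illustration of this remark, the following Python sketch solves (4.6) by bisection, estimating the expectation by Monte Carlo and the time integral by the trapezoidal rule. It assumes that the illustrative `lambda_hat` helper defined in the sketch after (4.4) is in scope; all numerical choices here are ours.

```python
import numpy as np

def solve_rho(a, n_mc=4000, t_max=20.0, n_t=1201, seed=1):
    """Bisection for the root rho < 0 of E[ int_0^inf e^{-rho t} lambda_hat(t) dt ] = 1,
    which exists in the subcritical case R_eff = 5a < 1."""
    rng = np.random.default_rng(seed)
    tau = rng.uniform(1.5, 2.5, n_mc)[:, None]
    eta = rng.uniform(7.0, 13.0, n_mc)[:, None]
    t = np.linspace(0.0, t_max, n_t)[None, :]
    lam = lambda_hat(t, tau, eta, a)              # helper sketched after (4.4)

    def g(rho):                                   # decreasing in rho
        return np.trapz(np.exp(-rho * t) * lam, t[0], axis=1).mean()

    lo, hi = -1.0, 0.0                            # g(0) = R_eff < 1 < g(-1)
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(solve_rho(a=0.66 / 5))   # should be close to -0.0683, the value used below for R_eff = 0.66
```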
### Comparison of the distributions and the expectations of the extinction time between our Varying infectivity model and an ODE SIR model
In what follows, we compare the extinction time in our Varying infectivity model and in the ODE SIR model with the same \(R_{eff}\) and \(\rho\). Note that we compare \(F\)'s and not \(H\)'s (see the notations in section 3). We compare the distribution of the extinction time of our Varying infectivity model given in (5.1) and of the extinction time of the ODE SIR model given in Proposition 4.1.
We also compare the expectations of the extinction times of our varying infectivity model and of an ODE SIR model. To this end, recall that the extinction time can be rewritten in the form \(T_{ext}=\inf\{t-t_{0}\ :\ I(t-t_{0})=0\}\). Thus, for the ODE SIR model, we obtain
\[\mathbb{E}[T_{ext}]=\int_{0}^{\infty}\mathbb{P}(T_{ext}>t)dt=\int_{0}^{\infty} \left(1-F(t)\right)dt=\frac{\left(1-R_{eff}\right)}{\rho R_{eff}}\ln(1-R_{eff }),\]
where we have used the formula of Proposition 4.1 for \(F(t)\).
For the varying infectivity model, we obtain
\[\mathbb{E}[T_{ext}]=\int_{0}^{\infty}\mathbb{P}(T_{ext}>t)dt=\int_{0}^{\infty} \left(1-F_{n}(t)\right)dt\approx\frac{1}{n}\sum_{k=1}^{n\Lambda}\left(1-F_{n} \left(\frac{k}{n}\right)\right).\]
where \(\Lambda\) is the point where we stop the calculation of the integral of \(1-F_{n}(t)\).

Figure 3: Comparison of models with the same \(R_{eff}=0.66\) and \(\rho=-0.0683\).

Figure 4: Comparison of models with the same \(R_{eff}=0.8\) and \(\rho=-0.03816\).
| | \(R_{eff}=0.66\), \(\rho=-0.0683\) | \(R_{eff}=0.8\), \(\rho=-0.03816\) |
| --- | --- | --- |
| Varying infectivity model | \(\mathbb{E}[T_{ext}]\approx 18.7854\) | \(\mathbb{E}[T_{ext}]\approx 22.6568\) |
| ODE SIR model | \(\mathbb{E}[T_{ext}]=8.1369\) | \(\mathbb{E}[T_{ext}]=10.544\) |
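The ODE SIR row of this table can be checked directly from the closed-form expectation above; the following short Python check is ours, added purely for illustration.

```python
import numpy as np

def mean_ext_sir(R_eff, rho):
    # E[T_ext] = (1 - R_eff) * ln(1 - R_eff) / (rho * R_eff), from Proposition 4.1
    return (1.0 - R_eff) * np.log(1.0 - R_eff) / (rho * R_eff)

print(mean_ext_sir(0.66, -0.0683), mean_ext_sir(0.8, -0.03816))   # ~ 8.1369 and ~ 10.544
```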
## 6 Conclusion
Our comparison shows that, in the final phase of the epidemic, the varying infectivity SIR model (in fact its branching process approximation) tends to take more time to go extinct than the branching process approximation of the ODE SIR model. This is not too surprising, since the varying infectivity model has a memory, contrary to the ODE SIR model. This fact is easily seen when there is a sudden change in the propagation of the epidemic, like the lockdown that several countries established during the recent Covid epidemic. The authors who use an ODE model change the infection rate gradually, starting with the lockdown, while in reality the change of the infection rate was very sudden. This is a way to compensate for the lack of memory of ODE models. We believe that the fact that the varying infectivity SIR model takes more time than the ODE SIR model to forget its past explains why it takes more time to go extinct. The varying infectivity SIR model is more complex than the more classical ODE SIR model, and this probably explains why most authors who quote the seminal 1927 paper of Kermack and McKendrick [9] refer only to the very particular case of constant coefficients, studied in section 3.2 of that paper. Of course, it is very tempting and sometimes preferable to use simple models, which make it possible to draw more conclusions. However, it is crucial to understand which biases the simple models introduce, compared to more realistic models. In this paper, we have identified one of those biases, namely the shortening of the final phase of the epidemic. In future work, we intend to do similar computations with various classes of varying infectivity models, in order to confirm these first conclusions.
## 7 Appendix: Approximation of the distribution function of the extinction time
In this section, we define a sequence of functions \(\left\{F_{n},n\geq 1\right\}\) which will allow us to approximate the solution of equation (3.2). To this end, for each \(k\in\mathbb{Z}_{+}\), we set
\[F_{n}\left(\frac{k}{n}\right)=\mathbb{E}\Bigg{[}\mathds{1}_{\eta\leq\frac{k}{ n}}\exp\Bigg{\{}\sum_{\ell=1}^{k}\left(F_{n}\left(\frac{k-\ell}{n}\right)-1 \right)\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du\Bigg{\}}\Bigg{]} \tag{7.1}\]
and for each \(t\in[\frac{k}{n},\frac{k+1}{n})\),
\[F_{n}(t)=\mathbb{E}\Bigg{[}\mathds{1}_{\eta\leq t}\exp\Bigg{\{}\sum_{\ell=1}^{ k-1}\left(F_{n}\left(\frac{k-\ell}{n}\right)-1\right)\int_{\frac{\ell-1}{n}}^{ \frac{\ell}{n}}\lambda(u)du-\int_{\frac{k-1}{n}}^{t}\lambda(u)du\Bigg{\}} \Bigg{]}. \tag{7.2}\]
The goal of this section is to prove that as \(n\longrightarrow+\infty\), \(\left\{F_{n}(t),t>0\right\}\longrightarrow\left\{F(t),t>0\right\}\) in \(D([0,+\infty))\), where \(F\) is the unique solution of (3.2).
We first check that
**Lemma 7.1**: _For any \(k\in\mathbb{Z}_{+}\), we have_
\[F_{n}\left(\frac{k}{n}\right)\leq F\left(\frac{k}{n}\right)\leq 1.\]
**Proof.** Let \(k\in\mathbb{Z}_{+}\). We first note that \(F(t)\leq 1\) (since \(F\) is a distribution function). To prove the next assertion, we will proceed by induction on \(k\). It is clear that \(F_{n}(0)=0\). Let us now suppose that \(F_{n}\left(\frac{\ell}{n}\right)\leq F\left(\frac{\ell}{n}\right)\), \(\forall 1\leq\ell\leq k-1\). Now let us show that \(F_{n}\left(\frac{k}{n}\right)\leq F\left(\frac{k}{n}\right)\). We have
\[F\left(\frac{k}{n}\right) =\mathbb{E}\left[\mathds{1}_{\eta\leq\frac{k}{n}}\exp\Bigg{\{} \int_{0}^{\frac{k}{n}}\left(F\left(\frac{k}{n}-u\right)-1\right)\lambda(u)du \Bigg{\}}\right]\] \[\geq\mathbb{E}\left[\mathds{1}_{\eta\leq\frac{k}{n}}\exp\Bigg{\{} \sum_{\ell=1}^{k}\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\left(F\left(\frac{k -\ell}{n}\right)-1\right)\lambda(u)du\Bigg{\}}\right]\] \[\geq\mathbb{E}\left[\mathds{1}_{\eta\leq\frac{k}{n}}\exp\Bigg{\{} \sum_{\ell=1}^{k}\left(F_{n}\left(\frac{k-\ell}{n}\right)-1\right)\int_{\frac {\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du\Bigg{\}}\right]\] \[=F_{n}\left(\frac{k}{n}\right),\]
where we have used the fact that \(F\) is non-decreasing and the induction hypothesis.
The previous result extends to all \(t\).
**Lemma 7.2**: _For any \(t\geq 0\), we have_
\[F_{n}\left(t\right)\leq F\left(t\right)\leq 1.\]
**Proof.** We first note that
\[\int_{0}^{t}\left(1-F(t-u)\right)\lambda(u)du\leq\sum_{\ell=1}^{[nt]}\int_{\frac{ \ell-1}{n}}^{\frac{\ell}{n}}\left(1-F(t-u)\right)\lambda(u)du+\int_{\frac{[nt]} {n}}^{t}\lambda(u)du.\]
From the fact that \(F\) is non-decreasing and \(\frac{[nt]}{n}\leq t\), we deduce that
\[\int_{0}^{t}\left(1-F(t-u)\right)\lambda(u)du \leq\sum_{\ell=1}^{[nt]}\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\left(1-F\left(\frac{[nt]-\ell}{n}\right)\right)\lambda(u)du+\int_{\frac{[nt]}{n}}^{t}\lambda(u)du\] \[\int_{0}^{t}\left(F(t-u)-1\right)\lambda(u)du \geq\sum_{\ell=1}^{[nt]}\left(F_{n}\left(\frac{[nt]-\ell}{n}\right)-1\right)\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du-\int_{\frac{[nt]}{n}}^{t}\lambda(u)du,\]
where we have used Lemma 7.1 for the last inequality. The desired result follows by combining the last inequality with (7.2). \(\blacksquare\)
We have
**Proposition 7.3**: _Let \(T>0\). Then there exists a constant \(C\) such that for all \(n\geq 1\) and \(0<s<t<T\),_
\[-\frac{C}{n}-C(t-s)\leq F_{n}(t)-F_{n}(s)\leq C(t-s)+\phi(t)-\phi(s)+\frac{C}{ n},\]
_where \(\phi(t)=\mathbb{P}(\eta\leq t)\) the distribution function of \(\eta\)._
For the proof of this proposition, we will need several technical lemmas. In order to simplify the notations below, we let
\[a_{n}(k)=\left[F_{n}\left(\frac{k+1}{n}\right)-F_{n}\left(\frac{k}{n}\right) \right]^{-}\quad\mbox{and}\quad b_{n}(k)=\left[F_{n}\left(\frac{k+1}{n} \right)-F_{n}\left(\frac{k}{n}\right)\right]^{+}. \tag{7.3}\]
Let us define, \(\forall n\geq 1\), \(k\in\mathbb{Z}_{+}\),
\[\Lambda_{n}(k)=\sum_{\ell=1}^{k}\left(F_{n}\Big{(}\frac{k-\ell}{n}\Big{)}-1 \right)\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du\leq 0, \tag{7.4}\]
(see Lemma 7.1) and let us rewrite (7.1) in the form
\[F_{n}\Big{(}\frac{k}{n}\Big{)}=\mathbb{E}\left[\mathbb{1}_{\eta\leq\frac{k}{ n}}\exp(\Lambda_{n}(k))\right]. \tag{7.5}\]
We will need the following lemmas.
**Lemma 7.4**: _For any \(n\geq 1\), \(k\in\mathbb{Z}_{+}\), we have_
\[A_{1}(n,k)\leq F_{n}\Big{(}\frac{k+1}{n}\Big{)}-F_{n}\Big{(}\frac{k}{n}\Big{)} \leq A_{2}(n,k),\]
_with_
\[A_{1}(n,k)=\exp\Bigg{\{}-\frac{\lambda^{*}}{n}\left[\sum_{\ell=0}^{k-1}a_{n}( \ell)+1\right]\Bigg{\}}-1 \tag{7.6}\]
_and_
\[A_{2}(n,k)=\exp\Bigg{\{}\frac{\lambda^{*}}{n}\sum_{\ell=0}^{k-1}b_{n}(\ell) \Bigg{\}}-1+\mathbb{P}\Big{(}\frac{k}{n}<\eta\leq\frac{k+1}{n}\Big{)}. \tag{7.7}\]
**Proof.** Recalling (7.4) and (7.5), we first note that
\[\Lambda_{n}(k+1)-\Lambda_{n}(k)=\sum_{\ell=1}^{k}\Big{(}F_{n}\Big{(} \frac{k+1-\ell}{n}\Big{)}-F_{n}\Big{(}\frac{k-\ell}{n}\Big{)}\Big{)}\int_{ \frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du-\int_{\frac{k}{n}}^{\frac{k+1}{ n}}\lambda(u)du.\]
It follows that
\[-\Big{(}\Lambda_{n}(k+1)-\Lambda_{n}(k)\Big{)}^{-}\geq-\sum_{\ell=1}^{k}\Big{(} F_{n}\Big{(}\frac{k+1-\ell}{n}\Big{)}-F_{n}\Big{(}\frac{k-\ell}{n}\Big{)} \Big{)}^{-}\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du-\int_{\frac{k }{n}}^{\frac{k+1}{n}}\lambda(u)du\]
and
\[\Big{(}\Lambda_{n}(k+1)-\Lambda_{n}(k)\Big{)}^{+}\leq\sum_{\ell=1}^{k}\Big{(} F_{n}\Big{(}\frac{k+1-\ell}{n}\Big{)}-F_{n}\Big{(}\frac{k-\ell}{n}\Big{)} \Big{)}^{+}\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du.\]
Thus, we have
\[F_{n}\Big{(}\frac{k+1}{n}\Big{)}-F_{n}\Big{(}\frac{k}{n}\Big{)} =\mathbb{E}\left[\mathds{1}_{\eta\leq\frac{k+1}{n}}\exp(\Lambda_{n }(k+1))-\mathds{1}_{\eta\leq\frac{k}{n}}\exp(\Lambda_{n}(k))\right]\] \[=\mathbb{E}\left[\Big{(}\mathds{1}_{\eta\leq\frac{k+1}{n}}- \mathds{1}_{\eta\leq\frac{k}{n}}\Big{)}\exp(\Lambda_{n}(k+1))+\mathds{1}_{ \eta\leq\frac{k}{n}}\left(\exp(\Lambda_{n}(k+1))-\exp(\Lambda_{n}(k))\right)\right]\] \[\leq\mathbb{E}\left[\Big{(}\mathds{1}_{\eta\leq\frac{k+1}{n}}- \mathds{1}_{\eta\leq\frac{k}{n}}\Big{)}+\mathds{1}_{\eta\leq\frac{k}{n}}\exp( \Lambda_{n}(k))\left(\exp(\Lambda_{n}(k+1)-\Lambda_{n}(k))-1\right)\right]\] \[\leq\mathbb{P}\left(\frac{k}{n}<\eta\leq\frac{k+1}{n}\right)+ \mathbb{E}\left(\exp\big{[}(\Lambda_{n}(k+1)-\Lambda_{n}(k))^{+}\big{]}-1\right)\] \[\leq\mathbb{E}\left[\exp\Bigg{\{}\sum_{\ell=1}^{k}\Big{(}F_{n} \Big{(}\frac{k+1-\ell}{n}\Big{)}-F_{n}\Big{(}\frac{k-\ell}{n}\Big{)}\Big{)}^{ +}\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du\Bigg{\}}\right]-1\] \[\quad+\mathbb{P}\Big{(}\frac{k}{n}<\eta\leq\frac{k+1}{n}\Big{)}\] \[\leq A_{2}(n,k),\]
where we have used (7.3) and (7.7) in the last inequality. We also have
\[F_{n}\Big{(}\frac{k+1}{n}\Big{)}-F_{n}\Big{(}\frac{k}{n}\Big{)} =\mathbb{E}\Big{[}\mathds{1}_{\eta\leq\frac{k+1}{n}}\exp(\Lambda_{ n}(k+1))-\mathds{1}_{\eta\leq\frac{k}{n}}\exp(\Lambda_{n}(k))\Big{]}\] \[\geq\mathbb{E}\Big{[}\mathds{1}_{\eta\leq\frac{k}{n}}\exp(\Lambda _{n}(k))\Big{(}\exp(\Lambda_{n}(k+1)-\Lambda_{n}(k))-1\Big{)}\Big{]}\] \[\geq\mathbb{E}\Big{[}\exp(-(\Lambda_{n}(k+1)-\Lambda_{n}(k))^{-} )-1\Big{]}.\]
Combining the above arguments with (7.3) and (7.6), we deduce that
\[F_{n}\Big{(}\frac{k+1}{n}\Big{)}-F_{n}\Big{(}\frac{k}{n}\Big{)} \geq A_{1}(n,k).\]
Recall (7.3). We have
**Lemma 7.5**: _Let \(T>0\). Then there exists a constant \(C\) such that for all \(n\geq 1\) and \(0\leq\frac{k}{n}<T\),_
\[\sum_{\ell=0}^{k}a_{n}(\ell)\leq C\quad\text{and}\quad\sum_{\ell=0}^{k}b_{n}( \ell)\leq C.\]
**Proof.** Let us show the first assertion. For this, we first prove that
\[a_{n}(k)\leq r\left(1+r\right)^{k-1},\quad\text{with}\quad r=\frac{\lambda^{* }}{n}.\]
According to Lemma 7.4, we have
\[a_{n}(k)\leq-A_{1}(n,k) =1-\exp\left\{-\,r\left[\sum_{\ell=0}^{k-1}a_{n}(\ell)+1\right] \,\right\}\] \[\leq r\left(\sum_{\ell=0}^{k-1}a_{n}(\ell)+1\right).\]
However, it is easy to see that \(a_{n}(0)=0\) and \(a_{n}(1)\leq r\). Let us suppose that \(a_{n}(\ell)\leq r\left(1+r\right)^{\ell-1}\), \(\forall 1\leq\ell\leq k-1\). Then it is easy to see that
\[a_{n}(k) \leq r\left(1+r+r(1+r)+...+r(r+1)^{k-2}\right)\] \[=r\left(1+r\sum_{i=1}^{k-1}(1+r)^{i-1}\right)=r(1+r)^{k-1}.\]
Consequently, since \(\frac{k}{n}\leq T\),
\[\sum_{\ell=0}^{k}a_{n}(\ell)=\sum_{\ell=1}^{k}a_{n}(\ell)\leq\sum_{\ell=1}^{k} r(1+r)^{\ell-1}=(1+r)^{k}-1\leq e^{rk}\leq e^{\lambda^{*}T}\leq C_{T}.\]
We now show the second assertion. We first have that \(b_{n}(0)=F_{n}\left(\frac{1}{n}\right)\). Then we have
\[\sum_{\ell=0}^{k}b_{n}(\ell) =F_{n}\left(\frac{1}{n}\right)+\sum_{\ell=1}^{k}\left(F_{n}\Big{(} \frac{\ell+1}{n}\Big{)}-F_{n}\Big{(}\frac{\ell}{n}\Big{)}\right)+\sum_{\ell=1 }^{k}a_{n}(\ell)\] \[=F_{n}\Big{(}\frac{k+1}{n}\Big{)}+\sum_{\ell=1}^{k}a_{n}(\ell).\] \[\leq 1+\sum_{\ell=1}^{k}a_{n}(\ell),\]
where we have used Lemma 7.1 in the last inequality. The desired result follows by combining this with the first assertion.
We shall need the following
**Lemma 7.6**: _Let \(T>0\). Then there exists a constant \(C\) such that for all \(n\geq 1\) and \(0\leq\frac{\ell}{n}<\frac{k}{n}<T\),_
\[-C\left(\frac{k-\ell}{n}\right)\leq F_{n}\left(\frac{k}{n}\right)-F_{n}\left( \frac{\ell}{n}\right)\leq C\left(\frac{k-\ell}{n}\right)+\phi\left(\frac{k}{n} \right)-\phi\left(\frac{\ell}{n}\right),\]
_where \(\phi(t)=\mathbb{P}(\eta\leq t)\) the distribution function of the random variable \(\eta\)._
**Proof.** Recall (7.6) and (7.7). We have
\[-A_{1}(n,k) =1-\exp\Bigg{\{}-\frac{\lambda^{*}}{n}\left[\sum_{\ell=0}^{k-1}a_ {n}(\ell)+1\right]\Bigg{\}}\] \[\leq\frac{\lambda^{*}}{n}\left[\sum_{\ell=0}^{k-1}a_{n}(\ell)+1\right]\] \[\leq\frac{C}{n},\]
where we have used Lemma 7.5. However, we have
\[A_{2}(n,k) =\exp\Bigg{\{}\frac{\lambda^{*}}{n}\sum_{\ell=0}^{k-1}b_{n}(\ell )\Bigg{\}}-1+\mathbb{P}\Big{(}\frac{k}{n}<\eta\leq\frac{k+1}{n}\Big{)}\] \[\leq C\frac{\lambda^{*}}{n}\exp\bigg{\{}C\frac{\lambda^{*}}{n} \bigg{\}}+\mathbb{P}\Big{(}\frac{k}{n}<\eta\leq\frac{k+1}{n}\Big{)}\] \[\leq\frac{C}{n}+\mathbb{P}\Big{(}\frac{k}{n}<\eta\leq\frac{k+1}{ n}\Big{)},\]
where we have used the fact that \(e^{x}-1\leq xe^{x}\), \(\forall x\geq 0\) and Lemma 7.5. Now combining the above arguments with Lemma 7.4, we deduce that
\[-\frac{C}{n}\leq F_{n}\Big{(}\frac{k+1}{n}\Big{)}-F_{n}\Big{(}\frac{k}{n} \Big{)}\leq\frac{C}{n}+\mathbb{P}\Big{(}\frac{k}{n}<\eta\leq\frac{k+1}{n} \Big{)}.\]
However, we note that
\[F_{n}\left(\frac{k}{n}\right)-F_{n}\left(\frac{\ell}{n}\right)=\sum_{j=\ell}^ {k-1}\left(F_{n}\Big{(}\frac{j+1}{n}\Big{)}-F_{n}\Big{(}\frac{j}{n}\Big{)} \right).\]
The desired result follows by combining this with the previous inequalities. \(\blacksquare\)
Let us define, \(\forall n\geq 1,t>0\), with \(k=\lceil nt\rceil\),
\[\Lambda_{n}(t)=\sum_{\ell=1}^{k-1}\left(F_{n}\left(\frac{k-\ell}{n}\right)-1 \right)\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du-\int_{\frac{k-1}{n }}^{t}\lambda(u)du, \tag{7.8}\]
(see Lemma 7.1) and let us rewrite (7.2) in the form
\[F_{n}(t)=\mathbb{E}\Big{[}\mathds{1}_{\eta\leq t}\exp\left(\Lambda_{n}(t) \right)\Big{]}. \tag{7.9}\]
We shall need the following
**Lemma 7.7**: _Let \(T>0\). Then there exists a constant \(C\) such that for all \(n\geq 1\) and \(0<\frac{\ell-1}{n}<s<\frac{\ell}{n}<\frac{k}{n}<t<\frac{k+1}{n}<T\),_
\[(\Lambda_{n}(t)-\Lambda_{n}(k))^{+}=0,\quad(\Lambda_{n}(t)-\Lambda_{n}(k))^{-} \leq C\left(t-\frac{k}{n}\right),\]
\[(\Lambda_{n}(\ell)-\Lambda_{n}(s))^{+}\leq\frac{C}{n}\quad\text{and}\quad( \Lambda_{n}(\ell)-\Lambda_{n}(s))^{-}\leq\frac{C}{n}+C\left(\frac{\ell}{n}-s \right),\]
_where \(\Lambda_{n}(.)\) was defined in (7.4)._
**Proof.** From (7.4) and (7.8), we have
\[-\lambda^{*}\left(t-\frac{k}{n}\right)\leq\Lambda_{n}(t)-\Lambda_{n}(k)=-\int_ {\frac{k}{n}}^{t}\lambda(u)du\leq 0.\]
Thus, we obtain the first two assertions. In the same way, from (7.4) and (7.8) we have
\[\Lambda_{n}(\ell)-\Lambda_{n}(s)= \sum_{j=1}^{\ell-2}\left(F_{n}\left(\frac{\ell-j}{n}\right)-F_{n }\left(\frac{\ell-1-j}{n}\right)\right)\int_{\frac{j-1}{n}}^{\frac{j}{n}} \lambda(u)du-\int_{\frac{\ell-1}{n}}^{\frac{\ell}{n}}\lambda(u)du\] \[+\left(F_{n}\left(\frac{1}{n}\right)-1\right)\int_{\frac{\ell-2}{ n}}^{\frac{\ell-1}{n}}\lambda(u)du+\int_{\frac{\ell-2}{n}}^{s}\lambda(u)du\] \[= \sum_{j=1}^{\ell-2}\left(F_{n}\left(\frac{\ell-j}{n}\right)-F_{n }\left(\frac{\ell-1-j}{n}\right)\right)\int_{\frac{j-1}{n}}^{\frac{j}{n}} \lambda(u)du+F_{n}\left(\frac{1}{n}\right)\int_{\frac{\ell-2}{n}}^{\frac{\ell-1 }{n}}\lambda(u)du\] \[-\int_{s}^{\frac{\ell}{n}}\lambda(u)du.\]
Combining this with Lemmas 7.1, 7.5 and (7.3), we deduce that
\[\left(\Lambda_{n}(\ell)-\Lambda_{n}(s)\right)^{+}\leq\frac{C}{n}\quad\text{and}\quad\left(\Lambda_{n}(\ell)-\Lambda_{n}(s)\right)^{-}\leq\frac{C}{n}+C\left(\frac{\ell}{n}-s\right).\]
**Lemma 7.8**: _Let \(T>0\). Then there exists a constant \(C\) such that for all \(n\geq 1\) and \(0<\frac{\ell-1}{n}<s<\frac{\ell}{n}<\frac{k}{n}<t<\frac{k+1}{n}<T\),_
\[-\frac{C}{n}-C\left(\frac{\ell}{n}-s\right)\leq F_{n}\left(\frac{\ell}{n} \right)-F_{n}(s)\leq\frac{C}{n}+\phi\left(\frac{\ell}{n}\right)-\phi\left(s\right)\]
_and_
\[-C\left(t-\frac{k}{n}\right)\leq F_{n}\left(t\right)-F_{n}\left(\frac{k}{n} \right)\leq C\left(t-\frac{k}{n}\right)+\phi\left(t\right)-\phi\left(\frac{k} {n}\right),\]
_where \(\phi(t)=\mathbb{P}(\eta\leq t)\) the distribution function of \(\eta\)._
**Proof.** Recall (7.5) and (7.9). From an easy adaptation of the argument of the proof of Lemma 7.4 and from Lemma 7.7, we have that
\[\mathbb{E}\left(e^{-[\Lambda_{n}(\ell)-\Lambda_{n}(s)]^{-}}-1\right)\leq F_{n}\left(\frac{\ell}{n}\right)-F_{n}(s)\leq\mathbb{P}\left(s<\eta\leq\frac{\ell}{n}\right)+\mathbb{E}\left(e^{[\Lambda_{n}(\ell)-\Lambda_{n}(s)]^{+}}-1\right)\] \[-\mathbb{E}\left([\Lambda_{n}(\ell)-\Lambda_{n}(s)]^{-}\right)\leq F_{n}\left(\frac{\ell}{n}\right)-F_{n}(s)\leq\phi\left(\frac{\ell}{n}\right)-\phi\left(s\right)+C\,\mathbb{E}\left([\Lambda_{n}(\ell)-\Lambda_{n}(s)]^{+}\right)\] \[-\frac{C}{n}-C\left(\frac{\ell}{n}-s\right)\leq F_{n}\left(\frac{\ell}{n}\right)-F_{n}(s)\leq\frac{C}{n}+\phi\left(\frac{\ell}{n}\right)-\phi\left(s\right).\]
In the same way, we get the other assertion. \(\blacksquare\)
We can now turn to the
**Proof of Proposition 7.3** : By combining Lemmas 7.6, 7.8 and the fact that
\[F_{n}(t)-F_{n}(s)=F_{n}(t)-F_{n}\left(\frac{k}{n}\right)+F_{n}\left(\frac{k}{n }\right)-F_{n}\Big{(}\frac{\ell}{n}\Big{)}+F_{n}\Big{(}\frac{\ell}{n}\Big{)} -F_{n}(s),\]
we deduce that
\[-\frac{C}{n}-C(t-s)\leq F_{n}(t)-F_{n}(s)\leq C\left(t-\frac{\ell}{n}\right)+ \phi(t)-\phi(s)+\frac{C}{n}.\]
It follows that
\[-\frac{C}{n}-C(t-s)\leq F_{n}(t)-F_{n}(s)\leq C(t-s)+\phi(t)-\phi(s)+\frac{C}{ n}.\]
The desired result follows. \(\blacksquare\)
Recall that the goal of this section is to prove the convergence of the sequence \((F_{n})_{n\geq 1}\) towards \(F\), the unique solution of equation (3.2). For \(T>0\), we define \(w^{\prime}_{T}(x,\cdot)\), the modulus of continuity of \(x\in D([0,+\infty))\) on the interval \([0,T]\), by
\[w^{\prime}_{T}(x,\delta)=\inf\max_{0\leq i<m}\sup_{t_{i}\leq s<t\leq t_{i+1}}| x(t)-x(s)|,\]
where the infimum is taken over the set of all increasing sequences \(0=t_{0}<t_{1}<\cdots<t_{m}=T\) with the property that \(\inf_{0\leq i<m}|t_{i+1}-t_{i}|\geq\delta\). Let \(\{x_{n},n\geq 1\}\) be a sequence of functions in \(D([0,+\infty))\). The following result is a version of Theorem 12.3 from [2]:
**Proposition 7.9**: _Let \(T>0\). A necessary and sufficient condition for the sequence \(\{x_{n},n\geq 1\}\) to be relatively compact in \(D([0,+\infty))\) is that_
\[(i)\sup_{n\geq 1}\sup_{0\leq t\leq T}|x_{n}(t)|<+\infty\]
_and_
\[(ii)\ \lim_{\delta\to 0}\limsup_{n\to+\infty}w^{\prime}_{T}(x_{n},\delta)=0.\]
We now show that the sequence \((F_{n})_{n\geq 1}\) satisfies the assertions of the above Proposition.
**Proposition 7.10**: _The sequence \((F_{n})_{n\geq 1}\) is relatively compact in \(D([0,+\infty))\)._
**Proof.** Condition \((i)\) follows from Lemma 7.2. Hence, it suffices to verify \((ii)\). To this end, let us define \(\psi(t)=\phi(t)+Ct\), \(\forall t>0\), where again \(\phi\) is the distribution function of \(\eta\). It follows from Proposition 7.3 that
\[|F_{n}(t)-F_{n}(s)|\leq\psi(t)-\psi(s)+\frac{C}{n},\quad\forall t>s>0. \tag{7.10}\]
It is easy to deduce from the definition of \(w_{T}^{\prime}(.,.)\) and (7.10) that
\[w_{T}^{\prime}(F_{n},\delta)\leq w_{T}^{\prime}(\psi,\delta)+\frac{C}{n}.\]
Note that, since \(\psi\in D([0,+\infty))\), \(w_{T}^{\prime}(\psi,\delta)\to 0\) as \(\delta\to 0\) (see Sect. 12, p. 123 in [2]). Thus, the desired result follows. \(\blacksquare\)
We are now ready to state the main result of this section.
**Proposition 7.11**: _As \(n\longrightarrow+\infty\), \(\{F_{n}(t),t>0\}\longrightarrow\{F(t),t>0\}\) in \(D([0,+\infty))\), where \(F\) is the unique solution of (3.2)._
**Proof.** From Proposition 7.10, we deduce that at least along a subsequence (but we use the same notation for the subsequence as for the sequence), \(F_{n}\) converges towards a limit denoted by \(J\), where \(J\) is right continuous and has left limits. In order to show that \(F=J\), it suffices to prove that \(J\) is a solution of equation (3.2) and then use Proposition 3.3. Indeed, let us rewrite (7.2) in the form
\[F_{n}(t)=\mathbb{E}\Bigg{[}\mathds{1}_{\eta\leq t}\exp\Bigg{\{}\int_{0}^{\frac{\lfloor nt\rfloor-1}{n}}\left(F_{n}\left(\frac{\lfloor nt\rfloor-\lceil nu\rceil}{n}\right)-1\right)\lambda(u)du-\int_{\frac{\lfloor nt\rfloor-1}{n}}^{t}\lambda(u)du\Bigg{\}}\Bigg{]}.\]
Thus, it only remains to show that
\[F_{n}(t)\longrightarrow J(t)=\mathbb{E}\left[\mathds{1}_{\eta\leq t}\exp \Bigg{\{}\int_{0}^{t}\left(J(t-u)-1\right)\lambda(u)du\Bigg{\}}\right],\text { as }n\rightarrow+\infty\]
to obtain the desired result. To this end, we note that
\[\int_{0}^{t}\Bigg{|}F_{n}\left(\frac{\lfloor nt\rfloor-\lceil nu \rceil}{n}\right)-J(t-u)\Bigg{|}\lambda(u)du\leq\] \[\int_{0}^{t}\Bigg{|}F_{n}\left(\frac{\lfloor nt\rfloor-\lceil nu \rceil}{n}\right)-F_{n}(t-u)\Bigg{|}\lambda(u)du+\int_{0}^{t}|F_{n}(t-u)-J(t -u)|\lambda(u)du\] \[\leq\lambda^{*}\int_{0}^{t}\psi(t-u)-\psi\left(t-u-\frac{2}{n} \right)du+\frac{C}{n}\lambda^{*}t+\lambda^{*}\int_{0}^{t}|F_{n}(t-u)-J(t-u)|du,\]
where we have used (7.10) in the last inequality. Since \(\psi\) is left continuous and locally bounded, the first term tends to \(0\) as \(n\rightarrow\infty\), thanks to Lebesgue's dominated convergence theorem. The second term tends clearly to \(0\). From the convergence in \(D\) for the Skorohod topology, \(F_{n}(t-u)\to J(t-u)\)\(du\) a.e.. Moreover Lemma 7.2 allows us to use Lebesgue's dominated convergence theorem again, and the result follows. \(\blacksquare\)
**Acknowledgement** The authors thank Aurelien Velleret, whose computations confirmed the results shown in section 5.4, and the Centre National de la Recherche Scientifique, which has supported the visits of Anicet Mougabe and Ibrahima Drame in Marseille within the program DSCA, jointly with the Institut de Mathematiques de Marseille.
|
2306.01500 | A Feature Reuse Framework with Texture-adaptive Aggregation for
Reference-based Super-Resolution | Reference-based super-resolution (RefSR) has gained considerable success in
the field of super-resolution with the addition of high-resolution reference
images to reconstruct low-resolution (LR) inputs with more high-frequency
details, thereby overcoming some limitations of single image super-resolution
(SISR). Previous research in the field of RefSR has mostly focused on two
crucial aspects. The first is accurate correspondence matching between the LR
and the reference (Ref) image. The second is the effective transfer and
aggregation of similar texture information from the Ref images. Nonetheless, an
important detail of perceptual loss and adversarial loss has been
underestimated, which has a certain adverse effect on texture transfer and
reconstruction. In this study, we propose a feature reuse framework that guides
the step-by-step texture reconstruction process through different stages,
reducing the negative impacts of perceptual and adversarial loss. The feature
reuse framework can be used for any RefSR model, and several RefSR approaches
have improved their performance after being retrained using our framework.
Additionally, we introduce a single image feature embedding module and a
texture-adaptive aggregation module. The single image feature embedding module
assists in reconstructing the features of the LR inputs itself and effectively
lowers the possibility of including irrelevant textures. The texture-adaptive
aggregation module dynamically perceives and aggregates texture information
between the LR inputs and the Ref images using dynamic filters. This enhances
the utilization of the reference texture while reducing reference misuse. The
source code is available at https://github.com/Yi-Yang355/FRFSR. | Xiaoyong Mei, Yi Yang, Ming Li, Changqin Huang, Kai Zhang, Pietro LiΓ³ | 2023-06-02T12:49:22Z | http://arxiv.org/abs/2306.01500v1 | # A Feature Reuse Framework with Texture-adaptive Aggregation for Reference-based Super-Resolution
###### Abstract
Reference-based super-resolution (RefSR) has gained considerable success in the field of super-resolution with the addition of high-resolution reference images to reconstruct low-resolution (LR) inputs with more high-frequency details, thereby overcoming some limitations of single image super-resolution (SISR). Previous research in the field of RefSR has mostly focused on two crucial aspects. The first is accurate correspondence matching between the LR and the reference (Ref) image. The second is the effective transfer and aggregation of similar texture information from the Ref images. Nonetheless, an important detail of perceptual loss and adversarial loss has been underestimated, which has a certain adverse effect on texture transfer and reconstruction. In this study, we propose a _feature reuse_ framework that guides the step-by-step texture reconstruction process through different stages, reducing the negative impacts of perceptual and adversarial loss. The _feature reuse_ framework can be used for any RefSR model, and several RefSR approaches have improved their performance after being retrained using our framework. Additionally, we introduce a single image feature embedding module and a texture-adaptive aggregation module. The single image feature embedding module assists in reconstructing the features of the LR inputs itself and effectively lowers the possibility of including irrelevant textures. The texture-adaptive aggregation module dynamically perceives and aggregates texture information between the LR inputs and the Ref images using dynamic filters. This enhances the utilization of the reference texture while reducing reference misuse. The source code is available at [https://github.com/Yi-Yang355/FRFSR](https://github.com/Yi-Yang355/FRFSR).
Reference-based image super-resolution, texture adaptive, feature reuse, feature embedding.
## I Introduction
Single Image Super-Resolution (SISR) involves generating a high-resolution image with high-frequency information from a low-resolution (LR) input. The practical significance of SISR in various contexts such as medical imaging and surveillance is notable. Based on the optimization criteria, the approaches of SISR can be divided into two categories. One approach optimizes pixel-level errors such as mean squared error (MSE) and mean absolute error (MAE), potentially resulting in images that are too smooth, and the other approach involves visual perception-based errors such as perceptual loss and adversarial loss. The latter results in images with better visual effects and greater alignment to human visual perception but may produce artifacts and unrealistic textures. These approaches face the inherent problem of SISR - the ill-posed nature of the problem - because different high-resolution images can be degraded to the same low-resolution image [1, 2]. Reference-based super-resolution (RefSR) alleviates the inherent problem of SISR to a certain extent by using an additional high-resolution reference (Ref) image to transfer relevant textures and achieve super-resolution. Methods of obtaining relevant Ref images are varied and include web search and video frames. RefSR has two primary limitations that compromise its performance. The first one is accurately finding the correspondence between the LR and Ref. Some existing methods address this through spatial alignment, such as CrossNet [3], which utilizes optical flow estimation to align LR and Ref, and SSEN [4], which employs deformable convolutions to learn adaptive LR and Ref alignment. Other methods, such as SRNTT [5], TTSR [6] adopt dense patch matching algorithms for patch matching to find corresponding matches, whereas MASA [7] employs a coarse-to-fine matching approach for reducing computational requirements. However, obtaining accurate matching is challenging due to differences in resolution and texture distribution. \(C^{2}\)-Matching [8] uses knowledge distillation and contrastive learning to train a feature extractor, and a combination of patch matching and deformable convolution to improve the accuracy of correspondence matching. However, deformable convolution [9, 10] still encounters difficulties in aligning features at long distances. The second challenge is effectively transferring texture features. TTSR proposes a cross-scale feature integration module that conveys texture information using multiple texture transformers in a stacked manner, whereas MASA uses a spatial adaptive module to remap the aligned Ref feature distribution, ensuring robustness to different color and brightness distributions. Additionally, DATSR [11] replaces the traditional ResBlock with the Swin-Transformer [12], resulting in considerable improvements in model performance.
Although deformable convolution is capable of learning an implicit alignment between the LR and Ref feature maps, it still faces challenges in aligning distant features. Furthermore, existing RefSR methods prioritize aggregating reference textures over reconstructing the textures of the LR input itself. It is also important to note that during the feature aggregation process, the ResBlock treats all pixel features equally, resulting in the introduction of irrelevant textures from the Ref image. Even though DATSR replaces the ResBlock with the Swin-Transformer, the window self-attention computation noticeably increases the number of parameters and the runtime.
To address these three issues, we first make no modifications to the deformable convolution itself but instead shuffle the reference image, thereby indirectly increasing the distance between similar features; this raises the training difficulty and improves performance. Secondly, inspired by TADE [13], we use single-image feature embedding to help the LR inputs reconstruct their own features while mitigating the introduction of irrelevant textures. Finally, we introduce a new feature aggregation module, namely the Dynamic ResBlock (DRB). Specifically, the DRB module adds a group of decoupled filters to the residual block, which can perceive texture information in both the spatial and channel domains and then adaptively aggregate relevant textures, further reducing the introduction of irrelevant information such as noise and wrong textures; an efficient enhanced spatial attention (ESA) mechanism is then used to enhance the relevant texture information.
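As a purely illustrative sketch (not the released implementation: the module layout, the use of sigmoid gates in place of full dynamic filters, and the simplified stand-in for ESA below are our own assumptions), one way to organize such a texture-adaptive residual block in PyTorch is:

```python
import torch
import torch.nn as nn

class DynamicResBlockSketch(nn.Module):
    """Illustrative only: a residual block whose output is modulated by
    input-dependent spatial and channel gates (a lightweight stand-in for
    decoupled dynamic filters), then re-weighted by a simplified spatial
    attention map standing in for ESA."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        # decoupled, input-dependent gates: one per spatial location, one per channel
        self.spatial_gate = nn.Conv2d(channels, 1, 3, padding=1)
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1))
        # simplified spatial attention (stand-in for the ESA module)
        self.esa = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        res = self.body(x)
        gate = torch.sigmoid(self.spatial_gate(x)) * torch.sigmoid(self.channel_gate(x))
        res = res * gate           # texture-adaptive modulation (spatial x channel)
        res = res * self.esa(res)  # emphasise relevant spatial regions
        return x + res

feat = torch.randn(1, 64, 40, 40)
print(DynamicResBlockSketch(64)(feat).shape)   # torch.Size([1, 64, 40, 40])
```

In the full model, the gates would be replaced by spatially varying dynamic filters and the attention branch by the ESA design referred to above; the sketch only conveys the adaptive-aggregation idea.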
In addition to the aforementioned points, most previous works overlook a crucial fact: adding perceptual loss and adversarial loss adversely affects texture transfer and reconstruction. To fully utilize the texture transfer and reconstruction abilities of the model trained with the reconstruction loss, we propose a feature reuse framework. In the process of \(\mathcal{L}^{rec}\)+\(\mathcal{L}^{per}\)+\(\mathcal{L}^{adv}\) training and testing, we feed back the features trained only with \(\mathcal{L}^{rec}\) to the feature aggregation module. This maneuver effectively diminishes the impact of perceptual and adversarial losses on texture transfer and reconstruction. In summary, this paper's primary contributions are:
* We introduce a feature reuse framework that significantly reduces texture reconstruction degradation resulting from perceptual loss and adversarial loss. We apply this framework to various RefSR methods, which have shown consistent improvements in performance.
* To enhance the reconstruction of LR's self-texture and maintain texture relevance, we utilize a single-image feature reconstruction module. Unlike the approach used by [13], we exclude feature upsampling and final image reconstruction processes in this module, and focus solely on embedding the LR's own reconstructed features into the aggregation process.
* We designed a dynamic residual block and introduced it into the texture adaptive module. This block applies dynamic filters and enhanced spatial attention to selectively perceive and transfer textures from the Ref image. This approach adaptively reduces the likelihood of introducing incorrect textures.
* Our method achieved state-of-the-art (SOTA) performance in multiple benchmarks, demonstrating significant improvements in robustness to unrelated reference images and long-range feature alignment. Notably, even without the single-image feature reconstruction module, our method still achieved SOTA performance in CUFED5.
## II Related Works
### _Single Image Super-Resolution_
Single image super-resolution (SISR) aims to take a single LR image as input and reconstruct an image with high-frequency details. Before the emergence of deep learning, traditional methods such as various interpolation schemes were usually used. After SRCNN [14] first applied deep learning to super-resolution, deep learning-based super-resolution methods appeared in large numbers. Later, ResNet [15] appeared, which deepened the network layers. EDSR [16], CARN [17] and other methods added residual structures to super-resolution models, thus improving the performance of super-resolution. After this, the attention mechanism emerged, which allows the network to selectively focus on some features and appropriately ignore unnecessary ones. RCAN [18] was the first to apply the attention mechanism to super-resolution. Additionally, the game-theoretic approach used by GAN [19] has enabled GAN-based super-resolution models, such as SRGAN [20], ESRGAN [21], RankSRGAN [22], AMPRN [23], and Real-ESRGAN [24], to deliver enhanced perceptual quality in the produced images. Recently, SRGAT [25] used a graph attention network to help LR recover additional textures from neighboring patches. TDPN [26] utilizes a texture- and detail-preserving network that preserves texture and detail while the features are reconstructed. However, the SISR problem is ill-posed, with low-resolution (LR) and super-resolution (SR) images having a one-to-many relationship.
### _Reference-based Image Super-Resolution_
The biggest difference between RefSR and SISR is that the former has an additional high-resolution Ref image. RefSR can transfer texture details from the Ref image to the LR image to help its reconstruction, and these texture details should be similar to the ground truth (GT). CrossNet [3] warps the reference image and LR to align them through a flow estimation network. SSEN [4] uses deformable convolution [9, 10] to align the LR and Ref images. Both of these methods perform implicit alignment, while some works perform feature matching between LR and Ref to achieve explicit alignment. SRNTT [5] enumerates patches to transfer multi-scale reference features. TTSR [6] introduces the Transformer architecture to transfer reference features more reasonably by combining soft and hard attention. MASA [7] uses a coarse-to-fine matching method to reduce the computational complexity, and a spatial adaptive module is used to make the transferred texture closer to GT. However, due to the resolution gap between the LR and Ref images, the matching performance is affected. \(C^{2}\)-Matching [8] introduces knowledge distillation and contrastive learning methods, which greatly improve the matching robustness between LR and Ref. WTRN [27] utilizes the benefits of the wavelet transformation to categorize features into high-frequency and low-frequency sub-bands, which facilitates transferring texture patterns more effectively. TADE [13] uses a decoupling framework, which divides RefSR into two parts, super-resolution and texture migration, alleviating the two problems of reference-underuse and reference-misuse. However, it does not take into consideration the lack of
detailed textures in the super-resolution image, which results in inaccurate matching between SR and Ref. DATSR [11] uses the Swin-Transformer [12] to replace the traditional ResBlock for feature aggregation. Recently, RRSR [28] implemented a reciprocal learning strategy, thereby strengthening the learning of the model. Reviewing the existing research findings, it can be seen that, first, the existing methods do not fully take into consideration the textural dissimilarities between LR and Ref, so it is still inevitable that irrelevant textures are introduced in the texture transfer process. Second, existing studies have focused on improving the accuracy of matching and the ability of texture transfer, but few studies have focused on the texture detail reconstruction of LR itself. Third, it has not been noticed that adding the perceptual loss and adversarial loss leads to a decline in the texture reconstruction quality. To address the aforementioned issues, we propose a dynamic residual block (DRB) to perceive texture information, adaptively transfer and aggregate relevant textures while suppressing irrelevant textures, and we reconstruct the LR's own features by embedding single-image feature reconstruction. In addition, we propose a feature reuse framework to improve the texture reconstruction quality under perceptual-loss and adversarial-loss supervision.
### _Dynamic Weights_
Unlike the weight sharing in conventional convolutions, dynamic filters [29, 30, 31, 32, 33] have content-aware characteristics and are capable of dynamically adjusting and predicting filter weights based on input features. The dynamic weights approach has been successfully applied in various works, such as super-resolution [34, 35, 36], image deblurring [37], image denoising [38], and style transfer [39], because of its powerful representation and content-awareness capabilities. The work in [28], which introduces a set of reference-aware filters for selecting reference features to identify the most suitable texture, is strongly related to our study. However, the generation of these filters is computationally expensive due to their depth-wise separable and spatially varying nature, leading to high time consumption. Inspired by [32], we propose to decouple the spatial and channel domains and use spatial and channel attention to dynamically filter each pixel, extending this to texture-adaptive aggregation.
## III Proposed Method
### _Feature Reuse Reconstruction Framework_
Feature reuse [40, 41, 42, 15, 43] mitigates the vanishing-gradient issue in deep networks and improves learning and parameter efficiency by feeding the features of earlier layers into subsequent layers. Various computer vision tasks, such as super-resolution [44], image compression [45], and image restoration [46], exploit feature reuse to improve the efficiency and effectiveness of their models. Prior studies have shown that SR images produced using only the reconstruction loss contain much more detailed textures than those generated by models trained with perceptual and adversarial losses. To overcome this problem, we propose to utilize a pre-trained model that generates SR feature maps with fine textures through the reconstruction loss only, and to integrate these features into a second model trained with all losses to supplement texture reconstruction and accelerate its convergence, as shown in Fig.1. Therefore, we extend feature reuse to the training process of the two models. In summary, we first input the LR image \(I_{LR}\) and the Ref image \(I_{Ref}\) into the network to obtain \(F_{SR}^{rec}\), which is then convolved to generate the output image \(I_{SR}^{rec}\). In this process, we only train the RefSR model with the reconstruction loss, consistent with previous RefSR methods.
Fig. 1: The architecture of our FRFSR. We first utilize SIFE to reconstruct the features of \(I_{LR}\), obtaining \(F_{sife}\), which is then embedded into two RefSR models. We eliminate the upsampling and image reconstruction processes in SIFE. Next, RefSR (1) is trained solely with the reconstruction loss (\(-rec\)), and all losses are used to train RefSR (2). During this process, the feature \(F_{SR}^{rec}\) reconstructed by RefSR (1) is fed back into the feature aggregation process to guide RefSR (2) in retaining more texture features.
\[F_{SR}^{rec}=\textit{Net}_{1}(I_{LR},I_{Ref}), \tag{1}\] \[I_{SR}^{rec}=\textit{Conv}(F_{SR}^{rec}). \tag{2}\]
At this stage, we have acquired a RefSR network that exhibits impressive texture transfer and reconstruction abilities. However, to produce high-quality perceptual images, supervision with perceptual and adversarial losses is typically required. To further enhance the second network's texture transfer and reconstruction capabilities, we generate \(F_{SR}^{rec}\) with refined texture details using the first network, and then feed this feature map back into the training of the second network. Note that in this process, the first model is only responsible for inference and does not participate in weight updating. The aforementioned process can be represented as follows:
\[F_{SR}^{all}=\textit{Net}_{2}(I_{LR},I_{Ref},F_{SR}^{rec}), \tag{3}\] \[I_{SR}^{all}=\textit{Conv}(F_{SR}^{all}). \tag{4}\]
By utilizing this framework, we are able to obtain two models with identical texture transfer and reconstruction performance. It is significant to note that this framework has the ability to improve the performance of other RefSR methods. In the ablation study, we apply this framework to MASA [7] and \(C^{2}\)-Matching [8], demonstrating a significant improvement in their performance.
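To make the two-stage procedure of Eqs. (1)-(4) concrete, the following is a minimal PyTorch-style sketch of the feature reuse framework; the ToyRefSR module, its channel counts, and the bicubic-upsampled LR input are illustrative assumptions rather than the actual FRFSR backbone.

```python
import torch
import torch.nn as nn

class ToyRefSR(nn.Module):
    """Stand-in RefSR backbone: maps (upsampled LR, Ref[, reused feature]) to an SR feature and image."""
    def __init__(self, ch=64, reuse=False):
        super().__init__()
        in_ch = 3 + 3 + (ch if reuse else 0)
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.to_img = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, lr_up, ref, reused=None):
        x = torch.cat([lr_up, ref] + ([reused] if reused is not None else []), dim=1)
        feat = self.body(x)                 # F_SR
        return feat, self.to_img(feat)      # feature map and reconstructed image

# Stage 1: train Net_1 with the reconstruction loss only (Eqs. 1-2).
net1, net2 = ToyRefSR(), ToyRefSR(reuse=True)
opt1 = torch.optim.Adam(net1.parameters(), lr=1e-4)
lr_up = torch.rand(2, 3, 160, 160)
ref = torch.rand(2, 3, 160, 160)
hr = torch.rand(2, 3, 160, 160)
feat_rec, sr_rec = net1(lr_up, ref)
(sr_rec - hr).abs().mean().backward()
opt1.step()

# Stage 2: Net_1 is frozen; its feature F_SR^rec is fed back into Net_2 (Eqs. 3-4),
# which is then supervised with the full loss L^rec + L^per + L^adv (Eq. 30).
for p in net1.parameters():
    p.requires_grad_(False)
with torch.no_grad():
    feat_rec, _ = net1(lr_up, ref)
feat_all, sr_all = net2(lr_up, ref, reused=feat_rec)
```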
### _Correlation-based Texture Warp_
For the RefSR task, a large part of the work focuses on accurately finding the matching correspondence between the LR image and the Ref image, which is crucial for subsequent texture transfer. The CTW structure is shown in Fig.2. Firstly, we apply zero-padding to the LR images so that they have the same size as the Ref images. Then, we use a parameter-sharing texture encoder to extract the texture features of the LR and Ref images, generating \(F_{LR}^{rec}\in\mathbb{R}^{C\times H_{LR}\times W_{LR}}\) and \(F_{Ref}^{rec}\in\mathbb{R}^{C\times H_{Ref}\times W_{Ref}}\). We keep the texture encoder consistent with [8] because its training scheme based on knowledge distillation and contrastive learning alleviates the inaccurate matching between LR and the reference image caused by their different resolutions, and enhances the robustness of matching. Then, the texture features \(F_{LR}^{rec}\) and \(F_{Ref}^{rec}\) are respectively unfolded into \(l_{1}\) (\(H_{LR}\times W_{LR}\)) and \(l_{2}\) (\(H_{Ref}\times W_{Ref}\)) patches to obtain \(\{Q_{1},Q_{2},Q_{3},\ldots,Q_{l_{1}}\}\) and \(\{K_{1},K_{2},K_{3},\ldots,K_{l_{2}}\}\). The cosine similarity between \(Q_{i}\) and each patch \(K_{j}\) is calculated as an inner product to form the similarity matrix \(\mathcal{M}\in\mathbb{R}^{l_{1}\times l_{2}}\).
\[\hat{F}_{Ref}^{rec},\hat{F}_{LR}^{rec}=\textit{unfold}\left(F_{Ref}^{rec},F_{LR}^{rec}\right), \tag{5}\] \[\mathcal{M}_{i,j}=\left\langle\frac{\hat{F}_{LR,i}^{rec}}{\left\|\hat{F}_{LR,i}^{rec}\right\|},\ \frac{\hat{F}_{Ref,j}^{rec}}{\left\|\hat{F}_{Ref,j}^{rec}\right\|}\right\rangle, \tag{6}\]
where \(\hat{F}_{LR,i}^{rec}\) and \(\hat{F}_{Ref,j}^{rec}\) denote the \(i\)-th unfolded LR patch and the \(j\)-th unfolded Ref patch, respectively. From \(\mathcal{M}\), the index matrix of the most similar Ref patch for every LR patch, the matching confidence, and the corresponding pre-offsets are obtained; the pre-offsets are split into horizontal and vertical components \(x_{i}\) and \(y_{i}\)
according to the concatenated channels, with \(x_{i},y_{i}\in\mathbb{R}^{H_{1}\times W_{1}}\); \(\mathcal{W}(\cdot,\cdot)\) represents the optical flow warping function used to warp the reference feature.
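A minimal PyTorch sketch of the patch matching of Eqs. (5)-(6) is given below, assuming 3\(\times\)3 patches and pre-extracted texture features that already share the Ref spatial size; the function name and tensor shapes are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F

def match_patches(f_lr, f_ref, k=3):
    """Patch matching of Eqs. (5)-(6): unfold both feature maps into k x k patches and
    compute the cosine similarity matrix M, then keep the best Ref match per LR patch."""
    q = F.unfold(f_lr, kernel_size=k, padding=k // 2)     # (1, C*k*k, l1) LR patches Q_i
    key = F.unfold(f_ref, kernel_size=k, padding=k // 2)  # (1, C*k*k, l2) Ref patches K_j
    q = F.normalize(q, dim=1)
    key = F.normalize(key, dim=1)
    sim = torch.einsum('bci,bcj->bij', q, key)            # M_{i,j}
    conf, idx = sim.max(dim=2)                            # confidence and index matrix
    return idx, conf

idx, conf = match_patches(torch.rand(1, 64, 40, 40), torch.rand(1, 64, 40, 40))
```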
### _Multi-scale Dynamic Texture Aggregation_
Using an effective texture transfer based on a corresponding matching relationship is another important goal of the RefSR task. To more effectively transfer and aggregate the textures in reference images, we propose a multi-scale dynamic texture transfer module based on the U-Net network [41], as shown in the gray background in Fig.1. Using the multi-scale characteristics of the U-Net network, we can progressively aggregate the texture features in multi-scale reference images and learn to generate richer textures. Unlike the direct texture transfer methods used in [5, 6], we use specific deformable convolutions [9, 10] for texture alignment between \(F_{LR}\) and \(F^{i}_{R\!f}\) for RefSR tasks, and finally use a texture-adaptive aggregation module to complete texture transfer and aggregation. In addition to this main task of texture transfer, RefSR often struggles to reconstruct high-frequency details from the LR itself. To address this issue, we use the SISR method to reconstruct LR's own features and embed the reconstructed feature \(F_{\textit{sife}}\) into the feature aggregation process. In this way, not only can we supplement the detailed textures that are difficult to reconstruct in RefSR, but also limit the introduction of irrelevant textures to some extent.
\[F_{\textit{sife}}=\textit{SIFE}(I_{LR}). \tag{14}\]
We chose the same SISR baseline used in [13] to ensure a more equitable comparison. Nevertheless, we removed the last upsampling stage which is present in SISR.
#### Iii-C1 Texture Alignment Module
To more accurately transfer the texture features in the multi-scale reference feature \(F^{i}_{\textit{R\!f}}\), we use specific deformable convolutions designed for RefSR to achieve accurate texture alignment. As shown in the flowchart in Fig.3 (a), to obtain the offset required for deformable convolution, we concatenate \(F_{LR}\), \(F^{i}_{\textit{R\!f}}\) and \(\tilde{F}^{i}_{\textit{R\!f}}\) to obtain the offset \(\Delta P_{k}\). This is because using the optically distorted reference feature to guide deformable convolution training can make the training process more stable.
\[\Delta\mathcal{P}_{k}=\textit{Conv}\Bigg{(}\textit{Conv}\bigg{(}\Big{[}F_{LR} ;F^{i}_{\textit{R\!f}};\tilde{F}^{i}_{\textit{R\!f}}]\bigg{)}\Bigg{)}, \tag{15}\]
where \(\Delta\mathcal{P}_{k}\) represents the offset, and \(\textit{Conv}\left(\cdot\right)\) represents the convolution layer. After this, for each patch \(\mathcal{P}_{LR}\) in LR, we used the previously obtained index matrix P to find the corresponding most similar patch \(\mathcal{P}_{\textit{R\!f}}\) in \(F^{i}_{\textit{R\!f}}\). We use \(\Delta\mathcal{P}\) to represent the spatial difference between \(\mathcal{P}_{LR}\) and \(\mathcal{P}_{\textit{R\!f}}\), that is, \(\Delta\mathcal{P}=\mathcal{P}_{LR}-\mathcal{P}_{\textit{R\!f}}\), which is the pre-offset output by CTW. Finally, the improved deformable convolution is used to aggregate \(\mathcal{P}_{LR}\) and its surrounding textures. The specific process is shown below.
\[F^{p}_{\textit{\!tex}}=\sum_{k=1}^{K}w_{k}\cdot y\left(\mathcal{P}_{LR}+ \Delta\mathcal{P}+\mathcal{P}_{k}+\Delta\mathcal{P}_{k}\right)\cdot\Delta m_ {k}, \tag{16}\]
where \(y\) represents the original reference feature, \(\mathcal{P}_{k}\in\left\{\left(-1,-1\right),\ \left(-1,\ 0\right),\ldots,\left(1,\ 1\right)\right\}\) enumerates the sampling positions of the \(3\times 3\) kernel; \(\Delta\mathcal{P}_{k}\) represents the learnable offset; \(w_{k}\) represents the convolution weight; \(\Delta m_{k}\) represents the modulation scalar; \(F^{p}_{\textit{tex}}\) represents the reference feature after alignment at position \(p\). Through the above texture alignment method based on deformable convolution, the surrounding textures of the most similar patches in each corresponding reference feature can be aggregated, fully utilizing the contextual information in each patch, thus providing a guarantee for subsequent texture transfer.
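The alignment of Eqs. (15)-(16) can be sketched with torchvision's modulated deformable convolution as follows; the TextureAlign module, its channel counts, and the way the pre-offset is broadcast over the kernel sampling positions are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class TextureAlign(nn.Module):
    """Texture alignment of Eqs. (15)-(16): a modulated deformable convolution whose
    sampling offsets combine the learned Delta P_k with the matching pre-offset Delta P."""
    def __init__(self, ch=64, k=3):
        super().__init__()
        self.k = k
        self.offset_conv = nn.Sequential(
            nn.Conv2d(3 * ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(ch, 3 * k * k, 3, padding=1))   # 2*k*k offsets + k*k modulation scalars
        self.weight = nn.Parameter(torch.randn(ch, ch, k, k) * 0.01)

    def forward(self, f_lr, f_ref, f_ref_warp, pre_offset):
        # pre_offset: (B, 2*k*k, H, W), Delta P from CTW broadcast to the k*k sampling points
        out = self.offset_conv(torch.cat([f_lr, f_ref, f_ref_warp], dim=1))   # Eq. 15
        n = 2 * self.k * self.k
        offset, mask = out[:, :n] + pre_offset, torch.sigmoid(out[:, n:])
        return deform_conv2d(f_ref, offset, self.weight,
                             padding=self.k // 2, mask=mask)                  # Eq. 16

b, c, h, w = 1, 64, 40, 40
f_tex = TextureAlign(c)(torch.rand(b, c, h, w), torch.rand(b, c, h, w),
                        torch.rand(b, c, h, w), torch.zeros(b, 18, h, w))
```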
#### Iii-C2 Texture-Adaptive Aggregation Module
To effectively aggregate the features \(F_{LR}\), \(F^{i}_{tex}\), and \(F_{sife}\), and inspired by [32], we propose TAAM to adaptively transfer and aggregate related texture features, as shown in the flowchart in Fig.3 (b). Specifically, we concatenate the aligned texture feature \(F^{i}_{tex}\) with \(F_{LR}\) and \(F_{sife}\), and input them into a convolution layer. Then, we use the dynamic residual block to transfer and aggregate the related textures in the reference feature and obtain the output \(F_{agg}\). It is worth noting that we only embed \(F_{sife}\) in the TAAM module corresponding to the smallest scale, that is, only the feature map of the smallest scale is used; the TAAM modules at the other scales only aggregate the \(F_{LR}\) and \(F_{tex}\) features.
\[F^{\textit{\!rec}}_{\textit{\!agg}}=\textit{DRB}\Bigg{(}\textit{Conv}\bigg{(} \Big{[}F_{\textit{\!tex}};F_{LR};F_{\textit{sife}}\Big{]}\bigg{)}\Bigg{)}+F_{ LR}. \tag{17}\]
To train the second model, we reused the feature map \(F^{\textit{\!rec}}_{\textit{\!agg}}\) created by the first model. As a result, we added this feature map to the feature aggregation process to enhance the texture features. Equation 17 can be expressed in the following form:
\[F^{all}_{\textit{\!agg}}=DRB\Bigg{(}\textit{Conv}\bigg{(}\Big{[}F_{\textit{ \!tex}};F_{LR};F_{\textit{sife}};F^{\textit{\!rec}}_{\textit{\!agg}}\Big{]} \bigg{)}\Bigg{)}+F_{LR}. \tag{18}\]
Fig. 3: The structure of the feature alignment module (FAM) and texture-adaptive aggregation module (TAAM). The gray background in TAAM signifies the Dynamic ResBlk (DRB) that was designed by us.
The DRB module consists of two decoupled dynamic filters and a ResBlock with an ESA (Enhanced Spatial Attention) [48]. The decoupled dynamic filter and the ESA are shown in Fig.5 and Fig.4, respectively. The decoupled dynamic filter is inspired by [32] and is decoupled into channel filters and spatial filters, which can effectively perceive the related texture content between \(F_{LR}\) and \(F_{tex}^{i}\), \(F_{sife}\). The decoupled dynamic filter operation can be written as:
\[\hat{F}_{(k,i)}=\sum_{j\in\Omega(i)}H_{i}^{sf}(p_{i}-p_{j})\,H_{k}^{cf}(p_{i}-p_{j})\,F_{(k,j)}, \tag{19}\]
where \(\Omega(i)\) represents the convolution window around the \(i\)-th pixel, \(p_{i}\) and \(p_{j}\) represent pixel coordinates, \(H_{i}^{sf}(\cdot)\) represents the spatial filter, \(H_{k}^{cf}(\cdot)\) represents the channel filter, and the feature values at the \(k\)-th channel and \(i\)-th pixel before and after dynamic filtering are denoted as \(F_{(k,i)}\) and \(\hat{F}_{(k,i)}\), respectively. Then the routing weights and the final aggregated features can be generated:
\[(\omega_{1},\omega_{2},\ldots,\omega_{n})=\left(\gamma^{sf}\big(\hat{H}_{i}^{sf}\big)+\beta^{sf}\right)\odot\left(\gamma^{cf}\big(\hat{H}_{i}^{cf}\big)+\beta^{cf}\right), \tag{20}\]
\[F_{tex}^{\prime}=(\omega_{1},\omega_{2},\ldots,\omega_{n})*F_{tex}, \tag{21}\]
where \(\hat{H}_{i}^{sf}\) and \(\hat{H}_{i}^{cf}\) represent the values obtained from the spatial and channel filter branches, respectively, after normalization is applied, while \(\omega\) denotes the routing weights. \(\gamma^{sf}\), \(\gamma^{cf}\), \(\beta^{sf}\), and \(\beta^{cf}\) are similar to BN [49] and specify the learnable mean and standard deviation of the two branches. '\(\odot\)' and '\(*\)' denote element-wise multiplication and the convolution operation, respectively.
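A compact sketch of the decoupled dynamic filtering of Eq. (19) follows, assuming 3\(\times\)3 filters and that the spatial and channel filter branches are generated elsewhere; the helper name decoupled_dynamic_filter is hypothetical.

```python
import torch
import torch.nn.functional as F

def decoupled_dynamic_filter(x, spatial_f, channel_f, k=3):
    """Decoupled dynamic filtering as in Eq. (19).
    x:         (B, C, H, W) feature to be filtered
    spatial_f: (B, k*k, H, W) one k x k filter per pixel (spatial branch)
    channel_f: (B, C, k*k)    one k x k filter per channel (channel branch)"""
    b, c, h, w = x.shape
    patches = F.unfold(x, k, padding=k // 2).view(b, c, k * k, h * w)  # F_{(k,j)} over each window
    sf = spatial_f.view(b, 1, k * k, h * w)
    cf = channel_f.view(b, c, k * k, 1)
    return (patches * sf * cf).sum(dim=2).view(b, c, h, w)             # hat{F}_{(k,i)}

out = decoupled_dynamic_filter(torch.rand(1, 64, 40, 40),
                               torch.rand(1, 9, 40, 40),
                               torch.rand(1, 64, 9))
```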
ESA has been proven to be efficient and effective in previous work [48, 50]. This is because it uses \(1\times 1\) convolution and \(3\times 3\) convolution with a stride of 2 to compress the channel size and spatial size respectively, and further reduces the feature size using max pooling. The specific process of ESA is as follows:
\[F_{0}=\textit{Conv}_{1}^{1}\left(F\right), \tag{22}\]
\[F_{1}=\textit{Conv}_{1}^{1}\left(F_{0}\right), \tag{23}\]
\[F_{2}=\mathcal{B}\Big(\textit{Conv}_{1}^{3}\big(\textit{Conv}_{1}^{3}\big(\textit{Pool}\big(\textit{Conv}_{2}^{3}(F_{0})\big)\big)\big)\Big), \tag{24}\]
\[F^{\prime}=\sigma\!\left(\textit{Conv}_{1}^{1}(F_{1}+F_{2})\right)\otimes F, \tag{25}\]
where \(\textit{Conv}_{a}^{b}(\cdot)\) represents a convolution layer with kernel size \(b\) and stride \(a\), \(\textit{Pool}(\cdot)\) represents a max pooling layer, \(\mathcal{B}(\cdot)\) represents bilinear interpolation, \(\sigma(\cdot)\) represents the sigmoid function, '\(\otimes\)' represents the element-wise product, and \(F\) and \(F^{\prime}\) represent the input and output features, respectively.
ResBlock with the ESA module can enhance the related texture features of \(F_{\textit{LR}}\), and aggregate reference features with high relevance while suppressing interference features with low relevance. It is worth noting that this attention module is lightweight and only adds a small number of parameters.
This attention-based texture-adaptive aggregation method not only transfers and fuses effective textures from the reference images while reducing interference from irrelevant textures, but also ensures that the features \(F_{sife}\) reconstructed by the SISR method are well integrated into \(F_{LR}\). Aggregating the \(F_{sife}\) features both compensates for the difficulty of reference-based super-resolution in reconstructing its own textures and, to a large extent, suppresses the generation of irrelevant textures.
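The ESA computation of Eqs. (22)-(25) can be sketched as below; the channel reduction factor and the 7\(\times\)7, stride-3 max pooling window are assumptions, since the text only specifies the kernel sizes and the stride-2 convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ESA(nn.Module):
    """Enhanced Spatial Attention following Eqs. (22)-(25)."""
    def __init__(self, ch=64, red=16):
        super().__init__()
        r = max(ch // red, 1)
        self.conv0 = nn.Conv2d(ch, r, 1)                        # Eq. 22: 1x1 channel compression
        self.conv1 = nn.Conv2d(r, r, 1)                         # Eq. 23
        self.conv_stride = nn.Conv2d(r, r, 3, stride=2)         # Eq. 24: stride-2 3x3 conv
        self.convs = nn.Sequential(nn.Conv2d(r, r, 3, padding=1),
                                   nn.Conv2d(r, r, 3, padding=1))
        self.conv_out = nn.Conv2d(r, ch, 1)                     # Eq. 25: back to ch channels

    def forward(self, x):
        f0 = self.conv0(x)
        f1 = self.conv1(f0)
        f2 = F.max_pool2d(self.conv_stride(f0), kernel_size=7, stride=3)  # Pool(.) in Eq. 24
        f2 = F.interpolate(self.convs(f2), size=x.shape[2:],
                           mode='bilinear', align_corners=False)          # B(.) in Eq. 24
        return x * torch.sigmoid(self.conv_out(f1 + f2))                  # Eq. 25

y = ESA(64)(torch.rand(1, 64, 40, 40))
```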
### _Loss Functions_
**Reconstruction loss.** To ensure the model has an excellent texture transfer ability and image reconstruction ability, we use the following reconstruction loss to train the model.
\[\mathcal{L}^{rec}=\left\|I_{HR}-I_{SR}\right\|_{1}, \tag{26}\]
where \(I_{HR}\) represents the ground truth image, \(I_{\textit{SR}}\) represents the super-resolved image. \(\left\|\cdot\right\|_{1}\) represents \(l_{1}\) norm. Only using reconstruction loss to train the model will cause the image to be too smooth.
**Perceptual loss.** By calculating perceptual loss [51] in the feature domain, the generated image can be more semantically similar to GT. Perceptual loss is shown as follows:
\[\mathcal{L}^{\textit{per}}=\frac{1}{V}\sum_{i=1}^{C}\left\|\phi_{i}\left(I_{ HR}\right)-\phi_{i}\left(I_{\textit{SR}}\right)\right\|_{F}, \tag{27}\]
Fig. 4: The structure of Enhanced Spatial Attention (ESA).
Fig. 5: The structure of the dynamic filter module (DFM). "FC" denotes the fully connected layer and "GAP" denotes global average pooling. "FN" denotes the filter normalization proposed in [32].
where \(\phi_{i}(\cdot)\) represents the \(i\)-th intermediate layer of VGG19 [47]. \(\left\lVert\cdot\right\rVert_{F}\) represents Frobenius norm, \(C\) and \(V\) represent the number of channels and volume of feature maps respectively.
**Adversarial loss.** The generator \(G\) and discriminator \(\mathcal{D}\) improve together in a game against each other, ensuring the model is able to generate output images with pleasing visual effects. The adversarial loss we choose is WGAN [52], which is shown as follows:
\[\mathcal{L}^{adv}=-\mathcal{D}\left(I_{SR}\right). \tag{28}\]
During the training process, the loss of discriminator \(\mathcal{D}\) is shown as follows:
\[\mathcal{L}^{\mathcal{D}}=\mathcal{D}\left(I_{SR}\right)-\mathcal{D}\left(I_{ GT}\right)+\lambda\Big{(}\Big{\|}\nabla_{\tilde{I}}\mathcal{D}\Big{(}\tilde{I} \Big{)}\Big{\|}_{2}-1\Big{)}^{2}, \tag{29}\]
where \(\tilde{I}\) represents a random convex combination of \(I_{HR}\) and \(I_{SR}\), and \(\nabla_{\tilde{I}}\) denotes the gradient with respect to \(\tilde{I}\).
Finally, the total loss function is shown as follows:
\[\mathcal{L}^{all}=\lambda_{1}\mathcal{L}^{rec}+\lambda_{2}\mathcal{L}^{per}+ \lambda_{3}\mathcal{L}^{adv}, \tag{30}\]
where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are respectively the weight coefficients for each loss.
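A hedged sketch of the full objective of Eqs. (26)-(30) follows, with an L1 reconstruction term, a single-layer VGG19 perceptual term, and WGAN-GP adversarial terms; the VGG layer index, the normalization of the perceptual term, and the discriminator interface are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class VGGFeat(nn.Module):
    """Frozen VGG19 slice used for the perceptual loss; the layer index is an assumption."""
    def __init__(self, layer=35):
        super().__init__()
        self.slice = vgg19(weights=VGG19_Weights.DEFAULT).features[:layer].eval()
        for p in self.slice.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.slice(x)

def generator_loss(sr, hr, vgg, disc, w=(1.0, 1e-4, 1e-6)):
    l_rec = (sr - hr).abs().mean()                                  # Eq. 26: L1 reconstruction
    diff = (vgg(hr) - vgg(sr)).flatten(2)                           # (B, C, H*W)
    l_per = diff.norm(dim=2).sum(dim=1).mean() / diff.shape[-1]     # Eq. 27 (normalization assumed)
    l_adv = -disc(sr).mean()                                        # Eq. 28: WGAN generator term
    return w[0] * l_rec + w[1] * l_per + w[2] * l_adv               # Eq. 30 with the stated weights

def discriminator_loss(disc, sr, hr, lam=10.0):
    eps = torch.rand(hr.size(0), 1, 1, 1, device=hr.device)
    inter = (eps * hr + (1 - eps) * sr.detach()).requires_grad_(True)   # random convex combination
    grad = torch.autograd.grad(disc(inter).sum(), inter, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()             # gradient penalty of Eq. 29
    return disc(sr.detach()).mean() - disc(hr).mean() + lam * gp
```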
## IV Experiments
This section first presents the datasets used for training and testing the models. Subsequently, we compare our approach with several super-resolution methods along various aspects. Ablation studies are conducted on the SIFE and DRB components, along with the feature reuse framework. Lastly, we evaluate the efficacy of our proposed approach against other super-resolution methods in a practical setting.
### _Datasets and Metrics_
**Training Dataset.** We use CUFED [5] to train our model, which consists of two parts: a training set and a testing set. The training set contains 11871 pairs of input and reference images, each with a resolution of 160\(\times\)160.
**Testing Dataset.** Our study evaluates the efficiency of our model across five benchmark datasets: CUFED5 [5], Urban100 [54], Manga109 [55], Sun80 [53], and WR-SR [8]. CUFED5 consists of 126 image pairs, each with an input image and five distinct reference images. Urban100 contains 100 images of urban buildings, for which we use the LR image as the reference due to its strong self-similarity. For Manga109, we randomly selected a single reference image from the 109 images. Sun80 is composed of 80 input images, each with 20 reference images, one of which was randomly selected as the reference. WR-SR is similar to CUFED5, but with a one-to-one correspondence between the LR and Ref images, resulting in a total of 80 image pairs. Our metrics for evaluation are PSNR and SSIM calculated on the Y channel in the YCbCr color space.
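For reference, a minimal sketch of PSNR evaluated on the Y channel of YCbCr is given below, following the BT.601 conversion commonly used in super-resolution papers; the exact conversion convention is an assumption.

```python
import numpy as np

def rgb_to_y(img):
    """Luma channel of the YCbCr transform (BT.601 convention), for images in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(sr, hr):
    mse = np.mean((rgb_to_y(sr.astype(np.float64)) - rgb_to_y(hr.astype(np.float64))) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

print(psnr_y(np.random.randint(0, 256, (64, 64, 3)),
             np.random.randint(0, 256, (64, 64, 3))))
```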
**Implementation Details.** To obtain the LR inputs, we downsample the HR images by a scale factor of 4. For data augmentation, we apply horizontal flip, vertical flip, and random rotation. To increase the training difficulty and improve the performance of long-distance feature alignment, we divide the reference images into patches and shuffle them randomly. We use the official RRDB [21] parameters as the pre-trained model for the single image feature embedding module, which we train in two stages. First, we use \(\mathcal{L}^{rec}\) as the only loss function. Second, we use \(\mathcal{L}^{rec}\), \(\mathcal{L}^{per}\), and \(\mathcal{L}^{adv}\) for joint supervision. During the training process, we choose the Adam optimizer and set the \(\beta_{1}\) and \(\beta_{2}\) parameters to 0.99
and 0.999, respectively. We set the initial learning rate of the model to 1e-4 and the batch size to 9. The weights \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) of \(\mathcal{L}^{rec}\), \(\mathcal{L}^{per}\), and \(\mathcal{L}^{adv}\) are set to 1.0, \(10^{-4}\), and \(10^{-6}\), respectively.
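The reference patch shuffling used to lengthen the distance between similar features could be implemented roughly as follows; the 40\(\times\)40 patch size is an assumption, as the text does not specify it.

```python
import torch

def shuffle_ref_patches(ref, patch=40):
    """Randomly permute non-overlapping patches of a Ref image (C, H, W), so that similar
    textures end up farther from their original positions during training."""
    c, h, w = ref.shape
    gh, gw = h // patch, w // patch
    x = ref[:, :gh * patch, :gw * patch].reshape(c, gh, patch, gw, patch)
    x = x.permute(1, 3, 0, 2, 4).reshape(gh * gw, c, patch, patch)
    x = x[torch.randperm(gh * gw)]                                   # shuffle the patches
    x = x.reshape(gh, gw, c, patch, patch).permute(2, 0, 3, 1, 4)
    return x.reshape(c, gh * patch, gw * patch)

shuffled = shuffle_ref_patches(torch.rand(3, 160, 160))
```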
### _Comparison with State-of-the-art Methods_
We conduct quantitative and qualitative comparisons between our proposed method and some existing SISR and RefSR methods. The SISR methods are SRCNN [14], EDSR [16], RCAN [18], ENet [56], SRGAN [20], ESRGAN [21], and RankSRGAN [22]. The RefSR methods are CrossNet [3], SSEN [4], SRNTT [5], TTSR [6], MASA [7], \(C^{2}\)-Matching [8], TADE [13], DATSR [11], and RRSR [28]. We train two sets of parameters, one using only the reconstruction loss (denoted by \(-rec\)), and the other using all losses.
**Quantitative Comparison.** As shown in Table 1, our method achieves state-of-the-art results on four benchmark datasets using only the reconstruction loss. Our method leverages effective texture matching, dynamic texture transfer, and complementary SISR features in the reconstruction process, which enables it to transfer similar textures from the high-resolution reference images in the CUFED5 and WR-SR datasets to the LR images, enhancing their high-frequency information, and to transfer self-features to assist LR reconstruction on the self-similar dataset Urban100. As shown in Table 2, our model outperforms all the other methods on all datasets under the joint supervision of losses, although its performance slightly degrades compared to the results obtained when only using the reconstruction loss. Interestingly, our method still maintains a significant advantage (+0.8dB) over the other RefSR methods, even in the presence of perceptual loss and adversarial loss. The quantitative comparison under the two paradigms demonstrates that our model exhibits a strong generalization ability and achieves optimal performance.
**Qualitative Evaluation.** Fig.6 shows the visual comparison of our model trained only with reconstruction loss and the existing SISR and RefSR methods. It can be clearly seen that RCAN and RRDB have difficulty in reconstructing texture information due to the severe degradation of high-frequency information, especially for some texts, faces, and fine textures. Compared with SISR, RefSR can transfer similar textures from the reference images, thus producing more texture details. Compared with some existing RefSR methods, the adaptive nature of FRFSR allows for the perception and transfer of texture information from the Ref images. Thus, the model is capable of compensating for missing high-frequency details in LR, leading to the reconstruction of images with texture details more closely resembling the ground truth. For example, the second pair of local details on the right indicates that RCAN and RRDB fail to reconstruct any blind texture, and the existing RefSR methods generate some texture details, but the images are very unrealistic and far from the ground truth. Our proposed method can generate a sharper, clearer blind texture that is very close to the ground truth. Another example is the fourth pair of images on the left, where the texture is very fine and challenging for RefSR. As can be seen, our method recovers more texture details than the existing RefSR methods, which struggle to reconstruct this part and generate very few textures. This demonstrates the effectiveness of our texture search and texture-adaptive aggregation methods. Due to the feature reuse framework, FRFSR can preserve more realistic texture information when trained with \(\mathcal{L}^{rec}+\mathcal{L}^{per}+\mathcal{L}^{adv}\), such as the text on the clothes in the first pair of images on the right in Fig.9, and the stone pillar texture in the fourth pair of images on the left. Compared with the other RefSR methods, our method can generate complete text texture and stone pillar texture, reflecting the advantages of the feature reuse framework and our method.
**Comparison of Robustness of Texture Transformations.** Texture transfer robustness is an important criterion for evaluating the performance of RefSR models. As shown in Fig.7, SOTA methods suffer from texture mis-transfer. |
2310.02937 | Emergence of flat bands in the quasicrystal limit of boron nitride
twisted bilayers | We investigate the electronic structure and the optical absorption onset of
close-to-30\degree twisted hexagonal boron nitride bilayers. Our study is
carried out with a purposely developed tight-binding model validated against
DFT simulations. We demonstrate that approaching 30\degree (quasicrystal
limit), all bilayers sharing the same moir\'e supercell develop identical band
structures, irrespective of their stacking sequence. This band structure
features a bundle of flat bands laying slightly above the bottom conduction
state which is responsible for an intense peak at the onset of the absorption
spectrum. These results suggest the presence of strong, stable and
stacking-independent excitons in boron nitride 30\degree-twisted bilayers. By
carefully analyzing the electronic structure and its spatial distribution, we
elucidate the origin of these states as moir\'e-induced K-valley scattering due
to interlayer B$-$B coupling. We take advantage of the the physical
transparency of the tight-binding parameters to derive a simple triangular
model based on the B sublattice that accurately describes the emergence of the
bundle. Being our conclusions very general, we predict that a similar bundle
should emerge in other close-to-30{\degree} bilayers, like transition metal
dichalcogenides, shedding new light on the unique potential of 2D materials. | Lorenzo Sponza, Van Binh Vu, Elisa Serrano Richaud, Hakim Amara, Sylvain Latil | 2023-10-04T16:19:24Z | http://arxiv.org/abs/2310.02937v3 | # Emergence of flat bands in the quasicrystal limit of boron nitride twisted bilayers
###### Abstract
We investigate the electronic structure and the optical absorption onset of hexagonal boron nitride bilayers with twist angles in the vicinity of 30\({}^{\circ}\). Our study is carried out with a purposely developed tight-binding model validated against DFT simulations. We demonstrate that approaching 30\({}^{\circ}\) (quasicrystal limit), all bilayers sharing the same moire supercell develop identical band structures, irrespective of their stacking sequence. This band structure features a bundle of flat bands lying slightly above the bottom conduction state, which is responsible for an intense peak at the onset of independent-particle absorption spectra. These results reveal the presence of strong, stable and stacking-independent optical properties in boron nitride 30\({}^{\circ}\)-twisted bilayers. By carefully analyzing the electronic spatial distribution, we elucidate the origin of these states as due to interlayer B-B coupling. We take advantage of the physical transparency of the tight-binding parameters to derive a simple triangular model based on the B sublattice that accurately describes the emergence of the bundle. Since our conclusions are very general, we predict that a similar bundle should emerge in other close-to-30\({}^{\circ}\) bilayers, like transition metal dichalcogenides, shedding new light on the unique potential of 2D materials.
The unconventional physical properties exhibited by twisted bilayers have led to significant advances, giving rise to the field of twistronics [1; 2; 3; 4]. By stacking 2D atomic layers to form van der Waals heterostructures, a geometric moire superlattice emerges as a result of lattice mismatch and/or a rotational twist [5]. The resulting pattern modulates the potential at the supercell scale and hence changes the electronic band structure, typically through the formation of low-dispersing bands which possibly lead to peculiar properties [6; 7; 8]. Typical examples of moire composites include the pioneering twisted bilayer graphene [9], twisted hexagonal boron nitride (hBN) [2; 10; 11] or hetero- and homobilayers of transition metal dichalcogenides (TMDs) [12].
In semiconducting twisted bilayers, like hBN, the width of the band edge states decreases continuously with the twist angle (there is no magic angle), and the presence of different atomic species generates several stacking possibilities with specific electronic properties, providing an additional degree of freedom with respect to graphene bilayers [13; 14; 15]. More specifically, monolayer hBN has the widest band gap (larger than 7 eV [16; 17; 18; 19]) and thus presents outstanding optical properties [20; 21] with many possible applications [22].
Since the early stages of the research on twist-angle physics, the scientific community has mostly focused on the small twist angle limit. In this context, continuous models [23; 24] or tight-binding (TB) Hamiltonians [25; 26; 27] have been developed and DFT calculations [24; 25] carried out on several 2D material bilayers. Actually, a few works also treat larger twist angles [28; 29; 30], but they are still restricted to an intermediate range spanning approximately between 15\({}^{\circ}\) and 28\({}^{\circ}\). Even fewer works consider rotations equal to 30\({}^{\circ}\) [31; 32] or very close to it [25; 33]. In any case, all these studies focus only on graphene bilayers and mention large angle configurations just among other possible structures, none of them being specifically devoted to the investigation of close-to-30\({}^{\circ}\) twists. Regarding hBN, only the 30\({}^{\circ}\) twisted bilayers have been considered very recently [34]. Even though density functional theory (DFT) calculations suggest that this BN material is a new wide-gap 2D quasicrystal, its electronic and optical properties have never been addressed specifically.
In this Letter, we determine the characteristics of the band structure and optical response for twist angles in the vicinity of 30\({}^{\circ}\) for different stackings of twisted hBN bilayers by means of a dedicated TB model. We demonstrate that all structures develop an identical bundle of flat states just above the bottom of the conduction band. We trace its physical origin and we develop a simple model describing its formation.
To identify the structures uniquely, we use the definitions derived from [35] and introduced in [15], which are briefly recalled in Appendix A. We developed a TB model purposely designed to accurately describe the last occupied and the first empty states. Besides the advantage of being much lighter than DFT calculations, thus enabling supercells with twists much closer to 30\({}^{\circ}\), the TB Hamiltonian also makes it possible to unravel some fundamental mechanisms.
Our TB model is inspired by literature [25; 26; 36] and detailed in Appendix B. A feature worth stressing is the distance-dependent exponential decay of the interlayer hopping terms, whose prefactor \(\gamma^{XY}\) gives a measure of the interlayer coupling strength, with \(XY\) labelling the pairings \(BN\), \(BB\) and \(NN\). Our TB model differs from other twist-angle models [37; 38; 14; 27] in being purposely derived and parametrized to tackle the close-to-30\({}^{\circ}\) limit.
The quality of the parametrization can be appreciated in Figure 1, where we report the top valence, the bottom conduction, and independent-particle optical spectra of some chosen bilayers computed both with our TB model and ab-initio methods. Ab-initio simulation details are reported in Appendix C. We present results from the BB(1,3), BN(1,3) and BN(3,8) bilayers, chosen as paradigmatic examples because of specific characteristics of their band structures; we checked that the agreement is equally good in the other stackings (cf. Figure 5 in Appendix B).
Our parametrization reproduces the general dispersion of DFT bands and the gap width. Particular attention has been paid to describing the bottom of the conduction band. Let us first consider panels (\(a\)) and (\(b\)), i.e. the two (1,3) supercells. DFT predicts the formation of a rather flat dispersion in the M-K region in both systems. Actually, the two bands avoid each other in the BB(1,3), even though the splitting is extremely small. Instead, they cross at K in the BN(1,3), consistently with what is simulated at smaller angles [15]. Our TB model captures these features very well, although the splitting in the BB stacking is somewhat overestimated. At a larger angle, like in the BN(3,8) bilayer, the agreement is even better, as panel (\(c\)) exemplifies well. In particular, the model reproduces DFT remarkably well in predicting the emergence of a group of densely packed and low-dispersing bands concentrated between 4.37 eV and 4.46 eV, that we will call the 'bundle' of flat states, highlighted by an olive green side bar in Figure 1c). It is useful to split the conduction bands into a lower energy region (blue bar in Figure 1c) that we will call 'shallow conduction', and a higher energy region where bands are particularly flat, called the 'deep bundle' region (orange bar in Figure 1c). All energy intervals are given with respect to the top of the valence band.
We also evaluated the imaginary part of the independent-particle dielectric function \(\varepsilon(\omega)\) in the same three systems ab-initio and with our TB model. Details on the calculation can be found in Appendix B, and the results are reported in panel (\(d\)) of Figure 1. Because of the large size of the (3,8) supercell (388 atoms), the ab-initio spectrum is computed only at the \(\Gamma\) point, and the same is done in TB for the sake of comparison. Both methods predict a well-detached peak at 4.5 eV, corresponding to transitions towards the bottom conduction states. Differences between the stackings are negligible, indicating that not only the band structure but also the wavefunctions are remarkably similar.
Having validated the TB model, we extend our investigation to twist angles closer to 30\({}^{\circ}\) and systems hardly attainable with DFT. In Figure 2, panels (\(a\)) to (\(f\)), we report the bottom conduction states of the BB and BN stackings in the (5,13), (4,11) and (11,30) supercells, corresponding to twist angles ranging from 28.78\({}^{\circ}\) to 29.96\({}^{\circ}\). The tendency observed already in the (1,3) and (3,8) supercells is here confirmed and strengthened: the stackings present basically the same band structure at fixed supercell. This is actually true for all five stackings, as we assess in Fig. 7 in Appendix E. The indistinguishability of the stacking sequence when approaching a 30\({}^{\circ}\) twist can be explained by the fact that, in this limit, the BN bilayer approaches a quasicrystal without any translation symmetry. As a consequence, all local configurations are realized somewhere in the bilayer and a sort of self-similarity arises, such that in each supercell one can find approximate replicas of smaller cells of all the five stackings. We will encounter another manifestation of this property later on. Actually, this is true for any homobilayer formed of hexagonal monolayers, so we expect a similar behavior to occur also in close-to-30\({}^{\circ}\) twisted graphene, TMDs, silicene and many of the most popular 2D materials.
More interestingly, we observe a bundle of flat states forming in the conduction band in all structures at all angles, comprised in a narrow interval (about 100 meV) centered around 4.40 eV. This characteristic appears to be a very robust feature of all hBN bilayers approaching a 30\({}^{\circ}\) twist. Such behaviour is in contrast to small-angle twisted hBN bilayers, where one or more single states form directly in the gap, clearly separated in energy by about 0.1 eV [2; 10; 13].
Figure 1: (\(a,b,c\)): Conduction bands of BB(1,3), BN(1,3) and BN(3,8) bilayers from left to right in TB (black solid) and DFT (red dashed). The top valence of all structures has been aligned to 0.0 eV. In (\(c\)), thick bars on the canvas highlight notable energy intervals. Olive green: the bundle states (4.37 eV to 4.46 eV); orange: the deep bundle states (4.19 eV to 4.46 eV); blue: shallow conduction states below 4.19 eV. (\(d\)): Onset of independent-particle absorption spectra of the same systems. The BN(3,8) spectra are computed only at the \(\Gamma\) point. All spectra have been broadened with a Lorentzian with variance 0.1 eV.
It is particularly worthwhile to study the impact of these flat states on the absorption properties. We used TB to compute the optical response in the (5,13) and (4,11) supercells, reported in Figure 2, panels (\(g\)) and (\(h\)). As expected, the BN and the BB stackings present very small differences that are further washed out as the twist angle approaches 30\({}^{\circ}\). Spectral onsets are dominated by the same intense and well-detached peak observed in Figure 1d). To gain insight into its origin, we recomputed the spectra including only transitions toward the bundle (4.34 eV to 4.48 eV) and recovered essentially the same signal. This is a strong indication that, despite not being the lowest empty bands, the bundle states are solely responsible for the absorption onset. Given the intensity of the onset at the independent-particle level and the low dispersion of the conduction states involved, we predict that hBN bilayers twisted at angles close to 30\({}^{\circ}\) will display exceptionally strong, robust and localised electron-hole excitations.
To go further in the analysis, we look at the spatial distribution of the conduction states by evaluating the local density of states (L-DOS) in intervals corresponding to the deep bundle energy and the shallow conduction energy highlighted in Figure 1c). We show results in the upper layer of the (3,8) supercells, because pictures are easier to read than in larger structures. In Figure 2, the radii of the blue circles in panels (\(i\)) and (\(j\)) are proportional to the L-DOS in the upper layer of the BB(3,8) and BN(3,8), respectively. Unsurprisingly, no DOS is centered on N sites since they contribute only to valence states [19]. Despite the resemblance of both band structures and optical spectra, the two stackings develop quite different patterns. This may look contradictory at first sight, but actually these patterns hide fascinating similarities. Inside each structure, one can find infinite rearrangements of smaller cell approximants of all the five stackings which repeat themselves in a kind of self-similar scheme. The L-DOS patterns of Figure 2 arise from a sort of frustrated interference between these lower-order configurations. Examples of this are presented in Appendix F, but further studies go beyond the scope of this article. From a structural analysis, the L-DOS appears to be stronger on sites where B atoms of the two layers are almost vertically aligned. To better highlight this feature, we evaluate at each B site \(j_{B}\) the coincidence function \(\mathcal{C}_{j_{B}}(\mathbf{d})=1-d_{xy}/D\), where \(D=1.452\) Γ
 is the in-plane interatomic distance and \(d_{xy}\) is the in-plane component of the vector \(\mathbf{d}\) connecting the site \(j_{B}\) of one layer to the closest B site of the other layer. \(\mathcal{C}_{j_{B}}(\mathbf{d})\) varies linearly between 1, where two B atoms are perfectly stacked on top of each other, and 0, where a B atom of one layer coincides with a N or a hexagon center of the other layer. The green circles of Figure 2(\(i\)) and (\(j\)) have radii proportional to \(\mathcal{C}_{j_{B}}(\mathbf{d})\) for all B sites of the upper layer for which \(\mathcal{C}_{j_{B}}(\mathbf{d})>\kappa\), where \(\kappa=0.8\) in BB(3,8) and \(\kappa=0.73\) in BN(3,8). The resemblance between the high-coincidence patterns and the shallow conduction L-DOS is striking. The same can be shown for the "deep bundle" states and low coincidence patterns (cf. Appendix E). This analysis reveals that all the lowest conduction bands come from B-B interlayer states, and that the more perfect the vertical B-B coincidence, the stronger the coupling and hence the lower the energy of the corresponding empty state. We actually verified that these are bonding states by checking that the TB coefficients of coinciding and quasi-coinciding sites have opposite sign in the two layers.
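A minimal sketch of the coincidence function \(\mathcal{C}_{j_{B}}(\mathbf{d})\) is given below; the input arrays of in-plane B coordinates are assumed, and periodic images are ignored for brevity.

```python
import numpy as np

D_IN_PLANE = 1.452   # in-plane interatomic distance D (angstrom)

def coincidence(b_upper_xy, b_lower_xy):
    """C_jB = 1 - d_xy / D for every B site of the upper layer, with d_xy the in-plane distance
    to the closest B site of the lower layer (periodic images are ignored for brevity)."""
    d_xy = np.linalg.norm(b_upper_xy[:, None, :] - b_lower_xy[None, :, :], axis=-1).min(axis=1)
    return 1.0 - d_xy / D_IN_PLANE

c = coincidence(np.random.rand(10, 2) * 5.0, np.random.rand(12, 2) * 5.0)
high_coincidence = c > 0.8   # kappa = 0.8, as used for the BB(3,8) plot
```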
Figure 2: \((a-c)\): Conduction bands of the BB(5,13), BB(4,11) and BB(11,30) bilayers, respectively. \((d-f)\): Same as \((a-c)\) in the BN stacking. The top valence of all structures has been aligned to 0.0 eV. \((g,h)\): Independent-particle absorption spectra of the BB(5,13) and BB(4,11) (red solid curves) and BN(5,13) and BN(4,11) (black dashed) obtained including all empty and occupied states (full). Shaded and patterned areas correspond to spectra obtained by restricting accessible empty states between 4.34 eV and 4.48 eV. \((i,j)\): Blue circles: Radius proportional to the L-DOS in the shallow conduction interval. Green circles: Radius proportional to \(\mathcal{C}_{j_{B}}(\mathbf{d})\) wherever it is higher than \(\kappa\). See main text for details. Data are shown only for the upper layers of the BB(3,8) \((i)\) and BN(3,8) systems \((j)\).
Inspired by this analysis, we use the TB model to study the influence of the B-B interlayer hopping on the formation of the bundle states. We calculate the band structure of the BN(3,8) bilayer including all parameters of our TB model except for \(\gamma^{BB}\). The resulting states are reported in Figure 3a) and are basically indistinguishable from those of a monolayer, with no flat band. If we now increase the interlayer B-B coupling, setting \(\gamma^{BB}=1.225\) eV (50%) (panel b), then localized states begin to form, until a bundle of completely flat bands is constituted at \(\gamma^{BB}=2.45\) eV (100%) in panel c. Besides confirming the results obtained with \(\mathcal{C}_{j_{B}}(\mathbf{d})\), namely that interlayer B-B coupling is at the origin of the bundle states, this demonstrates that the interlayer coupling is due solely to the B-B interactions. The weakness of the interlayer N-N coupling explains why there is no such feature in the valence band, contrary to what happens in graphene twisted bilayers where conduction-conduction and valence-valence couplings are equivalent [40].
We then devise an even simpler TB model which describes the formation of the bundle states. We select only the B sublattices, obtaining a structure formed of two triangular lattices where only B-B interactions are taken into account. Details of the model and its relation to the full honeycomb model are reported in Appendix D. The conduction band structure of this simplified triangular model is reported in Figures 3f and 3g, respectively, for vanishing and non-vanishing interlayer coupling. The model reproduces the isolated honeycomb monolayer at no coupling and gives rise to a bundle of flat bands at full coupling, clearly demonstrating the key role of the B-B interlayer interaction in localizing the electrons in high-angle twisted boron-nitride bilayers. This model allows us to unravel a fundamental mechanism common to other large-angle twisted bilayers. We deem it probable that a similar bundle of flat states will emerge in the valence band of close-to-30\({}^{\circ}\) twisted TMDs. As well discussed in [24], the interlayer coupling is mostly due to the \(p_{z}\) states of the chalcogens, which contribute essentially to the top of the valence band.
To conclude, we have investigated the electronic and optical properties of hBN bilayers at twist angles close to 30\({}^{\circ}\) by means of a purposely developed TB model. We have demonstrated that at twist angles close to 30\({}^{\circ}\) all hBN bilayers develop the same electronic properties, irrespective of the stacking sequence. This is characterised by the emergence of a bundle of low-dispersing states right above the bottom of the conduction band, which are responsible for an intense and robust peak at the onset of the absorption spectrum, resulting from a strong coupling between B atoms belonging to different layers. We captured this fundamental mechanism with a very simple triangular-lattice TB model which can be applied to many other twisted bilayers (e.g. homobilayers of TMDs). Our results suggest that 30\({}^{\circ}\)-twisted BN bilayers may host extremely strong excitonic phenomena originating from the bundle of flat bands and independent of the stacking sequence. Moreover, the indistinguishability of the band structure with respect to the stacking sequence is expected to be a ubiquitous characteristic in the quasicrystal limit, and to occur in twisted bilayers of other 2D materials including TMDs, antimonene, silicene, transition metal monochalcogenides, and all homostructures formed of hexagonal single-layers.
The authors acknowledge funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 881603 (Graphene Flagship core 3) and from public grants overseen by the French National Research Agency (ANR) as part of the 'Investissements d'Avenir' program (Labex NanoSaclay, reference: ANR-10-LABX-0035) and under the EXCIPLINT project (Grant No. ANR-21-CE09-0016).
Figure 3: (\(a\)): TB conduction band of the hBN monolayer in the (3,8) supercell (dashed red) and of the BB(3,8) bilayer with \(\gamma^{BB}=0\) eV. (\(b\) and \(c\)): The same bilayer with \(\gamma^{BB}\) equal to 50% and 100% of the correct value. (\(d\)): Ball and stick model of the honeycomb lattice twisted bilayer. Dashed magenta line: the \(\gamma^{BB}\) interlayer coupling. Red shaded area: the isolated monolayer model. (\(e\)) Ball and stick model of the triangular lattice twisted bilayer made only of the B sites. Dashed magenta line: the \(\gamma\) interlayer coupling. (\(f\) and \(g\)): conduction bands of the triangular model with \(\gamma\)= 0 eV and 1.715 eV respectively. The top valence of all structures has been aligned to 0.0 eV.
## Appendix A Structural definitions and nomenclature
Here we report the structural details to reproduce our structures and recall the definitions and nomenclature[15] most relevant for the current work.
The cell parameter of the honeycomb lattice is 2.54 Γ
. Every BN bilayer with hexagonal symmetry can be identified uniquely by a stacking label and a couple of integers \((q,p)\) defining the moirΓ© supercell. These indices define two matrices that, once applied to the monolayer unitary vectors, generate the supercell of the lower layer and the twisted supercell of the top layer. Since the number of primitive cells required to span the supercell is
\[\Omega=p^{2}+q^{2}+pq \tag{1}\]
the total number of atoms in the bilayer supercell is \(4\Omega\).
Moreover, there exist only five hexagonal stacking sequences for each \((q,p)\)-pair. The corresponding five stacking labels take their name from the pair of atoms placed exactly on top of each other on the three high-symmetry points of the hexagonal supercell. These are the origin (0, 0), the point (1/3, 1/3) and the point (2/3, 2/3). The five stackings divide into single-coincidence stackings (labelled BB, BN and NN) and double coincidence stackings (labelled BBNN and BNNB). As an example, we report in Figure 4 all five stackings of the (1,3) supercell. If we take, without loss of generality, \(p>q\) then the rotation of the top layer with respect to the bottom layer will be given either by an angle \(\theta\) (in the single-coincidence systems, or "atom-on-hexagon" geometries) or \(-\theta^{\prime}\) (in the double-coincidence case, or "hexagon-on-hexagon" geometries) with \(\theta^{\prime}=\pi/3-\theta\). Actually, both twist angles are derived from \(p\) and \(q\) with specific formulae[15], namely:
\[\tan\theta =\sqrt{3}\frac{p^{2}-q^{2}}{p^{2}+q^{2}+4pq} \tag{2}\] \[\tan\theta^{\prime} =\sqrt{3}\frac{q^{2}+2pq}{2p^{2}-q^{2}+2pq}\]
Our work sheds light on supercells with angles in the vicinity of \(30^{\circ}\). Therefore, from (2), we have to choose integer \((q,p)\)-pairs that approximate
\[p\simeq q(1+\sqrt{3}) \tag{3}\]
so that \(\theta\) and \(\theta^{\prime}\) tend to \(30^{\circ}\) asymptotically. The best set of approximants of equation (3) are listed in Table 1.
## Appendix B Tight-binding model
Since the hexagonal BN monolayer band structure is easily obtained with a first-neighbor tight-binding Hamiltonian, our basis consists of the \(p_{z}\) orbitals of B and N. The intralayer part of the Hamiltonian relies on three parameters: the on-site energies and the first-neighbor in-plane hopping
\[\varepsilon_{B}=4.90\ \mathrm{eV}\,,\quad\varepsilon_{N}=0.00\ \mathrm{eV}\,,\quad t_{ \parallel}=-2.65\ \mathrm{eV}.\]
Our rule for defining the interlayer hopping integrals is strongly inspired by a model originally developed for graphene moire bilayers [25; 36] and successively extended to twisted bilayer MoS\({}_{2}\)[26]. In this model, the matrix elements between two \(p_{z}\) orbitals separated by vector \(\mathbf{d}\) obey the Slater-Koster angular dependence like
\[t_{\perp}(\mathbf{d})=n^{2}V_{pp\sigma}(d)+(1-n^{2})V_{pp\pi}(d)\,,\]
| \((q,p)\) | \(N_{\mathrm{atoms}}\) | \(\theta\) | \(\theta^{\prime}\) |
| --- | --- | --- | --- |
| **(1,3)** | **52** | **32.20\({}^{\circ}\)** | **27.80\({}^{\circ}\)** |
| **(5,13)** | **1036** | **28.78\({}^{\circ}\)** | **31.22\({}^{\circ}\)** |
| (11,28) | 4852 | 28.25\({}^{\circ}\) | 31.75\({}^{\circ}\) |
| (6,17) | 1708 | 30.87\({}^{\circ}\) | 29.13\({}^{\circ}\) |
| **(3,8)** | **388** | **29.41\({}^{\circ}\)** | **30.59\({}^{\circ}\)** |
| (9,25) | 3724 | 30.40\({}^{\circ}\) | 29.60\({}^{\circ}\) |
| **(4,11)** | **724** | **30.16\({}^{\circ}\)** | **29.84\({}^{\circ}\)** |
| **(11,30)** | **5404** | **29.96\({}^{\circ}\)** | **30.04\({}^{\circ}\)** |

Table 1: The set of the best \((q,p)\)-pairs that approximate equation (3), ordered from top to bottom by decreasing distance to \(30^{\circ}\). The list is limited to structures containing 5404 atoms. Bold: systems discussed in the main text.
Figure 4: The five hexagonal stackings in the (1,3) moirΓ© supercell. Red circles highlight the coincidence sites. A red dashed line separates the double-coincidence stackings with twist angle \(-\theta^{\prime}\) (top section) from the single-coincidence ones with twist angle \(\theta\) (bottom section).
where \(n=d_{z}/d\) is the vertical direction cosine. The \(V_{pp\sigma}\) and \(V_{pp\pi}\) bond integrals follow an exponential form. Unlike the model cited above [25], we firstly restrict this rule to interlayer matrix elements only, hence between \(p_{z}\) orbitals belonging to different layers. Secondly, we include only the \(\sigma\) component in the interlayer hopping, which finally takes the form
\[t_{\perp}^{XY}(\mathbf{d})=n^{2}\gamma^{XY}F_{c}^{XY}(d)\exp\left[Q_{XY}(a_{ \perp}-d)\right] \tag{4}\]
with \(a_{\perp}\) being the interlayer distance [15]
\[a_{\perp}=3.22\ \text{\AA}\]
and \(XY\) labelling the pairings \(BN\), \(BB\) or \(NN\). The values of the \(\gamma^{XY}\) and \(Q_{XY}\) parameters are reported in Table 2. In (4), the function \(F_{c}^{XY}\) is the smooth cutoff function defined in reference [25] as
\[F_{c}^{XY}(d)=\left\{1+\exp\left[(d-r_{c}^{XY})/l_{c}\right]\right\}^{-1}\,, \tag{5}\]
where \(l_{c}=0.265\ \text{\AA}\) and \(r_{c}^{XY}\) is the selected cutoff, that depends on the value of \(Q_{XY}\) according to the relation
\[r_{c}^{XY}=a_{\perp}+\frac{\ln(10^{3})}{Q_{XY}}\,.\]
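For reference, equations (4)-(5) with the parameters of Table 2 can be evaluated directly; the short Python sketch below is our own restatement of the formulas (for two atoms stacked exactly on top of each other at distance \(a_{\perp}\) it returns essentially the bare prefactor \(\gamma^{XY}\), since the cutoff factor is then very close to unity).

```python
import numpy as np

A_PERP = 3.22    # interlayer distance (angstrom)
L_C = 0.265      # smoothing length of the cutoff function (angstrom)
PARAMS = {"BB": (2.45, 3.0), "BN": (0.75, 2.0), "NN": (0.32, 1.6)}  # gamma (eV), Q (1/angstrom)

def t_perp(d_vec, pair):
    """Interlayer hopping (eV) between two p_z orbitals separated by d_vec, eqs. (4)-(5)."""
    gamma, Q = PARAMS[pair]
    d = np.linalg.norm(d_vec)
    n2 = (d_vec[2] / d) ** 2                       # squared vertical direction cosine
    r_c = A_PERP + np.log(1.0e3) / Q               # cutoff radius
    f_c = 1.0 / (1.0 + np.exp((d - r_c) / L_C))    # smooth cutoff function, eq. (5)
    return n2 * gamma * f_c * np.exp(Q * (A_PERP - d))

print(t_perp(np.array([0.0, 0.0, A_PERP]), "BB"))  # ~2.45 eV, up to the cutoff factor
```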
The performance of this model on the top valence and bottom conduction bands of all the five stackings in the (3,8) supercell can be appreciated in Figure 5.
The imaginary part of the independent-particle transverse dielectric function \(\varepsilon(\omega)\) is obtained at first order expansion in the coupling with the vector potential \(\mathbf{A}(\mathbf{r},t)\)
| prefactor | value (eV) | decay | value (Å\({}^{-1}\)) |
| :---: | :---: | :---: | :---: |
| \(\gamma^{BB}\) | 2.45 | \(Q_{BB}\) | 3.0 |
| \(\gamma^{BN}\) | 0.75 | \(Q_{BN}\) | 2.0 |
| \(\gamma^{NN}\) | 0.32 | \(Q_{NN}\) | 1.6 |

Table 2: Set of parameters for the interlayer hopping.
Figure 5: Top valence (bottom panels) and bottom conduction states (top panels) of the (3,8) supercell in the BNNB, BN, NN, BB and BBNN stackings from left to right. Colored solid curves are from our tight-binding model, dotted grey curves from DFT.
through the formula [41]:
\[\varepsilon(\omega)=\frac{e^{2}\pi}{m_{0}^{2}\varepsilon_{0}\omega^{2}}\sum_{{ \bf k},m,\mu}\left|{\rm v}_{{\bf k}\mu m}(\hat{\mathbf{e}})\right|^{2}\delta(E_{{ \bf k}\mu}-E_{{\bf k}m}-\hbar\omega)\,, \tag{6}\]
where \(e\) is the elementary charge, \(m_{0}\) is the electron mass and \(\varepsilon_{0}\) is the vacuum permittivity. In actual implementations, the delta function appearing in (6) is replaced by a Lorentzian distribution whose width has been fixed to 0.1 eV. Expression (6) describes the absorption of a photon with energy \(\hbar\omega\) and polarization vector \(\hat{\bf e}\) and the resulting promotion of an electron from the valence state \(\left|m,{\bf k}\right>\) of energy \(E_{{\bf k}m}\) to the conduction state \(\left|\mu,{\bf k}\right>\) of energy \(E_{{\bf k}\mu}\). The velocity matrix element \({\rm v}_{{\bf k}\mu m}=\left<\mu,{\bf k}\right|\hat{\bf e}\cdot\hat{\bf v}\left| m,{\bf k}\right>\) is obtained from the eigenstates of the operator \(\hbar\hat{\bf v}=i[\hat{\bf H},\hat{\bf r}]\).
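The following Python sketch illustrates how the imaginary part of (6) can be evaluated with the 0.1 eV Lorentzian broadening mentioned above. It is our own schematic illustration, not the actual implementation: the constant prefactor of (6) is left as a free parameter and the sums over \(\mathbf{k}\), \(m\) and \(\mu\) are flattened into plain lists of transitions.

```python
import numpy as np

def im_epsilon(omega, e_valence, e_conduction, v_matrix_elements, prefactor=1.0, eta=0.1):
    """Imaginary part of eq. (6), with a Lorentzian of width eta (eV) replacing the delta.

    e_valence, e_conduction, v_matrix_elements: one entry per (k, m, mu) transition.
    """
    omega = np.asarray(omega, dtype=float)
    eps = np.zeros_like(omega)
    for E_m, E_mu, v in zip(e_valence, e_conduction, v_matrix_elements):
        dE = E_mu - E_m
        eps += np.abs(v) ** 2 * (eta / np.pi) / ((dE - omega) ** 2 + eta ** 2)
    return prefactor * eps / omega ** 2

# toy usage: two transitions at 5.0 and 5.5 eV with unit velocity matrix elements
w = np.linspace(4.0, 7.0, 301)
spectrum = im_epsilon(w, [0.0, 0.0], [5.0, 5.5], [1.0, 1.0])
```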
## Appendix C _Ab-initio_ calculation details
The DFT simulations have been carried out with the free simulation package Quantum ESPRESSO [42; 43]. We used norm-conserving pseudopotentials, a cutoff energy of 60.0 Ry for the wavefunctions and of 240 Ry for the charge. The exchange-correlation potential has been approximated with the Perdew-Burke-Ernzerhof model [44]. The Brillouin zone has been sampled with shifted Monkhorst-Pack [45] grids of \(5\times 5\) k-points in the \(xy\) plane in the (1,3) supercells and \(3\times 3\) in the (3,8) supercell.
_Ab-initio_ independent-particle absorption spectra have been calculated using the free simulation package Yambo [46; 47]. We sampled the Brillouin zone of the (1,3) supercells with a shifted Monkhorst-Pack [45] grid of \(4\times 4\) k-points and included 400 bands in the sum over states. Note that this number of bands is actually much higher than what is required for the absorption onset alone. In the BB(3,8) calculation, because of the larger size of the calculation, which comprises 1552 electrons, we included bands with indices ranging between 700 and 800 and computed the sum over states only at the \(\Gamma\) point. In both cases we truncated the Coulomb interaction along the \(z\) axis using the analytic formulation as implemented in Yambo [46; 47].
## Appendix D Details of the triangular model
In our notation, the monolayer conduction band within the BN primitive cell writes
\[\begin{split} E_{c}({\bf k})&=\varepsilon_{B}/2+\sqrt{\varepsilon_{B}^{2}/4+t_{\parallel}^{2}\left|f({\bf k})\right|^{2}} \\ f({\bf k})&=\sum_{j=1,2,3}\exp\left(i{\bf k} \cdot{\mathbf{\tau}}_{j}\right)\,.\end{split} \tag{7}\]
As shown in a previous work [19], in the vicinity of the gap, the conduction eigenstates are mainly localised on boron sites, which is related to the first term expansion
\[E_{c}({\bf k})\simeq\varepsilon_{B}+\frac{t_{\parallel}^{2}}{\varepsilon_{B} }\left|f({\bf k})\right|^{2}\,. \tag{8}\]
Actually, expression (8) is identical to the unique eigenvalue of a first-nearest-neighbour TB model defined on the triangular lattice formed by the boron sites. The B\(-\)B hopping integral and the on-site energy of such a triangular-lattice model read, respectively,
\[\begin{split} t_{\triangle}&=\frac{t_{\parallel}^{ 2}}{\varepsilon_{B}}\quad\text{and}\\ E_{\triangle}&=\varepsilon_{B}+3t_{\triangle}\,. \end{split} \tag{9}\]
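The mapping (9) and the accuracy of the expansion (8) near the band edge are easy to check numerically; the Python lines below are our own verification, parametrised by \(|f(\mathbf{k})|^{2}\), which on the honeycomb lattice ranges from 0 (at K) to 9 (at \(\Gamma\)).

```python
import numpy as np

EPS_B, T_PAR = 4.90, -2.65           # on-site energy and in-plane hopping (eV)
t_tri = T_PAR**2 / EPS_B             # eq. (9): effective triangular-lattice hopping
E_tri = EPS_B + 3.0 * t_tri          # eq. (9): effective triangular-lattice on-site energy
print(f"t_triangle = {t_tri:.3f} eV, E_triangle = {E_tri:.3f} eV")

# exact conduction band (7) versus the expansion (8), close to the band edge (small |f(k)|^2)
f2 = np.linspace(0.0, 1.0, 200)
exact = EPS_B / 2.0 + np.sqrt(EPS_B**2 / 4.0 + T_PAR**2 * f2)
approx = EPS_B + t_tri * f2
print(f"largest deviation near the band edge: {np.max(np.abs(exact - approx)):.2f} eV")
```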
Finally, the interlayer hopping integrals (between B atoms only) follow the same construction rule as that for the honeycomb calculations, defined in (4). The numerical value of \(\gamma=1.175\) eV indicated in the main manuscript has been set by comparison with the twisted honeycomb TB model (cf. Figure 6).
## Appendix E Band structure and L-DOS of all stackings approaching 30\({}^{\circ}\)
In this section we provide additional TB results on the large-angle limit. In Figure 7 we present the TB conduction bands of all five stackings in the (5,13), the (4,11) and the (11,30) supercells. In Figure 8 we report the L-DOS and the coincidence map in the upper layers of the BB(4,11) and the BN(4,11) bilayers. As in the main text, we divide the L-DOS into the density of the 'shallow conduction' states (blue circles), whose energies lie below 4.19 eV, and that of the 'deep bundle' states (orange circles), with energies between 4.19 eV and 4.46 eV, as in the (3,8) supercells. Analogously, the coincidence map is split into high-coincidence and low-coincidence sites, the separation being fixed at \(\kappa=0.67\).
Figure 6: The triangular model used to approximate the conduction bands in Figure 4 of the main manuscript.
## Appendix F About the self-similar repeating patterns
In this section we illustrate with some images the presence of self-similar patterns repeating inside each bilayer. We show that the moiré supercell of the approximant of a given stacking contains the lower-order approximants of all the stackings. In effect, by construction a given approximant cannot be obtained as a simple repetition of lower-size approximants. As a consequence, it is impossible to tile perfectly a given approximant with a lower-order one. So the inclusion and the repetition of the smaller cells within a bigger one is not perfect, which gives rise to a sort of frustration. The interference resulting from the superposition of this frustrated self-similar repetition of the lower-order approximants of all stackings is at the origin of the L-DOS patterns of the BB(3,8) and BN(3,8) reported in the main text, and of the BB(4,11) and BN(4,11) reported here in Figure 8. As an illustration of this intriguing phenomenon, we report in Figure 9 four supercells of the BB(3,8) and the BB(4,11) bilayers. In the former (top panel), we highlight with different colours local configurations that are similar to the five stackings of the (1,3) supercell. In the latter (bottom panel), we highlight local replicas of the five stackings of the (3,8) supercell using the same colour code. To help the identification, we added circles in the regions where the characteristic coincidence is realised. The scheme can be repeated again and again at all scales. We give the same kind of representation in Figure 10 for the BN stacking sequence.
Figure 7: From left to right: The tight-binding conduction bands of the BB, the BN, the BBNN, the BNNB and the NN stacking sequences. From top to bottom: Supercells (5,13), (4,11) and (11,30) with twist angles approaching \(30^{\circ}\). The twist angle is either \(\theta\) in single-coincidence stackings or \(-\theta^{\prime}\) in double coincidence ones. |
2307.01629 | The Gaia alerted fading of the FUor-type star Gaia21elv | FU Orionis objects (FUors) are eruptive young stars, which exhibit outbursts
that last from decades to a century. Due to the duration of their outbursts,
and to the fact that only about two dozens of such sources are known,
information on the end of their outbursts is limited. Here we analyse follow-up
photometry and spectroscopy of Gaia21elv, a young stellar object, which had a
several decades long outburst. It was reported as a Gaia science alert due to
its recent fading by more than a magnitude. To study the fading of the source
and look for signatures characteristic of FUors, we have obtained follow-up
near infrared (NIR) spectra using Gemini South/IGRINS, and both optical and NIR
spectra using VLT/X-SHOOTER. The spectra at both epochs show typical FUor
signatures, such as a triangular shaped $H$-band continuum, absorption-line
dominated spectrum, and P Cygni profiles. In addition to the typical FUor
signatures, [OI], [FeII], and [SII] were detected, suggesting the presence of a
jet or disk wind. Fitting the spectral energy distributions with an accretion
disc model suggests a decrease of the accretion rate between the brightest and
faintest states. The rapid fading of the source in 2021 was most likely
dominated by an increase of circumstellar extinction. The spectroscopy
presented here confirms that Gaia21elv is a classical FUor, the third such
object discovered among the Gaia science alerts. | ZsΓ³fia Nagy, Sunkyung Park, PΓ©ter ΓbrahΓ‘m, Γgnes KΓ³spΓ‘l, Fernando Cruz-SΓ‘enz de Miera, MΓ‘ria Kun, MichaΕ Siwak, ZsΓ³fia Marianna SzabΓ³, MΓ‘tΓ© SzilΓ‘gyi, Eleonora Fiorellino, Teresa Giannini, Jae-Joon Lee, Jeong-Eun Lee, GΓ‘bor Marton, LΓ‘szlΓ³ Szabados, Fabrizio Vitali, Jan Andrzejewski, Mariusz Gromadzki, Simon Hodgkin, Maja JabΕoΕska, Rene A. Mendez, Jaroslav Merc, Olga Michniewicz, PrzemysΕaw J. MikoΕajczyk, Uliana Pylypenko, Milena Ratajczak, Εukasz Wyrzykowski, Michal Zejmo, PaweΕ ZieliΕski | 2023-07-04T10:26:04Z | http://arxiv.org/abs/2307.01629v1 | # The _Gaia_ alerted fading of the FUor-type star Gaia21elv
###### Abstract
FU Orionis objects (FUors) are eruptive young stars, which exhibit outbursts that last from decades to a century. Due to the duration of their outbursts, and to the fact that only about two dozens of such sources are known, information on the end of their outbursts is limited. Here we analyse follow-up photometry and spectroscopy of Gaia21elv, a young stellar object, which had a several decades long outburst. It was reported as a _Gaia_ science alert due to its recent fading by more than a magnitude. To study the fading of the source and look for signatures characteristic of FUors, we have obtained follow-up near infrared (NIR) spectra using Gemini South/IGRINS, and both optical and NIR spectra using VLT/X-SHOOTER. The spectra at both epochs show typical FUor signatures, such as a triangular shaped \(H\)-band continuum, absorption-line dominated spectrum, and P Cygni profiles. In addition to the typical FUor signatures, [O i], [Fe ii], and [S ii] were detected, suggesting the presence of a jet or disk wind. Fitting the spectral energy distributions with an accretion disc model suggests a decrease of the accretion rate between the brightest and faintest states. The rapid fading of the source in 2021 was most likely dominated by an increase of circumstellar extinction. The spectroscopy presented here confirms that Gaia21elv is a classical FUor, the third such object discovered among the _Gaia_ science alerts.
keywords: Stars: variables: T Tauri - stars: pre-main sequence
## 1 Introduction
Studying the accretion in young stellar objects (YSOs) is important to understand their formation. Most of what we know about accretion in YSOs is based on the magnetospheric accretion scenario, according to which the material accretes onto the forming star from the infalling envelope through the disk, by following the magnetospheric lines (Hartmann et al., 2016). The accretion rates of YSOs are known to be highly variable, with extreme cases of eruptive YSOs, which experience outburst events, when their luminosity increases
up to two orders of magnitude. These events are detected as a 2-5 mag brightening in optical and near-infrared (NIR) bands. During the outbursts the mass accretion rate can increase from \(\sim\)10\({}^{-8}\)\(M_{\odot}\) yr\({}^{-1}\) in quiescence to \(\sim\)10\({}^{-4}\)\(M_{\odot}\) yr\({}^{-1}\) (Audard et al., 2014, Fischer et al., 2022). Studies with large samples of objects indicate that young stars experience these events once every \(10^{3}-10^{4}\) years (e.g. Fischer et al., 2019). Episodic accretion is one of the possible explanations for the observed large luminosity spread of young stellar objects (Fischer et al., 2022). FU Orionis objects (FUors) are well-studied examples of episodic accretion (Hartmann and Kenyon, 1996). FUors are low-mass (\(<2\)\(M_{\odot}\)) eruptive YSOs that exhibit large-amplitude (\(>\)4 mag) outbursts at optical and infrared wavelengths. These outbursts are expected to last up to a century, suggesting that these events will not only increase the final stellar mass by a significant amount, but also affect the evolution of the circumstellar disc. The representative characteristics of FUors are brightness increase on a time scale of 1-10 yr, P Cygni profile of H\(\alpha\), Li i 6707 Å absorption, strong CO absorption features, triangular shape of the \(H\)-band continuum due to the strong water absorption bands on both sides of the \(H\)-band window, typical of late M-type stars (Hartmann and Kenyon, 1996; Connelley and Reipurth, 2018). So far the number of confirmed FUors is limited to no more than two dozen (Audard et al., 2014). One of the important but so far unclear points is the end of the FUor outbursts, i.e. their return to quiescence. FUor outbursts are expected to end when the inner disc depletes. However, due to the typically decades-long duration of the outbursts, no bona fide FUor has returned to quiescence yet, apart from cases of a short, temporary halt in the accretion, e.g. V899 Mon (Ninan et al., 2015; Park et al., 2021) and V346 Nor (Kraus et al., 2016; Kospal et al., 2020). Another example is V1647 Ori, an eruptive YSO that has shown some FUor characteristics (Aspin et al., 2009), and returned to quiescence after a ten-year-long outburst (Semkov et al., 2018; Giannini et al., 2018). The spectroscopic deviation of V1647 Ori from well-known FUors, however, ruled out its FUor classification (Connelley and Reipurth, 2018).
Therefore, it is not known whether the end of FUor outbursts is an abrupt event when accretion suddenly stops and the brightness drops back to the quiescent level in 1-2 years, or it is a slow gradual decrease of the accretion rate resulting in a slowly decreasing light curve over perhaps decades. The first scenario would indicate some instability, like the thermal instability model proposed by Bell and Lin (1994). To understand how FUors end their outbursts, it is important to increase their sample.
One of the best tools to discover the brightening or fading of eruptive young star candidates is the _Gaia_ Photometric Science Alerts system, due to its large sky coverage and typically monthly cadence (Hodgkin et al., 2021). Several eruptive YSOs have already been discovered based on the _Gaia_ Science Alerts, including the FUors Gaia17bpi (Hillenbrand et al., 2018) and Gaia18dvy (Szegedi-Elek et al., 2020), and the EX Lupi-type eruptive YSOs (EXors) Gaia18dyz (Hodapp et al., 2019), Gaia20eae (Ghosh et al., 2022; Cruz-Sáenz de Miera et al., 2022) and Gaia19fct (Park et al., 2022). Some additional eruptive YSOs were found, which cannot be classified as either a FUor or an EXor, such as Gaia19ajj (Hillenbrand et al., 2019), Gaia19bey (Hodapp et al., 2020), and Gaia21bty (Siwak et al., submitted). Two _Gaia_ alerted sources with light curves similar to eruptive YSOs, Gaia20bwa and Gaia20fgx (Nagy et al., 2022), turned out to be classical T Tauri stars (CTTS), while the brightening of another _Gaia_ alerted YSO, V555 Ori (Gaia17afn), was confirmed to be caused by variable circumstellar extinction, rather than a change in its accretion rate (Nagy et al., 2021). Here we present a study of a previously known YSO, which triggered the _Gaia_ Science Alerts system due to its fading.
Gaia21elv (ESO H\(\alpha\)-148 or 2MASS J08410676-4052174, \(\alpha_{\rm J2000}\) = 08h 41m 06.75s, \(\delta_{\rm J2000}\) = \(-40^{\circ}\) 52\({}^{\prime}\) 17.44\({}^{\prime\prime}\)) had a _Gaia_ alert on 2021 October 6 due to its quick fading by 1.2 mag over 18 months. Its archival photometry based on photographic plates of the SuperCOSMOS Sky Survey (SSS) showed a long-term brightening (Contreras Peña et al., 2019). It is a known young, Class II type star (Pettersson and Reipurth, 1994, Marton et al., 2019), associated with the Vela Molecular Ridge (Pettersson and Reipurth, 1994), and in particular, with the RCW 27 HII region located at a distance of \(\sim\)1 kpc (Pettersson, 2008). Its _Gaia_ DR3 (Gaia Collaboration et al., 2022) parallax is \(1.0727\pm 0.0397\) mas. The Renormalised Unit Weight Error (RUWE) of 1.291 and the astrometric excess noise of 0.437 mas suggest that the astrometry is accurate. We derived a zero-point correction of \(-0.02513\) based on Lindegren et al. (2021) for this parallax. After the zero-point correction, the _Gaia_ DR3 parallax can be converted to a distance of 910.9\(\pm\)33.7 pc, which we use in this paper. This distance is close to the estimate of 905\({}^{+36}_{-26}\) pc by Bailer-Jones et al. (2021).
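For reference, the quoted distance follows directly from the zero-point-corrected parallax; the two-line Python check below is ours and only restates this arithmetic.

```python
plx, zp = 1.0727, -0.02513   # Gaia DR3 parallax and Lindegren et al. (2021) zero point (mas)
print(1000.0 / (plx - zp))   # -> ~910.9 pc after subtracting the (negative) zero point
```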
In this paper, we provide spectroscopic evidence that Gaia21elv is a FUor, and discuss the cause of its fading that triggered the _Gaia_ Alerts system. We describe the photometric and spectroscopic observations in Sect. 2 and present their results in Sect. 3. We analyse the FUor signatures in the NIR spectra in Sect. 4, discuss the nature of the fading of the source, and provide a comparison to other similar sources. We summarize our main findings in Sect. 5.
## 2 Observations
### Optical photometry
In 2022 June, we obtained optical photometric observations of Gaia21elv with the 60-cm Ritchey-Chretien Rapid Eye Mount (REM) telescope operated by the Italian National Institute for Astrophysics (INAF) at La Silla (Chile) using its ROS2 instrument, an optical imager operating at four simultaneous passbands (Sloan \(g^{\prime}r^{\prime}i^{\prime}z^{\prime}\)) with a field of view (FoV) of 9.1\({}^{\prime}\)\(\times\)9.1\({}^{\prime}\) and pixel scale of 0.58\({}^{\prime\prime}\). Three images were taken per filter on four nights, 2022 June 5, 6, 8, and 9. After the usual bias and flat field correction, and removal of hot pixels, we obtained aperture photometry for Gaia21elv and about 15 comparison stars in the FoV. We selected the comparison stars from the APASS9 catalog (Henden et al., 2015) making sure that they are sufficiently constant in brightness (\(\sigma_{V}<0.08\) mag). We calculated the \(z\)-band brightness of the comparison stars by plotting their spectral energy distribution (SED) using APASS9 \(B\), \(V\), \(g^{\prime}\), \(r^{\prime}\), \(i^{\prime}\) and 2MASS \(JHK_{s}\) magnitudes (Cutri et al., 2003) and interpolating between these points for the effective wavelength of the \(z^{\prime}\) filter, 1.05 \(\mu\)m. We used an aperture radius of 6 pixels (3.5\({}^{\prime\prime}\)) and a sky annulus between 20 and 40 pixels (11.68\({}^{\prime\prime}\) and 23.36\({}^{\prime\prime}\)). Because all comparison stars were much bluer than Gaia21elv, in order to avoid introducing large uncertainties by extrapolation, we converted the instrumental magnitudes by averaging the calibration factors of all comparison stars without fitting a colour term. The results can be seen in Table 1.
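The calibration step described above (a single average offset between instrumental and catalogue magnitudes, with no colour term) can be summarised by the following Python sketch; it is an illustration written for this description rather than the actual reduction script.

```python
import numpy as np

def calibrate(instr_target, instr_comp, catalog_comp):
    """Apply the mean (catalogue - instrumental) offset of the comparison stars to the target."""
    offsets = np.asarray(catalog_comp) - np.asarray(instr_comp)
    zero_point = np.mean(offsets)
    zp_error = np.std(offsets, ddof=1) / np.sqrt(len(offsets))
    return instr_target + zero_point, zp_error
```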
Further observations of the target were performed with REM between 2022 Oct 26 and 2023 Jan 4, during 12 nights. These observations, taken in the Sloan \(g^{\prime}r^{\prime}i^{\prime}\) passbands, were uploaded to the BHTOM service. In total, 40, 38, and 44 images were reduced in Sloan \(g^{\prime}\), \(r^{\prime}\), and \(i^{\prime}\), respectively.
This telescope is part of the SkyNET robotic network and is equipped with an FLI CCD camera with a 15.1 \(\times\) 15.1 arcmin field of view (2048 \(\times\) 2048 pixels, 0.44 arcsec/pixel). All 42 observations (14 frames per band) were taken in Johnson-Cousins \(V\), \(R\) and \(I\) bands and uploaded to the BHTOM service, where they were reduced and converted to standard magnitudes (in APASS/V, APASS/\(r\) and APASS/\(i\) respectively).
We obtained photometric observations with the 1.54m Danish telescope, located at La Silla, Chile. The telescope is equipped with the CCD camera (E2V231-42) in the Cassegrain focus, cooled by liquid nitrogen. The FoV is \(13.7\times 13.7\) arcmin (\(2048\times 2048\) pixels; pixel scale of 0.4 arcsec/pixel). The filters used were Johnson-Cousins \(BVR_{c}I_{c}\). In all cases, the exposure time was 90 seconds.
We collected data using the 50 cm CDK telescope equipped with a QHY268M pro camera. This telescope (ROTUZ) is part of DeepSkyChile, and belongs to the Janusz Gil Institute of Astronomy, University of Zielona Gora, Poland. We reduced the data by applying bias, dark, and flat correction using the AstroImageJ software (Collins et al., 2017). The photometry was done using the BHTOM server, based on the method described in Zielinski et al. (2019) and Zielinski et al. (2020).
Footnote 2: [https://www.deepskychile.com/en](https://www.deepskychile.com/en)
The results are shown in Fig. 1 and are summarized in Tables 1 and 2.
### Infrared photometry
In 2022 June, we obtained infrared photometric observations with the REM, using the infrared imaging camera, REMIR. The reduction of the \(JHK\) images, performed with our own IDL routines, included the construction and subtraction of a sky image, and flat-fielding. We extracted the instrumental magnitudes for the target as well as for all good-quality 2MASS stars (i.e. with a 2MASS photometric quality flag of AAA) in the field in an aperture with a radius of \(\sim\)3.7\({}^{\prime\prime}\). No extended nebulosity is visible around the source on the 2MASS images. The final step was the determination of an average constant calibration factor between the instrumental and the 2MASS magnitudes of typically 30-50 stars, and this offset was applied to the target observations. The results can be found in Table 1.
REMIR was used again between October 2022 and January 2023 for \(J\)-band imaging. Each final image was built from five single images jittered along a circle by means of a dithering wedge, from which a median sky was derived. Every image was then sky-subtracted with the median sky. Subsequently, the five images were re-aligned and averaged into a single \(J\)-band exposure. Calibrated images were then uploaded to the BHTOM service, reduced, and matched to the 2MASS \(J\) band as described above for the optical data.
We used mid-infrared photometry from the Wide-field Infrared Survey Explorer (_WISE_) and _NEOWISE_ surveys from the NASA/IPAC Infrared Science Archive. _NEOWISE_ observes the full sky on average twice per year with multiple exposures per epoch. For a comparison with the photometry from other instruments, we computed the average of multiple exposures of a single epoch. _NEOWISE W1_ and _W2_ photometry is known to display a photometric bias for saturated sources. We corrected for this bias using the correction curves given in the Explanatory Supplement to the _NEOWISE_ Data Release Products. We derived the average of the uncertainties of the single exposures (err1). We also calculated the standard deviation of the points we averaged per season (err2). For the error of the data points averaged per epoch we used the maximum of err1 and err2.
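The per-epoch combination described above (mean magnitude, with the adopted error being the larger of the mean single-exposure uncertainty and the within-epoch scatter) corresponds to the following Python helper, written here for illustration only.

```python
import numpy as np

def neowise_epoch(mags, mag_errors):
    """Combine the single exposures of one NEOWISE epoch into one photometric point."""
    err1 = np.mean(mag_errors)       # mean uncertainty of the single exposures
    err2 = np.std(mags, ddof=1)      # scatter of the single exposures within the epoch
    return np.mean(mags), max(err1, err2)
```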
### Spectroscopy
We obtained high-resolution (R\(\sim\)45,000) NIR spectra of Gaia21elv on 2020 November 14 (Program ID: GS-2020B-Q-218, PI: S. Park) using the Immersion GRating INfrared Spectrograph (IGRINS; Yuk et al., 2010; Park et al., 2014; Mace et al., 2016) of Gemini South, in the \(H\) and \(K\) bands. The spectrum was obtained with a slit size of \(0.34\arcsec\times 5\arcsec\). Gaia21elv was observed with two sets of ABBA nodding observations to subtract the sky background better. The total exposure time of Gaia21elv was 192 sec with 24 sec exposure of each frame. The data were reduced using the IGRINS pipeline (Lee and Gullikson, 2017) for flat-fielding, sky subtraction, correcting the distortion of the dispersion direction, wavelength calibration, and combining the spectra. In order to correct for telluric absorption features, a nearby A0 telluric standard star (HIP 21514) was observed right before the target. Then, the telluric correction and flux calibration were applied as done in Park et al. (2018). Finally, barycentric velocity correction using barycorrpy (Kanodia and Wright, 2018) was applied (\(V_{\rm bary}\) = 16.715 km s\({}^{-1}\)).
A spectrum using the X-SHOOTER instrument of the Very Large Telescope (VLT) at ESO's Paranal Observatory in Chile (Vernet et al., 2011) was taken on 2021 December 12 (Program ID: 108.23M6, PI: Z. Nagy). X-SHOOTER simultaneously covers a wavelength range from 300 nm to 2480 nm, and the spectra are divided into three arms, the ultraviolet (UVB, 300 - 550 nm), the visible (VIS, 500 - 1020 nm), and the near-infrared (NIR, 1000 - 2480 nm). The observations were performed with the narrow slits of \(1^{\prime\prime}\), \(0.9^{\prime\prime}\), and \(0.4^{\prime\prime}\) in the UVB, VIS, and NIR arms, leading to spectral resolutions of R \(\sim 5400\), 8900, and 11600, respectively. The exposure time was 1800 s in each of the three arms. We obtained additional exposures with the \(5^{\prime\prime}\) slits, which resulted in data without slit losses, which we used for the correct flux calibration of the spectra obtained with the narrower slits. The ABBAAB nodding pattern was used. The observations were processed with the official ESO pipeline. Telluric correction was performed using ESO's Molecfit program (Kausch et al., 2015; Smette et al., 2015) running in the same EsoReflex environment (Freudling et al., 2013).
## 3 Results
### Light and colour variations
Figure 1 shows the optical and infrared light curves of Gaia21elv, including archival data from 1977 (Contreras Peña et al., 2019 and references therein), the All-Sky Automated Survey for Supernovae (ASAS-SN, Shappee et al., 2014, Kochanek et al., 2017), and the Asteroid Terrestrial-impact Last Alert System (ATLAS, Tonry et al., 2018, Smith et al., 2020, Heinze et al., 2018) survey downloaded from the ATLAS Forced Photometry web service (Shingles et al., 2021). Based on these data, the eruption occurred between 1991 and 1996. The amplitude of the brightening was 4-4.5 mag from a quiescent 16.5-17 mag to around 12 mag in the \(R\)-band. A slow fading of the source is already seen after 2010 based on data points from Contreras Peña et al. (2019) (collected from the AAVSO Photometric All Sky Survey (APASS) DR9 (Henden et al., 2015), the VST Photometric H\(\alpha\) Survey (VPHAS+) DR2 (Drew et al., 2014), the Bochum Galactic disc survey (Hackstein et al., 2015)), and the _Gaia_ \(G\)-band light curve.
In 2021, the source started a more rapid fading, and had a _Gaia_ alert in 2021 October due to its 1.2 mag fading in 18 months. After the _Gaia_ alert, a temporary brightening by about 0.2 mag was seen in early 2022, and after that, the source stayed at the same brightness for several months, around 14.25 mag in _Gaia_\(G\)-band. Between 2022 July and November, the source brightened again, by about 0.3 mag as is seen in the lower panel of Fig. 1. A slow long-term fading is also seen in the _WISE_ data points.
Figure 2 shows a colour-magnitude diagram based on the _WISE_ \(W1\) and \(W2\) bands. As the changes are mostly grey, extinction can be excluded as the physical mechanism behind the flux changes observed at the _WISE_ wavelengths.
Figure 3 shows the \(J-H\) vs \(H-K_{s}\) diagram for the bright state (2MASS data point from 1999 February) and for the faint state (REM data point from 2022 June). The difference between the two data points in this diagram (\(\Delta J\sim 0.61\) mag, \(\Delta(J-H)\sim 0.16\) mag, \(\Delta(H-K_{s})\sim 0.13\) mag) may be consistent with the reddening of the source between 1999 and 2022. In this case, the colour change implies a visual extinction increase by \(A_{V}\sim 2\) mag. However, the colour change in the \(J-H\) vs \(H-K_{s}\) diagram can also be caused by accretion. Eruptive young stars in the \(J-H\) vs \(H-K_{s}\) plot usually
Figure 1: Light curve of Gaia21elv in _Gaia_ G (black), _WISE_ \(W1\) (blue) and \(W2\) (grey) bands, and in \(g\) band from the ASAS-SN (green). The solid vertical line shows the epoch of the Gemini/IGRINS spectrum, the dashed vertical line shows the epoch of the VLT/X-SHOOTER spectrum, and the arrow shows the epoch of the _Gaia_ alert. The ranges covered by the colour-magnitude diagrams in Figures 4 and 5 during and after the fading phase, respectively, are also indicated.
move toward or away from the main sequence (e.g. Szegedi-Elek et al., 2020).
Figure 4 shows a colour-magnitude diagram during the fading (the period indicated in Fig. 1), based on the \(o\) and \(c\) band magnitudes from the ATLAS survey. There is an indication of a long-term increasing trend of the extinction. Since the period of the quick fading in 2021 is not well sampled by these data points (as seen in Fig. 1), it is not clear based on them whether the increasing extinction also applies to this period.
Figure 5 shows colour-magnitude diagrams after the fading of the source: one based on the \(o\) and \(c\) band magnitudes from the ATLAS survey, and \(g-r\) versus \(g\) and \(r-i\) versus \(r\) colour-magnitude diagrams based on our follow-up observations between 2022 June and 2023 January. The periods covered by these figures are also indicated in Fig. 1. These colour-magnitude diagrams show extinction-related variations between 2022 June and 2023 January. The colour-magnitude diagram based on the ATLAS \(o\) and \(c\) bands also includes data points from the period between 2021 October and 2022 May. These data points do not show an extinction-related trend, indicating that mechanisms other than extinction may also play a role in this post-fading phase.
Based on the colour variations alone, it is not possible to make a conclusion on the origin of the brightness variations of Gaia21elv. The \(o\) and \(c\) band data from the ATLAS survey as well as the \(g-r\) versus \(g\) and \(r-i\) versus \(r\) colour-magnitude diagrams suggest extinction-related brightness variations both during the fading and the brightening. Such extinction-related variations are not seen in the _WISE_ colour-magnitude diagrams, whereas the \(J-H\) vs \(H-K_{s}\) diagram can be interpreted both as a result of extinction and accretion. Therefore, we do not make a conclusion on the origin of the brightness variations based on the colour variations, and will further investigate it in Sect. 3.3.
### Reddening and spectral features
Figure 6 shows the spectra taken at the two epochs in optical and NIR using Gemini South/IGRINS and VLT/X-SHOOTER and their comparison to the VLT/X-SHOOTER spectrum of FU Ori.
Following the method of Connelley and Reipurth (2018), we used the X-SHOOTER spectrum to estimate the visual extinction toward the source by comparing it to the spectrum of FU Ori, which has a low and well known extinction (\(A_{V}=\)1.7\(\pm\)0.1 mag; e.g. Siwak et al., 2018, Lykou et al., 2022). We dereddened the spectrum of Gaia21elv with increasing \(A_{V}\) until it matched the scaled, flux calibrated spectrum of FU Ori. The resulting \(\Delta A_{V}\) is \(\sim\)4 mag, which suggests \(A_{V}\sim 5.7\) mag for Gaia21elv in its faint state.
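The dereddening comparison can be sketched as a simple grid search, as in the illustrative Python snippet below; the extinction-curve function `ext_curve`, returning \(A_{\lambda}/A_{V}\), is an assumed ingredient (e.g. a standard Cardelli et al. 1989 law) and is not specified by the procedure above.

```python
import numpy as np

def delta_av(wave_um, flux_target, flux_fuori, ext_curve, av_grid=np.arange(0.0, 10.01, 0.1)):
    """Increase A_V applied to the target until its dereddened spectrum matches FU Ori.

    ext_curve(wave_um) must return A_lambda/A_V; a free grey scaling between the two
    stars is allowed, as only the spectral shapes are compared.
    """
    best_av, best_chi2 = None, np.inf
    for av in av_grid:
        dereddened = flux_target * 10.0 ** (0.4 * av * ext_curve(wave_um))
        scale = np.nanmedian(flux_fuori / dereddened)
        chi2 = np.nansum((np.log10(scale * dereddened) - np.log10(flux_fuori)) ** 2)
        if chi2 < best_chi2:
            best_av, best_chi2 = av, chi2
    return best_av
```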
Figure 4: Colour-magnitude diagram based on \(o\) and \(c\) magnitudes from the ATLAS survey during the fading of Gaia21elv. The typical error bar is shown in the lower left corner.
Figure 3: (\(J-H\)) versus (\(H-K_{S}\)) colour–colour diagram for the bright state (2MASS data point from 1999 February) and during the fading (REM data point from 2022 June). The solid curve shows the colours of the zero-age main sequence, and the dotted line represents the giant branch (Bessell and Brett, 1988). The long-dashed lines delimit the area occupied by reddened normal stars (Cardelli et al., 1989). The dash-dotted line is the locus of unreddened CTTS (Meyer et al., 1997) and the grey shaded band borders the area of the reddened \(K_{S}\)-excess stars.
Figure 2: Colour-magnitude diagram based on _WISE_ W1 and \(W2\) data.
Table 1 lists the lines we identified in the VLT/X-SHOOTER spectrum of Gaia21elv. Most detected lines are seen in absorption, such as Ba ii, Li i, Na D, K i, Al i, He i, Pa\(\beta\), and Mg i (Fig. 7). Some of these absorption lines show two (or more) components, such as the Ba ii, He i, and Pa\(\beta\) lines. Some lines show a P Cygni profile, such as H\(\alpha\) and H\(\beta\) (Fig. 8) and the Ca ii triplet (Fig. 9). Forbidden lines of [O i], [Fe ii], and [S ii] were detected in emission (Fig. 10). These lines may indicate the presence of a jet associated with Gaia21elv, similarly to what was seen for the classical FUor V1057 Cyg (e.g. Szabo et al., 2021). Forbidden emission lines in young stars were also suggested to trace disk winds (Paatz & Camenzind, 1996, Iguchi & Itoh, 2016, Ballabio et al., 2020). The \(H\) and \(K\)-band spectra were observed at two different epochs: in 2020 November, just before the rapid fading of the source (Gemini South/IGRINS) and in 2021 December, soon after the _Gaia_ alert reporting the fading (VLT/X-SHOOTER). These spectra display very similar features (Fig. 6), including a triangular-shaped \(H\)-band continuum and the CO-bandhead features in absorption, both typical FUor signatures. Fig. 11 shows lines detected at both epochs, such as Mg i, Br\(\gamma\), Na i, and Ca i. The line profiles did not change significantly between the two epochs.
To interpret the CO bandhead features observed at the two epochs, we used an isothermal slab model to find a best-fitting CO column density and excitation temperature of the absorbing material, similarly to Kospal et al. (2011) and Park et al. (2021). We found the best-fitting CO column density to be \(\sim\)10\({}^{22}\) cm\({}^{-2}\), and a best-fitting excitation temperature of \(2800\pm 100\) K at the first epoch (Gemini South/IGRINS) and \(2300\pm 100\) K at the later epoch (VLT/X-SHOOTER). The results are shown in Figure 12. In Sect. 4.1 we analyse the spectra in more detail and compare the observed features to those seen in FUors.
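The grid-based \(\chi^{2}\) search over excitation temperature and CO column density can be organised as in the Python sketch below; the function `slab_model`, which should return an isothermal-slab CO spectrum for given \(N_{\rm CO}\) and \(T_{\rm ex}\), is a placeholder (it requires molecular line data not discussed here), so this is only an outline of the fitting loop.

```python
import numpy as np

def fit_co_slab(wave, flux, slab_model,
                temps=np.arange(2000.0, 4001.0, 100.0),
                columns=10.0 ** np.arange(21.0, 23.01, 0.25)):
    """Grid chi-square search for the best CO column density (cm^-2) and temperature (K).

    slab_model(wave, n_co, t_ex) is an assumed spectrum generator, not provided here.
    """
    best = (np.inf, None, None)
    for t_ex in temps:
        for n_co in columns:
            model = slab_model(wave, n_co, t_ex)
            chi2 = np.nansum((flux - model) ** 2)
            if chi2 < best[0]:
                best = (chi2, n_co, t_ex)
    return best[1], best[2]
```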
### Spectral Energy Distribution modeling
In the following, we analyse the Spectral Energy Distribution (SED) of Gaia21elv at three different epochs. To create an SED for the state of the maximum brightness, we used archival data from APASS9 (Henden et al., 2015), DENIS (Epchtein et al., 1994), 2MASS (Cutri et al., 2003), and the ALLWISE (Cutri et al., 2013) catalogues. A comparison of the DENIS \(I\)-band flux from 1996 December with the APASS9 \(i^{\prime}\)-band flux from 2010 December shows that the brightness of the star did not change significantly between these dates, thus the fact that the used archival data correspond to different epochs is not expected to affect the modeling of the SED in the bright state. In addition to the epoch of the bright state, we compiled an SED for 2020 Oct-Nov that is very close to the epoch of the Gemini/IGRINS spectrum, and as such, it is just before the fast fading phase of the source. We used the available ASAS-SN \(g\), _Gaia \(G\)_ and _WISE \(W1\)_ data, as well as photometry in the cyan and orange bands of the ATLAS survey for this epoch. The third epoch we considered is the epoch of the VLT/X-SHOOTER spectrum in 2021 December, as it represents the faint state at the end of the fast fading of the source. We obtained synthetic photometry in the APASS9 and 2MASS bands from the X-SHOOTER spectrum, and also used the _NEOWISE_ W1 data point closest to this epoch. The three SEDs are shown in Fig. 13.
As we will discuss in Sec. 4, the properties of Gaia21elv resemble those of FU Orionis-type stars. In these objects the circumstellar matter is expected to form an accretion disc (Hartmann & Kenyon, 1996). To estimate the properties of the accretion disc in Gaia21elv at the three epochs, we modelled the SEDs using a steady, optically thick and geometrically thin viscous accretion disc, whose mass accretion rate is constant in the radial direction. This method was successfully applied to estimate the accretion rate in several eruptive YSOs including HBC 722 (Kospal et al., 2016), V582 Aur (Abraham et al., 2018), 2MASS 22352345+7517076 (Kun et al., 2019), Gaia18dvy (Szegedi-Elek et al., 2020), V1057 Cyg (Szabo et al., 2021), and V1515 Cyg (Szabo et al., 2022). In this model, the temperature profile of the disc is defined based on Hartmann & Kenyon (1996) as:
\[T(r)=\left[\frac{3GM_{\star}\dot{M}}{8\pi\sigma r^{3}}\left(1-\sqrt{\frac{R_{\star}}{r}}\right)\right]^{1/4}, \tag{1}\]
where \(r\) is the distance from the star, \(R_{\star}\) is the stellar radius, \(M_{\star}\) is the stellar mass, \(\dot{M}\) is the accretion rate, and \(G,\sigma\) are the gravitational and Stefan-Boltzmann constants, respectively. The model SED was calculated by integrating black-body emission in concentric annuli between the inner disc radius and the outer disc radius. The resulting SED was then reddened by different A\({}_{V}\) values.
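A minimal numerical sketch of this disc model is given below in Python (CGS units). The radii, inclination and distance follow the values quoted in this section, while the annulus discretisation, the physical constants and the omission of the final reddening step are our own simplifications; the quantity `m_mdot` is the product \(M_{\star}\dot{M}\) that is fitted in the text.

```python
import numpy as np

G, SIGMA_SB = 6.674e-8, 5.670e-5              # CGS constants
H, C, K_B = 6.626e-27, 2.998e10, 1.381e-16
R_SUN, AU, PC = 6.957e10, 1.496e13, 3.086e18

def disc_nu_f_nu(wave_um, m_mdot, r_in=2.0 * R_SUN, r_out=1.0 * AU,
                 r_star=2.0 * R_SUN, incl_deg=45.0, dist_pc=910.9, n_rings=400):
    """nu*F_nu (erg s^-1 cm^-2) of a steady, optically thick thin disc, using eq. (1).

    m_mdot is the product M_star * Mdot in g^2 s^-1.
    """
    edges = np.logspace(np.log10(r_in), np.log10(r_out), n_rings + 1)
    r = np.sqrt(edges[:-1] * edges[1:])                 # geometric mid-radii of the annuli
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)   # annulus areas
    T = (3.0 * G * m_mdot / (8.0 * np.pi * SIGMA_SB * r ** 3)
         * (1.0 - np.sqrt(r_star / r))) ** 0.25         # eq. (1)
    nu = C / (np.asarray(wave_um, dtype=float) * 1.0e-4)
    x = np.clip(H * nu[None, :] / (K_B * T[:, None]), 1.0e-6, 500.0)
    b_nu = 2.0 * H * nu[None, :] ** 3 / C ** 2 / np.expm1(x)       # Planck function per annulus
    f_nu = np.cos(np.radians(incl_deg)) * np.sum(b_nu * area[:, None], axis=0) / (dist_pc * PC) ** 2
    return nu * f_nu

# example call: M_star * Mdot for 1 M_sun and 1e-5 M_sun/yr
# sed = disc_nu_f_nu([0.55, 1.25, 2.2, 3.4], m_mdot=1.989e33 * (1e-5 * 1.989e33 / 3.156e7))
```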
One of the input parameters of the model is the inclination, and as it is unknown for Gaia21elv, we used an intermediate value of 45\({}^{\circ}\). We
Figure 5: _Left panel:_ Colour-magnitude diagram based on \(o\) and \(c\) magnitudes from the ATLAS survey after the fading of Gaia21elv. The typical error bar is shown in the lower left corner. _Middle and right panels:_ Colour-magnitude diagrams based on follow-up photometry shown in Tables A1 and A2.
Figure 6: Optical and NIR spectra of Gaia21elv taken with VLT/X-SHOOTER and Gemini South/IGRINS in comparison with those of FU Ori (taken also with VLT/X-SHOOTER, ESO archival data from program 094.C-0233). Arbitrary scaling factors were applied for a better comparison of the spectra. The Gemini South/IGRINS spectrum was smoothed for a better comparison.
assumed a distance of 910.9 pc, as derived above from the _Gaia_ DR3 parallax and its zero-point correction. There is a known degeneracy in the model between the inner disc radius and A\({}_{V}\). To break this degeneracy we adopted the A\({}_{V}\) value of \(\sim\)5.7 mag obtained from the X-SHOOTER spectrum in Sect. 3. This choice fixed the inner disc radius to \(R_{\rm in}=2R_{\odot}\), a reasonable value, as it is the same as determined for FU Ori by Zhu et al. (2007).
The remaining free parameters of the disc model are \(M_{\star}\dot{M}\), A\({}_{V}\), and \(R_{\rm out}\). Finding the best \(M_{\star}\dot{M}\) and A\({}_{V}\) combinations was performed with \(\chi^{2}\) minimization over a large grid in both the accretion rate and the extinction, by taking into account all flux values between 0.4 and 4.0 \(\mu\)m. The formal uncertainties of the data points were set to a homogeneous 5% of the measured flux values. We ran several models assuming different outer disc radii in the range between 0.2
| Species | Lab. \(\lambda\) [nm] | Obs. \(\lambda\) [nm] | EW [nm] | FWHM [nm] | Note |
| :--- | :---: | :---: | :---: | :---: | :--- |
| [S ii] | 406.860 | 406.702 | \(-1.01\pm0.05\) | \(0.20\pm0.03\) | emission |
| H\(\delta\) | 410.171 | 410.038 | \(0.05\pm0.01\) | \(0.10\pm0.01\) | absorption |
| H\(\gamma\) | 434.047 | 433.850 | \(0.13\pm0.03\) | \(0.32\pm0.01\) | absorption |
| H\(\beta\) | 486.129 | 485.997 | \(0.13\pm0.01\) | ... | P Cygni absorption |
| H\(\beta\) | 486.129 | 486.251 | \(-0.04\pm0.01\) | \(0.16\pm0.02\) | P Cygni emission |
| Na D | 588.995 | 588.988 | \(0.30\pm0.01\) | \(0.27\pm0.01\) | absorption |
| Na D | 589.592 | 589.508 | \(0.25\pm0.01\) | \(0.24\pm0.02\) | absorption |
| [O i] | 630.030 | 629.801 | \(-0.28\pm0.02\) | \(0.29\pm0.01\) | emission |
| Ba ii | 649.690 | 649.515 | \(0.08\pm0.01\) | \(0.38\pm0.01\) | absorption |
| H\(\alpha\) | 656.282 | 656.155 | \(0.42\pm0.01\) | \(0.18\pm0.02\) | P Cygni absorption |
| H\(\alpha\) | 656.282 | 656.377 | \(-0.09\pm0.01\) | \(0.18\pm0.02\) | P Cygni emission |
| Li i | 670.776 | 670.785 | \(0.03\pm0.01\) | \(0.13\pm0.01\) | absorption |
| [S ii] | 673.082 | 672.960 | \(-0.05\pm0.01\) | \(0.28\pm0.01\) | emission |
| [Fe ii] | 715.517 | 715.364 | \(-0.08\pm0.01\) | \(0.24\pm0.01\) | emission |
| K i | 766.490 | 766.457 | \(0.15\pm0.01\) | \(0.20\pm0.02\) | absorption |
| K i | 769.896 | 769.851 | \(0.10\pm0.01\) | \(0.20\pm0.02\) | absorption |
| Ca ii | 849.802 | 849.647 | \(0.05\pm0.01\) | \(0.19\pm0.02\) | P Cygni absorption |
| Ca ii | 849.802 | 849.854 | \(-0.04\pm0.01\) | \(0.12\pm0.01\) | P Cygni emission |
| Ca ii | 854.209 | 854.208 | \(0.10\pm0.01\) | \(0.28\pm0.01\) | P Cygni absorption |
| Ca ii | 854.209 | 854.276 | \(-0.06\pm0.01\) | \(0.16\pm0.01\) | P Cygni emission |
| Ca ii | 866.214 | 866.043 | \(0.09\pm0.01\) | \(0.23\pm0.01\) | P Cygni absorption |
| Ca ii | 866.214 | 866.283 | \(-0.04\pm0.01\) | \(0.14\pm0.01\) | P Cygni emission |
| Pa8 | 954.620 | 954.607 | \(0.04\pm0.01\) | \(0.21\pm0.02\) | absorption |
| He i | 1083.025 | 1081.46 | \(0.60\pm0.10\) | \(1.20\pm0.20\) | absorption, two components |
| Pa\(\beta\) | 1281.807 | 1281.819 | ... | ... | absorption, two components |
| Al i | 1312.342 | 1312.382 | \(0.06\pm0.01\) | \(0.18\pm0.01\) | absorption |
| Al i | 1315.075 | 1315.107 | \(0.07\pm0.01\) | \(0.22\pm0.01\) | absorption |

Table 1: Lines detected in the X-SHOOTER spectrum of Gaia21elv. The FWHM values were derived using a Gaussian fitting, and are not provided for line profiles which cannot be fitted by a Gaussian. For lines with multiple components, we provide the parameters of the one with the highest intensity.
Figure 8: Hydrogen Balmer lines in the VLT/X-SHOOTER spectrum of Gaia21elv.
Figure 7: Examples of absorption lines detected toward Gaia21elv.
and 2 au, and found that the WISE data points are reasonably well fitted with \(R_{\rm out}=1\) au, though this value is less constrained than the other two parameters. The best-fitting visual extinctions and products of the stellar mass and the accretion rate are plotted in Fig. 14. Since the outcome of the model is the product \(M_{\star}\dot{M}\), the true accretion rate depends on the stellar mass. However, FUors are typically low-mass objects (Hartmann & Kenyon, 1996), thus our obtained values provide a good approximation to the accretion rate.
The results for the three epochs suggest that the accretion rate followed a monotonic decay over the last 15 years. Our models suggest a slight increase of the extinction toward the source from 3.6 mag to 4.4 mag between the maximum brightness and the Gemini epoch in 2020 November. Remarkably, the quick fading in 2021, corresponding to the _Gaia_ alert, was mostly caused by an increase in extinction. The accretion luminosity of the source also dropped in parallel with the accretion rate between the first and last epoch, from 106 L\({}_{\odot}\) to 68 L\({}_{\odot}\), although the absolute values depend on the unknown inclination angle, too.
## 4 Discussion
### Classification of Gaia21elv as a FUor
To investigate whether Gaia21elv is indeed a FUor, we used the criteria from Connelley & Reipurth (2018), which they list in their Table 3. In the following, we list these defining characteristics and check if Gaia21elv fulfills them.
- The eruption is observed for each bona fide FUor, unlike for FUor-like and peculiar objects. This criterion is fulfilled for Gaia21elv. The date of the eruption can be constrained based on the light curve shown in Contreras Pena et al. (2019) in their figure B2, which includes data points from the literature starting from 1977 (Fig. 1). The outburst of Gaia21elv based on the long term light curve occurred between 1991 and 1996.
Figure 11: Comparison of lines detected at both epochs using Gemini South/IGRINS (red) and VLT/X-SHOOTER (blue). Arbitrary scaling factors were applied for a better comparison of the spectra.
Figure 12: CO overtone features of Gaia21elv shown in black observed using Gemini South/IGRINS (top panel) and VLT/X-SHOOTER (bottom panel). The best fit models are overplotted in red.
Figure 9: Ca II triplet lines observed for Gaia21elv using VLT/X-SHOOTER, compared to those observed using VLT/X-SHOOTER for FU Ori.
- Bona fide FUors have well defined CO absorption features. Strong CO absorption was also observed for Gaia21elv (Fig. 12) at both of our observing epochs.
- Water vapor bands can be identified in the NIR spectra of bona fide FUors, including the feature at 1.33 \(\mu\)m and the triangular shaped \(H\)-band continuum, which is due to water vapor bands on each end of the \(H\)-band (Fig. 6). Gaia21elv shows these features at both epochs.
- Bona fide FUors show other molecular bands in their \(J\)-band spectra, such as those from vanadium oxide (at 1.05 \(\mu\)m and 1.19 \(\mu\)m) and titanium oxide (0.88, 0.92, and 1.11 \(\mu\)m). The X-SHOOTER spectrum of Gaia21elv shows all these molecular bands as wide absorption features (Fig. 6).
- Another characteristic of FUors is that their hydrogen lines, especially the Pa\(\alpha\), \(\beta\), \(\gamma\), and \(\delta\) lines, are in absorption, the Br\(\gamma\) line is very weak, and the rest of the Brackett series is not observed. For Gaia21elv the Pa\(\beta\) and Pa\(\delta\) lines are indeed seen in absorption, however, the other two Paschen lines are not detected. It was not possible to detect the Pa\(\alpha\) line due to the poor atmospheric transmission at its wavelength (1.87 \(\mu\)m). The Br\(\gamma\) line shows a weak absorption, while the rest of the Brackett series is not detected, similarly to what was expected for FUors.
- FUors show very few, if any, emission lines, and even those are typically the emission components of P Cygni profiles. Gaia21elv shows a few P Cygni profiles in H\(\alpha\), H\(\beta\), and the Ca ii triplet, and in addition to those, there are forbidden lines of [O i], [Fe ii], and [S ii] in emission. The absorption lines and P Cygni profiles typically detected in the spectra of FUors are related to the disc, while the forbidden emission lines trace a jet or disk wind. Forbidden emission lines are not always detected in the spectra of known FUors, but were identified for a few examples, including the classical FUors V2494 Cyg (Connelley and Reipurth, 2018 and references therein) and V1057 Cyg (Szabo et al., 2021), therefore, their detection does not rule out a classification as a bona fide FUor.
- FUors show weak absorption lines of Na i (2.208 \(\mu\)m) and Ca i (2.256 \(\mu\)m) (Connelley and Reipurth, 2018). As shown in Fig. 11, these lines are detected in the spectra of Gaia21elv at both epochs.
- Another spectroscopic signature of FUors is the He i line at 1.083 \(\mu\)m, which is also present in the spectrum of Gaia21elv (Fig. 6). The He i line detected toward Gaia21elv is double-peaked, where the higher intensity component is largely blueshifted, detected at a velocity of around \(-\)400 km s\({}^{-1}\), and the lower intensity component is seen at a velocity of around \(+\)25 km s\({}^{-1}\) (Fig. 7). Most bona fide FUors show blueshifted absorption lines, with a mean velocity of \(-\)350 km s\({}^{-1}\) (see Fig. 4. in Connelley and Reipurth, 2018).
Another characteristic of FUors is that their spectral type is wavelength-dependent (Hartmann and Kenyon, 1996). To check whether this applies to Gaia21elv, we used the VLT/X-SHOOTER spectrum, and compared it to the synthetic stellar spectra calculated by Coelho et al. (2005) in the 300 nm to 1.8 \(\mu\)m wavelength range. These stellar templates are given for effective temperatures in the range between 3500 K and 6000 K in steps of 250 K. We compared the VLT/X-SHOOTER spectrum to these stellar templates at optical and at NIR wavelengths, separately. At optical wavelengths, the best match was found with the stellar template corresponding to an effective temperature of 5500\(\pm\)250 K, while at NIR wavelengths, the best fit corresponds to an effective temperature of 3750\(\pm\)250 K. This is consistent with the expectation for FUors, that the spectral type is wavelength-dependent.
Figure 14: Visual extinctions and the product of the stellar mass and accretion rate for the three epochs based on the accretion disc models. The long-term light curve using the _Gaia_\(G\) magnitudes and the data from Contreras PeΓ±a et al. (2019) are shown as a comparison.
Figure 13: The SED of Gaia21elv at the three modelled epochs. The SED at the brightest state based on archival data is shown with circles. The SED close to the epoch of the Gemini/IGRINS spectrum is shown with asterisks. The SED at the epoch of the VLT/X-SHOOTER spectrum representing the faint state is shown with triangles. Solid curves show the results of the accretion disc models for the individual epochs.
Based on the above criteria from Connelley and Reipurth (2018) as well as its wavelength-dependent spectral type, we conclude that Gaia21elv can be classified as a bona fide FUor. This classification is consistent with the high accretion luminosity of the source implied by our accretion disc modelling.
### On the recent fading of Gaia21elv
Until now, no bona fide FUor is known to have completely ended its outburst. This is why it is important to monitor their brightness variations, and study their fading episodes. A temporary fading of V346 Nor was reported by Kraus et al. (2016) and Kospal et al. (2020), which was due to a decrease in the accretion rate, however, after the fading, the star brightened again to nearly reach its outburst brightness. Another eruptive young star, V899 Mon, which shows properties of both FUors and EXors, faded to quiescence for a little less than a year (Ninan et al., 2015; Park et al., 2021). However, this quiescent phase was followed by another outburst. In addition to their fading being temporary, neither V346 Nor nor V899 Mon is a bona fide FUor. The long-term fading of a classical FUor, V1515 Cyg, was recently reported by Szabo et al. (2022): its fading started around 2006 and is approximately consistent with an exponential decay with an e-folding time of 12 years. Another classical FUor, V733 Cep, also shows long-term fading (Park et al., in prep.), which was found to be the result of a decrease in the accretion rate.
Brightness variations of young stars are only partly related to changes in the accretion rate (Fischer et al., 2022). The other main process is variable circumstellar extinction. To probe whether the fading of Gaia21elv was the result of a decrease of the accretion rate, we estimated the accretion rate by fitting the SEDs with an accretion disc model in Sec. 3. The accretion rates derived for Gaia21elv are typical of FUors (Fischer et al., 2022 and references therein). The accretion rate between the brightest and faintest states decreased by \(\sim\)36%. However, according to the accretion disc models fitted to the SEDs, the decreasing accretion rate was combined with increasing circumstellar extinction, especially between 2020 and 2022. It is most likely, that the increased circumstellar extinction dominated the rapid fading of the source that triggered the _Gaia_ Alerts system in 2021. After the _Gaia_ alert, the brightness of the source also started a slow increase, though it is still almost a magnitude fainter than in early 2020, before the start of this fading episode. The decrease found between the accretion rates at the brightest and faintest states indicates an e-folding time of about 25 years. Based on our results, the fading of Gaia21elv found by the _Gaia_ alert is likely a temporary event. Future photometric and spectroscopic monitoring of the source is important to provide more information on the evolution of its outburst.
## 5 Summary
We analysed the photometry and spectroscopy of a young star exhibiting a long-term outburst and a recent fading alerted by the _Gaia_ Science Alerts system.
Optical and NIR spectra confirm that Gaia21elv is a bona fide FUor. This is the third FUor which was found based on the _Gaia_ alerts. In addition to the classical FUor signatures, forbidden emission lines were detected, which are typically tracing a jet or disk winds.
Fitting the SEDs at the maximum brightness and at the faint state using an accretion disc model suggests a decrease in the accretion rate. However, fitting the SED at an epoch close to the onset of the quick fading in late 2020-2021 indicates that this episode was mostly caused by an increase of circumstellar extinction.
In the future, photometric and spectroscopic monitoring of Gaia21elv will be important to characterize its behaviour after this fading episode.
## Acknowledgements
We thank the referee for comments which helped to improve our paper.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 716155 (SACCRED).
We acknowledge support from the ESA PRODEX contract nr. 4000132054.
G.M. and Z.N. were supported by the Janos Bolyai Research Scholarship of the Hungarian Academy of Sciences.
G.M. has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101004141.
Zs.M.Sz. acknowledges funding from a St Leonards scholarship from the University of St Andrews. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
E.F. and T.G. acknowledge financial support from the project PRIN-INAF 2019 "Spectroscopically Tracing the Disk Dispersal Evolution (STRADE)".
We acknowledge ESA Gaia, DPAC and the Photometric Science Alerts Team ([http://gsaweb.ast.cam.ac.uk/alerts](http://gsaweb.ast.cam.ac.uk/alerts)).
This work used the Immersion Grating Infrared Spectrometer (IGRINS) that was developed under a collaboration between the University of Texas at Austin and the Korea Astronomy and Space Science Institute (KASI) with the financial support of the Mt. Cuba Astronomical Foundation, of the US National Science Foundation under grants AST-1229522 and AST-1702267, of the McDonald Observatory of the University of Texas at Austin, of the Korean GMT Project of KASI, and Gemini Observatory.
This work was supported by K-GMT Science Program (PID: GS-2020B-Q-218) of Korea Astronomy and Space Science Institute (KASI).
Based on observations collected at the European Southern Observatory under ESO programme 108.23M6.
This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The Asteroid Terrestrial-impact Last Alert System (ATLAS) project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
This project used data obtained via BHTOM ([https://bhtom.space](https://bhtom.space)), which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreements No. 101004719.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.01385 | Maurer-Cartan type cohomology on generalized Reynolds operators and
NS-structures on Lie triple systems | The purpose of this paper is to introduce and study the notion of generalized
Reynolds operators on Lie triple systems with representations (Abbr.
\textsf{L.t.sRep} pairs) as generalization of weighted Reynolds operators on
Lie triple systems. First, we construct an $L_{\infty}$-algebra whose
Maurer-Cartan elements are generalized Reynolds operators. This allows us to
define a Yamaguti cohomology of a generalized Reynolds operator. This
cohomology can be seen as the Yamaguti cohomology of a certain Lie triple
system with coefficients in a suitable representation. Next, we study
deformations of generalized Reynolds operators from cohomological points of
view and we investigate the obstruction class of an extendable deformation of
order $n$. We end this paper by introducing a new algebraic structure, in
connection with generalized Reynolds operator, called NS-Lie triple system.
Moreover, we show that NS-Lie triple systems can be derived from NS-Lie
algebras. | Rahma Gharbi, Sami Mabrouk, Abdenacer Makhlouf | 2023-09-04T06:31:19Z | http://arxiv.org/abs/2309.01385v1 | Maurer-Cartan type cohomology on generalized Reynolds operators and NS-structures on Lie triple systems
###### Abstract
The purpose of this paper is to introduce and study the notion of generalized Reynolds operators on Lie triple systems with representations (Abbr. L.t.sRep pairs) as a generalization of weighted Reynolds operators on Lie triple systems. First, we construct an \(L_{\infty}\)-algebra whose Maurer-Cartan elements are generalized Reynolds operators. This allows us to define a Yamaguti cohomology of a generalized Reynolds operator. This cohomology can be seen as the Yamaguti cohomology of a certain Lie triple system with coefficients in a suitable representation. Next, we study deformations of generalized Reynolds operators from cohomological points of view and we investigate the obstruction class of an extendable deformation of order \(n\). We end this paper by introducing a new algebraic structure, in connection with generalized Reynolds operators, called NS-Lie triple system. Moreover, we show that NS-Lie triple systems can be derived from NS-Lie algebras.
**Key words** : Lie triple system, generalized Reynolds operator, Maurer-Cartan element, \(L_{\infty}\)-algebra, Lie-Yamaguti cohomology, deformation, NS-Lie triple system.
**Mathematics Subject Classification** (2020) : 17B15, 17A40, 17B56, 17B10, 17B38.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Generalized Reynolds operators on Lie triple systems with representations
* 3.1 Weighted Reynolds operators on L.t.s
* 3.2 Generalized Reynolds operators on L.t.sRep pairs
* 4 |
2304.10274 | Counting geodesics of given commutator length | Let $\Sigma$ be a closed hyperbolic surface. We study, for fixed $g$, the
asymptotics of the number of those periodic geodesics in $\Sigma$ having at
most length $L$ and which can be written as the product of $g$ commutators. The
basic idea is to reduce these results to being able to count critical
realizations of trivalent graphs in $\Sigma$. In the appendix we use the same
strategy to give a proof of Huber's geometric prime number theorem. | Viveka Erlandsson, Juan Souto | 2023-04-20T12:55:59Z | http://arxiv.org/abs/2304.10274v1 | # Counting geodesics of given commutator length
###### Abstract.
Let \(\Sigma\) be a closed hyperbolic surface. We study, for fixed \(g\), the asymptotics of the number of those periodic geodesics in \(\Sigma\) having at most length \(L\) and which can be written as the product of \(g\) commutators. The basic idea is to reduce these results to being able to count critical realizations of trivalent graphs in \(\Sigma\). In the appendix we use the same strategy to give a proof of Huber's geometric prime number theorem.
The first author gratefully acknowledges support from EPSRC grant EP/T015926/1 and UiT Aurora Center for Mathematical Structures in Computations (MASCOT). No data was used in this research.
where \(g_{\Sigma}\) is the genus of \(\Sigma\). Phillips-Sarnak [16] get this same result with an estimate on the error term.
Here we will be counting certain kinds of homologically trivial curves. Every homologically trivial curve \(\gamma\) can be represented, up to free homotopy, as the product of commutators in \(\pi_{1}(\Sigma)\). The smallest number of commutators needed is the _commutator length_ of \(\gamma\). This quantity agrees with the genus of the smallest connected oriented surface \(S\) with connected boundary \(\partial S\) for which there is a (continuous) map \(S\to\Sigma\) sending \(\partial S\) to \(\gamma\). This explains why we think of the commutator length \(cl(\gamma)\) of \(\gamma\) as the _genus of \(\gamma\)_.
In this paper we study, for fixed but otherwise arbitrary \(g\geqslant 1\), the asymptotic behavior of the cardinality of the set
\[\mathbf{B}_{g}(L)=\left\{\gamma\subset\Sigma\ \middle|\begin{array}{l} \text{closed geodesic with }\ell_{\Sigma}(\gamma)\leqslant L\text{ and }\\ \text{commutator length }cl(\gamma)=g\end{array}\right\} \tag{1.3}\]
This is our main result:
**Theorem 1.1**.: _Let \(\Sigma\) be a closed, connected, and oriented hyperbolic surface and for \(g\geqslant 1\) and \(L>0\) let \(\mathbf{B}_{g}(L)\) be as in (1.3). We have_
\[|\mathbf{B}_{g}(L)|\sim\frac{2}{12^{g}\cdot g!\cdot(3g-2)!\cdot\operatorname{ vol}(T^{1}\Sigma)^{2g-1}}\cdot L^{6g-4}\cdot e^{\frac{L}{2}}\]
_as \(L\to\infty\)._
Here, as we will throughout the paper, we have endowed the unit tangent bundle \(T^{1}\Sigma\) with the Liouville measure, normalized in such a way that \(\operatorname{vol}(T^{1}\Sigma)=2\pi\cdot\operatorname{vol}(\Sigma)=-4\pi^{2} \chi(\Sigma)\). In particular we have that, as in (1.2), the quantities in Theorem 1.1 depend on the topology of the underlying surface \(\Sigma\) but there is no dependence on its geometry.
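For concreteness, here is a worked specialization of Theorem 1.1 (not part of the original statement): take \(g=1\) and let \(\Sigma\) have genus \(2\), so that \(\operatorname{vol}(T^{1}\Sigma)=4\pi^{2}\cdot|\chi(\Sigma)|=8\pi^{2}\). Then

\[|\mathbf{B}_{1}(L)|\sim\frac{2}{12\cdot 1!\cdot 1!\cdot 8\pi^{2}}\cdot L^{2}\cdot e^{\frac{L}{2}}=\frac{1}{48\pi^{2}}\cdot L^{2}\cdot e^{\frac{L}{2}},\]

that is, the number of closed geodesics of length at most \(L\) with commutator length one grows like \(L^{2}e^{L/2}\) up to an explicit constant.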
Note that by definition the curves in \(\mathbf{B}_{g}(L)\) bound a surface of genus \(g\), but that this surface is just the image of a continuous, or if you want smooth, map. We prove that only a small but definite proportion of the elements in \(\mathbf{B}_{g}(L)\) arise as the boundary of an immersed surface of genus \(g\) (and connected boundary):
**Theorem 1.2**.: _Let \(\Sigma\) be a closed, connected, and oriented hyperbolic surface and for \(g\geqslant 1\) and \(L>0\) let \(\mathbf{B}_{g}(L)\) be as in (1.3). We have_
\[|\{\gamma\in\mathbf{B}_{g}(L)\text{ bounds immersed surface of genus }g\}|\sim\frac{1}{2^{4g-2}}|\mathbf{B}_{g}(L)|\]
_as \(L\to\infty\)._
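As a quick numerical reading of the statement (not an additional claim): for \(g=1\) the proportion \(\frac{1}{2^{4g-2}}\) equals \(\frac{1}{4}\), so asymptotically only one in four genus \(1\) geodesics of length at most \(L\) bounds an immersed one-holed torus.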
Theorem 1.2 should perhaps be compared with the Immersion Theorem in [4, Theorem 4.79]. The Immersion Theorem asserts that every homologically trivial geodesic \(\gamma\) in \(\Sigma\) virtually bounds an immersed surface, meaning that there is an immersion into \(\Sigma\) of a compact surface \(S\) in such a way that the boundary of \(S\) is mapped onto \(\gamma\) with positive degree. In some sense Theorem 1.2 seems to suggest that, most of the time, the genus of \(S\) does not agree with the genus of \(\gamma\).
In the course of the proof of Theorem 1.1 we will need a different counting result that we believe has its own interest. Suppose namely that \(X\) is a compact trivalent graph. Under a critical realization of \(X\) in \(\Sigma\) we understand a (continuous) map
\[\phi:X\to\Sigma\]
sending each edge to a non-degenerate geodesic segment in such a way that any two (germs of) edges incident to the same vertex are sent to geodesic segments meeting at angle \(\frac{2\pi}{3}\) (see Lemma 2.2 for an explanation of the etymology of this terminology). Although it may not be completely evident to the reader at this point, the set
\[\mathbf{G}^{X}(L)=\left\{\begin{array}{c}\phi:X\to\Sigma\text{ critical realization}\\ \text{with length }\ell_{\Sigma}(\phi)\leqslant L\end{array}\right\} \tag{1.4}\]
is finite, where the _length_ of a critical realization is defined to be the sum
\[\ell_{\Sigma}(\phi)=\sum_{e\in\mathbf{edge}(X)}\ell_{\Sigma}(\phi(e))\]
of the lengths of the geodesic segments \(\phi(e)\) when \(e\) ranges over the set of edges of \(X\). The following is the key to Theorem 1.1:
**Theorem 1.3**.: _Let \(\Sigma\) be a closed, connected, and oriented hyperbolic surface. For every connected trivalent graph \(X\) we have_
\[|\mathbf{G}^{X}(L)|\sim\left(\frac{2}{3}\right)^{3\chi(X)}\cdot\frac{\operatorname {vol}(T^{1}\Sigma)^{\chi(X)}}{(-3\chi(X)-1)!}\cdot L^{-3\chi(X)-1}\cdot e^{L}\]
_as \(L\to\infty\). Here \(\chi(X)\) is the Euler-characteristic of the graph \(X\)._
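To illustrate the exponents in Theorem 1.3, here is a worked instance (not part of the original statement): let \(X\) be the graph with two vertices joined by three edges (the theta graph), so that \(\chi(X)=2-3=-1\). Then the theorem gives

\[|\mathbf{G}^{X}(L)|\sim\left(\frac{2}{3}\right)^{-3}\cdot\frac{\operatorname{vol}(T^{1}\Sigma)^{-1}}{2!}\cdot L^{2}\cdot e^{L}=\frac{27}{16}\cdot\frac{L^{2}\cdot e^{L}}{\operatorname{vol}(T^{1}\Sigma)}.\]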
Let us sketch the proof of Theorem 1.3. Recall first that there are basically two approaches (that we know of) to establish Huber's theorem (1.1): either one approaches it a la Huber [10], that is from the point of spectral analysis or, as Margulis did later [12], exploiting the ergodic properties of the geodesic flow. Indeed, already the predecessor to Huber's theorem, namely Delsarte's lattice point counting theorem [7] can be approached from these two different points of view. In Section 3 below we will sketch the argument to derive Delsarte's theorem from the fact that the geodesic flow is mixing, discussing also the count of those geodesic arcs in \(\Sigma\) going from \(x\) to \(y\) and whose initial and terminal speed are in predetermined sectors of the respective unit tangent spaces--see Theorem 3.1 for the precise statement. This is indeed all the dynamics we will need in the proof of the theorems above. To be clear: we do not make use of any of the slightly rarified refinements of the mixing property of the geodesic flow. Specifically, we do not need exponential mixing or such.
The basic idea of the proof of Theorem 1.3 is to note that, for a fixed graph \(X\), the set of all its realizations in \(\Sigma\), that is maps \(X\to\Sigma\) mapping each edge geodesically, is naturally a manifold. Generically, each connected component contains a unique critical realization. To count how many such
critical realizations there are with total length less than \(L\) we consider for small \(\varepsilon\) the set \(\mathcal{G}_{\varepsilon-\operatorname{crit}}(L)\) of geodesic realizations of length at most \(L\) and where the angles, instead of being \(\frac{2\pi}{3}\) on the nose, are in the interval of size \(\varepsilon\) around that number. Delsarte's theorem allows us to compute the volume \(\operatorname{vol}(\mathcal{G}_{\varepsilon-\operatorname{crit}}(L))\) of that set of geodesic realizations. A little bit of hyperbolic geometry then shows that most of the connected components of \(\mathcal{G}_{\varepsilon-\operatorname{crit}}(L)\) have basically the same volume. We thus know, with a small error, how many connected components we have, and thus how many critical realizations. This concludes the sketch of the proof of Theorem 1.3.
_Remark_.: The strategy used to prove Theorem 1.3 can be used to give a pretty easy proof of Huber's theorem (1.1). We work this out in the appendix, and although there is no logical need of doing so, we encourage the reader to have a look at it before working out the details of the proof of Theorem 1.3.
Let us also sketch the proof of Theorem 1.1. A _fat graph_ is basically a graph which comes equipped with a regular neighborhood \(\operatorname{\mathbf{neigh}}(X)\) homeomorphic to an oriented surface. Such a fat graph _has genus \(g\)_ if this regular neighborhood is homeomorphic to a compact surface of genus \(g\) with connected boundary. The point of considering fat graphs is that whenever we have a realization \(\phi:X\to\Sigma\) of the graph underlying a genus \(g\) fat graph then we get a curve, namely \(\phi(\partial\operatorname{\mathbf{neigh}}(X))\) which has at most genus \(g\). The basic idea of the proof of Theorem 1.1 is to consider the set
\[\mathbf{X}_{g}=\left\{(X,\phi)\left|\begin{array}{c}X\text{ is a fat graph of genus }g\text{ and }\\ \phi:X\to\Sigma\text{ is a critical realization }\\ \text{ of the underlying graph }\end{array}\right.\right\}\Bigg{/}_{ \text{equiv}}\]
of (equivalence classes) of realizations (see Section 7 for details) and prove that the map
\[\Lambda:\mathbf{X}_{g}\to\mathbf{C},\ \ (X,\phi)\mapsto\text{ geodesic homotopic to }\phi(\partial X)\]
is basically bijective onto \(\mathbf{B}_{g}\) and that generically, the geodesic \(\Lambda(X,\phi)\) has length almost exactly equal to \(2\cdot\ell(\phi)-C\) for some explicit constant \(C\). Once we are here we get the statement of Theorem 1.1 from Theorem 1.3 together with a result by Bacher and Vdovina [1]. Other than proving Theorem 1.3, the bulk of the work is to establish the properties of the map \(\Lambda\). The key step is to bound the number of curves with at most length \(L\) and which arise in two essentially different ways as the boundary of genus \(g\) surfaces:
**Theorem 1.4**.: _For any \(g\) there are at most \(\operatorname{\mathbf{const}}\cdot L^{6g-5}\cdot e^{\frac{L}{2}}\) genus \(g\) closed geodesics \(\gamma\) in \(\Sigma\) with length \(\ell(\gamma)\leqslant L\) and with the property that there are two non-homotopic fillings \(\beta_{1}:S_{1}\to\Sigma\) and \(\beta_{2}:S_{2}\to\Sigma\) of genus \(\leqslant g\)._
Here, a genus \(g\) filling of \(\gamma\) is a continuous map \(\beta:S\to\Sigma\) from a genus \(g\) surface \(S\) with connected boundary such that \(\beta(\partial S)=\gamma\).
Theorem 1.4 may look kind of weak because we are only bounding by \(\frac{\operatorname{\mathbf{const}}}{L}\) the proportion of those elements in \(\mathbf{B}_{g}(L)\) that we are double counting when
we count surfaces of genus \(g\) instead of counting curves. It should however be noted that this is the order of the error term in (1.2) (see [16]), and this is indeed the order of error term that we expect in the results we prove here. For what it is worth, it is not hard to show that the set of those \(\gamma\) in Theorem 1.4 is at least of the order of \(\mathbf{const}\cdot L^{6g-10}\cdot e^{\frac{L}{2}}\). Indeed, if \(\omega\in\pi_{1}(\Sigma)\) arises in two ways as a commutator, for example if we choose \(\omega=[aabab,ba^{-1}a^{-1}]=[aaba,bba^{-1}]\), and if \(\eta\) is a randomly chosen product of \(g-1\) generators then \(\omega\eta\) arises in two different ways as a product of \(g\) commutators and hence admits two non-homotopic fillings, and there are \(\mathbf{const}\cdot L^{6g-10}\cdot e^{\frac{L}{2}}\) many choices for \(\eta\).
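For the reader who wants to verify the claimed double factorization, here is the free-group computation (with the convention \([x,y]=xyx^{-1}y^{-1}\); this check is not spelled out in the original): both products reduce to the same word,

\[[aabab,\,ba^{-1}a^{-1}]=aababba^{-1}a^{-1}b^{-1}a^{-1}b^{-1}b^{-1}=[aaba,\,bba^{-1}],\]

the only cancellations occurring at the junction between \(x^{-1}\) and \(y^{-1}\) in each commutator.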
### Section-by-section summary
In Section 2 we discuss realizations of graphs and the topology and geometry of spaces of realizations.
In Section 3 we discuss Delsarte's classical lattice point counting result and a couple of minimal generalizations thereof.
At this point we will have all tools needed to prove Theorem 1.3. This is done in Section 4.
In Section 5 and Section 6 we work out the geometric aspects of the proof of Theorem 1.1, the main result being Theorem 1.4. Although we do not use any results about pleated surfaces, some of the arguments in these two sections will come pretty natural to those readers used to working with such objects.
In Section 7 we prove Theorem 1.1 combining the results of the previous two sections with Theorem 1.3. Theorem 1.2 is proved in Section 8.
Finally, Section 9 is dedicated to discussing what parts of what we do hold if we soften the assumption that \(\Sigma\) is a closed hyperbolic surface.
To conclude, we present in Appendix A a proof of Huber's theorem using the same idea as in the proof of Theorem 1.3.
_Remark_.: After conclusion of this paper we learned that the problem of calculating how many elements arise as commutators in some group has also been treated in other cases. More concretely, Park [15] uses Wicks forms to get asymptotics, as \(L\to\infty\), for the number of elements in the free group \(\mathbb{F}_{r}\) of rank \(r\) which arise as a commutator and have word-length \(L\). In the same paper he also treats the case that the group is a free product of two non-trivial finite groups. It would be interesting to figure out if it is possible to apply the methods here to recover Park's beautiful theorems.
### Acknowledgements
We thank Vincent Delecroix, Lars Louder, Michael Magee, and Peter Sarnak for their very helpful comments. The second author is also very grateful for the patience shown by Sebastien Gouezel, Vincent Guirardel and Francois Maucourant with all his questions--it is nice to have one's office next to theirs.
**Notation.**
Under a graph \(X\) we understand a \(1\)-dimensional CW-complex with finitely many cells. We denote by \(\operatorname{\mathbf{vert}}=\operatorname{\mathbf{vert}}(X)\) the set of vertices, by \(\operatorname{\mathbf{edge}}=\operatorname{\mathbf{edge}}(X)\) the set of edges of \(X\), and by \(\operatorname{\mathbf{half}}=\operatorname{\mathbf{half}}(X)\) the set of half-edges of a graph \(X\)--a half-edge is nothing other than the germ of an edge. Given a vertex \(v\in\operatorname{\mathbf{vert}}(X)\) we let \(\operatorname{\mathbf{half}}_{v}=\operatorname{\mathbf{half}}_{v}(X)\) be the set of half-edges emanating out of \(v\). Note that two elements of \(\operatorname{\mathbf{half}}_{v}\) might well correspond to the same edge--that is, \(X\) might have edges which are incident to the same vertex on both ends. The cardinality of \(\operatorname{\mathbf{half}}_{v}\) is the _degree_ of \(X\) at \(v\) and we say that \(X\) is _trivalent_ if its degree is \(3\) at every vertex. The reader can safely assume that the graphs they encounter are trivalent.
When it comes to surfaces, we will be working all the time with the same underlying surface, our fixed closed connected and oriented hyperbolic surface \(\Sigma=\Gamma\backslash\mathbb{H}^{2}\). We identify \(T^{1}\Sigma\) with \(\Gamma\backslash T^{1}\mathbb{H}^{2}=\Gamma\setminus\operatorname{PSL}_{2} \mathbb{R}\), and endow \(T^{1}\Sigma\) with the distance induced by a \(\operatorname{PSL}_{2}\mathbb{R}\) left-invariant Riemannian metric on \(\operatorname{PSL}_{2}\mathbb{R}\), say one that gives length \(2\pi\) to each unit tangent space \(T^{1}_{x_{0}}\Sigma\) and such that the projection \(T^{1}\mathbb{H}^{2}\to\mathbb{H}^{2}\) is a Riemannian submersion. This means that the unit tangent bundle has volume \(\operatorname{vol}(T^{1}\Sigma)=2\pi\cdot\operatorname{vol}(\Sigma)=4\pi^{2} \cdot|\chi(\Sigma)|\). Angles between tangent vectors based at the same point of \(\Sigma\) will always be unoriented, meaning that they take values in \([0,\pi]\)--note that this is consistent with the unit tangent spaces having length \(2\pi\).
Often we will denote the discrete sets we are counting by boldface capitals, such as \(\mathbf{C}\) or \(\mathbf{B}\). They will often come with a wealth of decorations such as for example \(\mathbf{G}^{X}(L)\) or \(\mathbf{B}_{g}(L)\). Often these sets arise as discrete subsets of larger spaces which will be denoted by calligraphic letters. Often the boldfaced and the calligraphic letters go together: \(\mathbf{G}\) will be a subset of the space \(\mathcal{G}\).
A comment about constants. In this paper there are two kinds of constants: the ones whose value we are trying to actually compute, and those about which we just need to know that they exist. Evidently the first kind we have to track carefully. It would however be too painful, and for no clear gain, to do the same with all possible constants. And in general constants tend to breed more and more constants. This is why we just write \(\operatorname{\mathbf{const}}\) for a constant whose actual value is irrelevant, allowing the precise value of \(\operatorname{\mathbf{const}}\) to change from line to line. We hope that this does not cause any confusion.
And now a comment about Euclidean vectors. All vectors arising here, indicated with an arrow as in \(\vec{v}=(v_{1},\dots,v_{k})\), are _positive_ in the sense that the entries \(v_{i}\) are positive. In other words, they live in \(\mathbb{R}^{k}_{+}\) or maybe in \(\mathbb{N}^{k}\). We will write
\[\|\vec{v}\|=|v_{1}|+\dots+|v_{k}|=v_{1}+\dots+v_{k}\]
for the \(L^{1}\)-norm on vectors. This is the only norm we will encounter here.
## 2. Realizations of graphs in surfaces
In this section we discuss certain spaces of maps of graphs into a surface. These spaces play a key role in this paper. Although long, the material here should be nice and pleasant to read--only the proof of Proposition 2.3 takes some amount of work.
### Realizations
We will be interested in connected graphs living inside our hyperbolic surface \(\Sigma\), or more precisely in continuous maps
\[\phi:X\to\Sigma \tag{2.1}\]
of graphs \(X\) into \(\Sigma\) which when restricted to each edge are geodesic. We will say that such a map (2.1) is a _realization of \(X\) in \(\Sigma\)_. We stress that realizations do not need to be injective, and that in fact the map \(\phi\) could be constant on certain edges, or even on larger pieces of the graph. A _regular_ realization is one whose restriction to every edge is non-constant. If \(\phi:X\to\Sigma\) is a regular realization and if \(\vec{e}\in\mathbf{half}(X)\) is a half-edge incident to a vertex \(v\in\mathbf{vert}(X)\) then we denote by \(\phi(\vec{e})\in T^{1}_{\phi(v)}\Sigma\) the unit tangent vector at \(\phi(v)\) pointing in the direction of the image of \(\vec{e}\).
We endow the set \(\mathcal{G}^{X}\) of all realizations of the (always connected) graph \(X\) with the compact-open topology and note that unless \(X\) itself is contractible, the space \(\mathcal{G}^{X}\) is not connected: the connected components of \(\mathcal{G}^{X}\) correspond to the different possible free homotopy classes of maps of \(X\) into \(\Sigma\). Indeed, pulling segments tight relative to their endpoints we get that any homotopy
\[[0,1]\times X\to\Sigma,\ \ (t,x)\mapsto\varphi_{t}(x)\]
between two realizations is homotopic, relative to \(\{0,1\}\times X\), to a homotopy, which we are still denoting by the same symbol, such that
1. \(t\mapsto\varphi_{t}(v)\) is geodesic for every vertex \(v\in\mathbf{vert}(X)\), and
2. \(\varphi_{t}:X\to\Sigma\) is a realization for all \(t\).
A homotopy \([0,1]\times X\to\Sigma\) satisfying (1) and (2) is said to be a _geodesic homotopy_.
Geodesic homotopies admit a different intrinsic description. Indeed, uniqueness of geodesic representatives in each homotopy class of arcs implies that each realization \(\phi\in\mathcal{G}^{X}\) has a neighborhood which is parametrized by the image of the vertices. This implies that the map
\[\Pi:\mathcal{G}^{X}\to\Sigma^{\mathbf{vert}(X)},\ \ \phi\mapsto(\phi(v))_{v\in \mathbf{vert}(X)} \tag{2.2}\]
is a cover. Pulling back the product of hyperbolic metrics we think of it as a manifold locally modeled on the product \((\mathbb{H}^{2})^{\mathbf{vert}(X)}=\mathbb{H}^{2}\times\cdots\times\mathbb{H }^{2}\) of \(\mathbf{vert}(X)\) worth of copies of the hyperbolic plane. Geodesic homotopies are, from this point of view, nothing other than geodesics in \(\mathcal{G}^{X}\).
Since all of this will be quite important, we record it here as a proposition:
**Proposition 2.1**.: _Let \(X\) be a graph. The map (2.2) is a cover and geodesic homotopies are geodesics with respect to the pull-back metric._
### Length function
On the space \(\mathcal{G}^{X}\) of realizations of the graph \(X\) in \(\Sigma\) we have the _length function_
\[\ell_{\Sigma}:\mathcal{G}^{X}\to\mathbb{R}_{\geqslant 0},\ \ \ell_{\Sigma}(\phi)= \sum_{e\in\mathbf{edge}(X)}\ell_{\Sigma}(\phi(e)).\]
First note that Arzela-Ascoli implies that \(\ell_{\Sigma}\) is a proper function. It follows thus that the restriction of \(\ell_{\Sigma}\) to any and every connected component of \(\mathcal{G}^{X}\) has a minimum. Now, convexity of the distance function \(d_{\mathbb{H}^{2}}(\cdot,\cdot)\) implies that \(\ell_{\Sigma}\) is convex. More precisely, if \((\varphi_{t})\) is a geodesic homotopy between two realizations in \(\Sigma\) then the function \(t\mapsto\ell_{\Sigma}(\varphi_{t})\) is convex. Indeed, it is strictly convex unless the image of \([0,1]\times X\) is contained in a geodesic in \(\Sigma\), or rather if the image of some (and hence any) lift to the universal cover is contained in a geodesic in \(\mathbb{H}^{2}\).
Note now also that the length function is smooth when restricted to the set of regular realizations--its derivative is given by the first variation formula
\[\frac{d}{dt}\ell_{\Sigma}(\phi_{t})=\sum_{v\in\mathbf{vert}(X)}\sum_{\vec{e} \in\mathbf{half}_{v}(X)}\left\langle-\phi(\vec{e}),\frac{d}{dt}\phi_{t}(v)\right\rangle\]
where \(\phi(\vec{e})\in T^{1}_{\phi(v)}\Sigma\) is the unit tangent vector based at \(\phi(v)\) and pointing in the direction of the image of the half-edge \(\vec{e}\). It follows that a regular realization \(\phi\) is a critical point for the length function \(\ell_{\Sigma}:\mathcal{G}^{X}\to\mathbb{R}_{\geqslant 0}\) if and only if for every \(v\in\mathbf{vert}(X)\) we have \(\sum_{\vec{e}\in\mathbf{half}_{v}}\phi(\vec{e})=0\). Note that this implies, in the case relevant to us, namely that \(X\) is trivalent, that the (unsigned) angle \(\angle(\phi(\vec{e}_{1}),\phi(\vec{e}_{2}))\) between the images of any two half-edges incident to the same vertex is equal to \(\frac{2\pi}{3}\).
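For the reader's convenience, here is the elementary observation behind the last sentence (a standard fact, not spelled out in the text): if \(u,v,w\) are unit vectors in a tangent plane with \(u+v+w=0\), then

\[1=|w|^{2}=|u+v|^{2}=2+2\langle u,v\rangle,\qquad\text{so}\qquad\langle u,v\rangle=-\tfrac{1}{2}\]

and hence \(\angle(u,v)=\frac{2\pi}{3}\), and likewise for the other two pairs; conversely, three coplanar unit vectors with pairwise angles \(\frac{2\pi}{3}\) add up to zero.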
**Definition**.: _A regular realization \(\phi:X\to\Sigma\) of a trivalent graph \(X\) into \(\Sigma\) is critical if we have_
\[\angle(\phi(\vec{e}_{1}),\phi(\vec{e}_{2}))=\frac{2\pi}{3}\]
_for every vertex \(v\in\mathbf{vert}(X)\) and for any two distinct \(\vec{e}_{1},\vec{e}_{2}\in\mathbf{half}_{v}(X)\)._
We collect in the next lemma a few of the properties of critical realizations:
**Lemma 2.2**.: _Let \(X\) be a trivalent graph. A regular realization \(\phi\in\mathcal{G}^{X}\) is a critical point for the length function if and only if \(\phi\) is a critical realization. Moreover, if \(\phi\in\mathcal{G}^{X}\) is a critical realization and \(\mathcal{G}^{\phi}\) is the connected component of \(\mathcal{G}^{X}\) containing \(\phi\), then the following holds:_
1. \(\phi\) _is the unique critical realization in_ \(\mathcal{G}^{\phi}\)_._
2. \(\phi\) _is the global minimum of the length function on_ \(\mathcal{G}^{\phi}\)_._
3. _Besides_ \(\phi\)_, there are no other local minima in_ \(\mathcal{G}^{\phi}\) _of the length function._
4. _The connected component_ \(\mathcal{G}^{\phi}\) _is isometric to the product_ \(\mathbb{H}^{2}\times\cdots\times\mathbb{H}^{2}\) _of_ \(\mathbf{vert}(X)\) _many copies of the hyperbolic plane._
Note that lack of global smoothness of the length function means that we cannot directly derive (2) and (3) from (1). This is why they appear as independent statements.
Proof.: Statement (1) was actually discussed in the paragraph preceding the definition of critical realization. Let us focus on the subsequent ones. Recall that if
\[[0,1]\to\mathcal{G}^{\phi},\ t\mapsto\phi_{t}\]
is a (non-constant) geodesic homotopy with \(\phi_{0}=\phi\), then the length function
\[t\mapsto\ell_{\Sigma}(\phi_{t}(X)) \tag{2.3}\]
is convex. Well, in our situation it is strictly convex: since \(\phi\) is critical we get that its image, or rather the image of its lifts to the universal cover, is not contained in a geodesic because if it were, then the angles between any two half-edges starting at the same vertex could only take the values \(0\) or \(\pi\). Now, strict convexity and the fact that \(\phi=\phi_{0}\) is a critical point of the length function implies that \(t=0\) is the minimum and only critical point of the function (2.3). From here we get directly (2) and (3). To prove (4) note that if \(\mathcal{G}^{\phi}\) were not simply connected, then there would be a non-trivial geodesic homotopy starting and ending at \(\phi\), contradicting the strict convexity of the length function. Note now that restricting the cover (2.2) to \(\mathcal{G}^{\phi}\) we get a locally isometric cover \(\mathcal{G}^{\phi}\to\Sigma^{\mathbf{vert}(X)}\). Since the domain of this cover is connected and simply connected, we get that it is nothing other than the universal cover of \(\Sigma^{\mathbf{vert}(X)}\). This proves (4) and concludes the proof of the lemma.
_Remark_.: Although we will not need it here, let us comment briefly on the topology of the connected components of \(\mathcal{G}^{X}\). The fundamental group of the connected component \(\mathcal{G}^{\phi}\) containing a realization \(\phi\in\mathcal{G}^{X}\) is isomorphic to the centralizer \(\mathcal{Z}_{\pi_{1}(\Sigma)}(\phi_{*}(\pi_{1}(X)))\) in \(\pi_{1}(\Sigma)\) of the image of \(\pi_{1}(X)\) under \(\phi_{*}:\pi_{1}(X)\to\pi_{1}(\Sigma)\). It follows that \(\mathcal{G}^{\phi}\) is isometric to the quotient under the diagonal action of \(\mathcal{Z}_{\pi_{1}(\Sigma)}(\phi_{*}(\pi_{1}(X)))\) of \(\mathbb{H}^{2}\times\cdots\times\mathbb{H}^{2}\) of \(\mathbf{vert}(X)\)-copies of the hyperbolic plane. In particular we have:
* If \(\phi\) is homotopically trivial then \(\mathcal{G}^{\phi}\simeq\pi_{1}(\Sigma)\backslash(\mathbb{H}^{2}\times\cdots \times\mathbb{H}^{2})\).
* If \(\phi_{*}(\pi_{1}(X))\neq\operatorname{Id}_{\pi_{1}(\Sigma)}\) is abelian then \(\mathcal{G}^{\phi}\simeq\mathbb{Z}\backslash(\mathbb{H}^{2}\times\cdots \times\mathbb{H}^{2})\).
* If \(\phi_{*}(\pi_{1}(X))\) is non-abelian then \(\mathcal{G}^{\phi}\simeq\mathbb{H}^{2}\times\cdots\times\mathbb{H}^{2}\).
In the appendix we will give a concrete description of the components of \(\mathcal{G}^{X}\) for the case that \(X\) is a loop, that is the graph with a single vertex and a single edge.
Lemma 2.2, with all its beauty, does not say anything about the existence of critical realizations. Our next goal is to prove that any realization that from far away kind of looks like a critical realization is actually homotopic to a critical realization.
**Quasi-critical realizations.** Recall that a regular realization \(\phi:X\to\Sigma\) of a trivalent graph \(X\) is _critical_ if the angles between the images of any two half-edges incident to the same vertex are equal to \(\frac{2\pi}{3}\). We will say that a regular realization is _quasi-critical_ if those angles are bounded from below by \(\frac{1}{2}\pi\) and that the realization is \(\ell_{0}\)_-long_ if \(\ell(\phi(e))>\ell_{0}\) for all edges \(e\) of \(X\). Recall that we measure all angles in \([0,\pi]\).
Our next goal here is to prove that every sufficiently long quasi-critical realization is homotopic to a critical realization. This will follow easily from the following technical result:
**Proposition 2.3**.: _For any trivalent graph \(X\) and constant \(C\geq 0\) there exist constants \(\ell_{0},D>0\) such that given an \(\ell_{0}\)-long quasi-critical realization \(\phi:X\to\Sigma\), a trivalent graph \(Y\), and a realization \(\psi:Y\to\Sigma\) satisfying_
1. \(\ell(\psi)\leq\ell(\phi)+C\)_, and_
2. _there is a homotopy equivalence_ \(\sigma:X\to Y\) _with_ \(\phi\) _and_ \(\psi\circ\sigma\) _homotopic,_
_then there exists a homeomorphism \(F:Y\to X\) mapping each edge with constant speed, such that \(\sigma\circ F\) is homotopic to the identity and such that the geodesic homotopy \(X\times[0,1]\to\sigma\), \((x,t)\mapsto\phi_{t}(x)\) joining \(\phi_{0}(\cdot)=\phi\circ F(\cdot)\) to \(\phi_{1}(\cdot)=\psi(\cdot)\) has tracks bounded by \(D\)._
Since the proof of Proposition 2.3 is pretty long, let us first demonstrate that it might be useful:
**Corollary 2.4**.: _Let \(X\) be a trivalent graph. There are positive constants \(\ell_{0}\) and \(D\) such that every component of \(\mathcal{G}^{X}\) which contains an \(\ell_{0}\)-long quasi-critical realization \(\phi:X\to\Sigma\), also contains a critical realization \(\psi\), which moreover is unique and homotopic to \(\phi\) by a homotopy whose tracks have length bounded by \(D\)._
Proof.: Let \(\ell_{0}\) and \(D\) be given by Proposition 2.3 for \(C=0\) and if needed increase \(\ell_{0}\) so that it is larger than \(2D\). Let \(\phi:X\to\Sigma\) be an \(\ell_{0}\)-long quasi-critical realization. Let \(\psi:X\to\Sigma\) be the minimizer for the length function in the component \(\mathcal{G}^{\phi}\)--it exists because the length function is proper. We claim that \(\psi\) is critical. In the light of Lemma 2.2 it suffices to prove that \(\psi\) is regular.
Being a minimizer we have that \(\ell_{\Sigma}(\psi)\leqslant\ell_{\Sigma}(\phi)\). We thus get from Proposition 2.3 that there exists a homeomorphism \(F:X\to X\) homotopic to the identity, mapping edges with constant speed, and such that the geodesic homotopy from \(\phi\circ F\) to \(\psi\) has tracks bounded by \(D\). Now, since \(X\) is trivalent we get that the homeomorphism \(F\), being homotopic to the identity and mapping edges with constant speed, is actually equal to the identity. What we thus have is a homotopy from \(\phi\) and \(\psi\) with tracks bounded by \(D\). Now, since each edge of \(\phi(X)\) has at least length \(\ell_{0}>2D\) and since the tracks of the homotopy are bounded by \(D\), we get that \(\psi\) is regular and hence critical by Lemma 2.2, as we needed to prove. Lemma 2.2 also yields that \(\psi\) is unique.
Let us next prove the proposition:
Proof of Proposition 2.3.: Let \(\phi:X\to\Sigma\) be a quasi-critical realization and assume it is \(\ell_{0}\)-long for an \(\ell_{0}\) large enough to satisfy some conditions we will give in the course of the proof. Let \(H:X\times[0,1]\to\Sigma\) be the homotopy between \(\phi\) and \(\psi\circ\sigma\).
To be able to consistently choose lifts of \(\phi\), \(\psi\), and \(\sigma\) to the universal covers \(\widetilde{X},\widetilde{Y}\) and \(\mathbb{H}^{2}\) of \(X,Y\) and \(\Sigma\), let us start by picking base points. Fixing \(x_{0}\in X\), consider the base points \(\sigma(x_{0})\in Y\) and \(\phi(x_{0})\in\Sigma\), and pick lifts \(\widetilde{x}_{0}\in\widetilde{X}\), \(\widetilde{\sigma(x_{0})}\in\widetilde{Y}\) and \(\widetilde{\phi(x_{0})}\in\mathbb{H}^{2}\) of each one of those endpoints. Having chosen those base points we have uniquely determined lifts \(\widetilde{\phi}:\widetilde{X}\to\mathbb{H}^{2}\) and \(\widetilde{\sigma}:\widetilde{X}\to\widetilde{Y}\) of \(\phi\) and \(\sigma\) satisfying \(\widetilde{\phi}(\widetilde{x}_{0})=\widetilde{\phi(x_{0})}\) and \(\widetilde{\sigma}(\widetilde{x}_{0})=\widetilde{\sigma(x_{0})}\). We can also lift to \(\mathbb{H}^{2}\), starting at \(\widetilde{\phi}(\widetilde{x_{0}})\), the path in \(\Sigma\) given by \(t\mapsto H(x_{0},t)\). The endpoint of this path is a lift of \(\psi\circ\sigma(x_{0})\) and we take the lift \(\widetilde{\psi}:\widetilde{Y}\to\mathbb{H}^{2}\) which maps \(\widetilde{\sigma}(\widetilde{x}_{0})\) to this point. All those lifts are related by the following equivariance property:
\[\widetilde{\phi}(g(x))=\phi_{*}(g)(\widetilde{\phi}(x))\text{ and }(\widetilde{ \psi}\circ\widetilde{\sigma})(g(x))=\phi_{*}(g)\left((\widetilde{\psi}\circ \widetilde{\sigma})(x)\right) \tag{2.4}\]
for all \(x\in\widetilde{X}\) and for all \(g\in\pi_{1}(X,x_{0})\) where \(\phi_{*}:\pi_{1}(X,x_{0})\to\pi_{1}(\Sigma,\phi(x_{0}))\) is the homomorphism induced by \(\phi\) and the chosen base points.
Note that, since \(\phi\) is almost critical and \(\ell_{0}\)-long we get, as long as \(\ell_{0}\) is large enough, that the lift \(\widetilde{\phi}:\widetilde{X}\to\mathbb{H}^{2}\) is an injective quasi-isometric embedding. For ease of notation, denote by \(T_{X}=\widetilde{\phi}(\widetilde{X})\subset\mathbb{H}^{2}\) the image of \(\widetilde{\phi}\) and let us rephrase what we just said: if \(\ell_{0}\) is sufficiently large then \(T_{X}\) is a quasiconvex tree, with quasiconvexity constants only depending on a lower bound for \(\ell_{0}\). We denote by \(\hat{\pi}:\mathbb{H}^{2}\to T_{X}\) a nearest point retraction which is equivariant under \(\phi_{*}(\pi_{1}(X,x_{0}))\). We would like to define
\[\pi:\widetilde{Y}\to T_{X}\]
as \(\hat{\pi}\circ\widetilde{\psi}\), but since \(\hat{\pi}\) is definitively not continuous, we have to be slightly careful. We define \(\pi\) as follows: first set \(\pi(v)=\hat{\pi}(\tilde{\psi}(v))\) for all vertices \(v\in\mathbf{vert}(\widetilde{Y})\) of \(\widetilde{Y}\), and then extend (at constant speed) over the edges of \(Y\). Note that (2.4), together with the equivariance of \(\pi\), implies that \(\pi\circ\widetilde{\sigma}:\widetilde{X}\to T_{X}\) satisfies
\[(\pi\circ\widetilde{\sigma})(g(x))=\phi_{*}(g)(\pi\circ\widetilde{\sigma}(x))\]
for all \(x\in\widetilde{X}\) and \(g\in\pi_{1}(X,x_{0})\).
**Claim 1**.: _As long as \(\ell_{0}\) is large enough, we have that \(\pi\) maps each vertex of \(\widetilde{Y}\) within \(\mathbf{const}\) of one and only one vertex of \(T_{X}\)._
Starting with the proof of the claim, note that there is a constant \(A\geqslant 0\) depending only on the quasiconvexity constants for \(T_{X}\) with
\[d_{\mathbb{H}^{2}}(x,y)\geqslant-A+d_{\mathbb{H}^{2}}(\hat{\pi}(x),\hat{\pi}(y)) \tag{2.5}\]
for all \(x,y\in\mathbb{H}^{2}\). In fact, there exists constants \(K,A^{\prime}>0\) which once again depend only on the quasiconvexity constant of \(T_{X}\) such that whenever \(d_{\mathbb{H}^{2}}(\hat{\pi}(x),\hat{\pi}(y))>K\) we have the better bound
\[d_{\mathbb{H}^{2}}(x,y)\geqslant-A^{\prime}+d_{\mathbb{H}^{2}}(x,\hat{\pi}(x) )+d_{\mathbb{H}^{2}}(\hat{\pi}(x),\hat{\pi}(y))+d_{\mathbb{H}^{2}}(\hat{\pi}(y ),y). \tag{2.6}\]
For \(a,b\in T_{X}\) let \(d_{T_{X}}(a,b)\) denote the interior distance in \(T_{X}\) between them. We will care about a modified version of this distance: let \(B\) be the collection of balls in \(\mathbb{H}^{2}\) of radius \(1\) centered at each vertex of \(T_{X}\) and define \(d_{T_{X}\mathrm{rel}B}(a,b)\) to be the length of the part of the path in \(T_{X}\) between them that lies outside of \(B\). For all \(\ell_{0}\) large enough (depending only on hyperbolicity of \(\mathbb{H}^{2}\)) we have from the choice of the radius1 that
Footnote 1: In fact, any radius greater than \(\log\sqrt{2}\approx 0.3466\) would work for large \(\ell_{0}\).
\[d_{T\mathrm{rel}B}(a,b)\leq d_{\mathbb{H}^{2}}(a,b) \tag{2.7}\]
for all \(a,b\in T_{X}\).
Given two edges \(e,e^{\prime}\) of \(\widetilde{Y}\) adjacent to the same vertex, we define
\[\mathrm{Fold}(e,e^{\prime})=\ell_{T\mathrm{rel}B}(\pi(e)\cap\pi(e^{\prime}))\]
and let then
\[\mathrm{Fold}(\pi)=\max\mathrm{Fold}(e,e^{\prime})\]
where the maximum is taken over all such pairs of edges. The reason to introduce this quantity is that we have
\[\sum_{[v,v^{\prime}]\in\mathbf{edge}(Y)}d_{T_{X}\mathrm{rel}B}(\pi(v),\pi(v^{ \prime}))\geq\ell(\phi)+\mathrm{Fold}(\pi)-A^{\prime\prime} \tag{2.8}\]
for some positive constant \(A^{\prime\prime}\) depending only on that number of vertices (and the fact that we chose the balls \(B\) to have radius \(1\))--here we have identified each edge of \(Y\) with one of its representatives in \(\widetilde{Y}\).
Now, using (2.5), (2.7) and (2.8) we get that
\[\ell(\psi) =\sum_{[v,v^{\prime}]\in\mathbf{edge}(Y)}d_{\mathbb{H}^{2}}(\tilde {\psi}(v),\tilde{\psi}(v^{\prime}))\] \[\geq-\,\mathbf{const}+\sum_{[v,v^{\prime}]\in\mathbf{edge}(Y)}d_ {\mathbb{H}^{2}}(\pi(v),\pi(v^{\prime}))\] \[\geq-\,\mathbf{const}+\sum_{[v,v^{\prime}]\in\mathbf{edge}(Y)}d_ {T_{X}\mathrm{rel}B}(\pi(v),\pi(v^{\prime}))\] \[\geq-\,\mathbf{const}+\ell(\phi)+\mathrm{Fold}(\pi)\]
where each \(\mathbf{const}\) is a positive constant (but not necessarily the same) depending on the quasiconvexity constant and combinatorics of \(Y\).
Since \(\ell(\psi)\leq\ell(\phi)+C\) it follows that \(\mathrm{Fold}(\pi)\leq\mathbf{const}\), where the constant depends on \(C\) and on the lower bound for \(\ell_{0}\) and on combinatorics of \(Y\). In particular, for each \(v\in\mathbf{vert}\) there is a vertex \(v^{\prime}\in T_{X}\) such that \(d_{T_{X}\mathrm{rel}B}(\pi(v),v^{\prime})\leq\mathbf{const}\) and we can choose \(\ell_{0}\) large such that \(v^{\prime}\) is unique. We have proved Claim 1.
Armed with Claim 1 we can define a map \(\widetilde{F}:\widetilde{Y}\to T_{X}\) by first mapping \(v\in\mathbf{vert}(\widetilde{Y})\), to the unique vertex in \(T_{X}\) closest to \(\pi(v)\), extending it so that it maps each \(e\in\mathbf{edge}(\widetilde{Y})\) with constant speed. Note that equivariance of \(\pi\) implies that \(\widetilde{F}\) satisfies \(\widetilde{F}(\sigma_{*}(g)(y))=g(\widetilde{F}(y))\) for all \(y\in\widetilde{Y}\) and \(g\in\pi_{1}(X,x_{0})\), where \(\sigma_{*}:\pi_{1}(X,x_{0})\to\pi_{1}(Y,\sigma(x_{0}))\) is the isomorphism induced by \(\sigma\) and the chosen base points. It follows that \(\widetilde{F}\) descends to a map \(F:Y\to X\).
**Claim 2**.: \(F:Y\to X\) _is a homeomorphism with \(F\circ\sigma\) homotopic to the identity._
The fact that \(F\circ\sigma\) is homotopic to the identity follows directly from the fact \(\widetilde{F}\circ\widetilde{\sigma}:\widetilde{X}\to\widetilde{X}\) satisifies that \((\widetilde{F}\circ\sigma)(gx)=g((\widetilde{F}\circ\widetilde{\sigma})(x))\) for \(g\in\pi_{1}(X,x_{0})\). What we really have to prove is that \(F\) is a homeomorphism. To see that this is the case note that since the maps \(\widetilde{F},\pi:\widetilde{Y}\to\widetilde{X}\) send points within \(\mathbf{const}\) of each other, and since in our way to proving Claim 1 we proved that \(\operatorname{Fold}(\pi)<\mathbf{const}\), we get that \(\operatorname{Fold}(\widetilde{F})<\mathbf{const}\). On the other hand, since \(\widetilde{F}\) maps vertices to vertices and has constant speed on the edges we get that if \(\operatorname{Fold}(\widetilde{F})\neq 0\) then \(\operatorname{Fold}(F)\) has to be at least as large as the shortest edge of \(X\). This implies, using the fact that \(\operatorname{Fold}(\widetilde{F})\) is uniformly bounded, that as long as \(\ell_{0}\) is large enough then \(\operatorname{Fold}(\widetilde{F})=0\). This implies in turn that \(\widetilde{F}\) is locally injective. Local injectivity of \(\widetilde{F}\), together with the fact that \(F\), being a homotopy inverse to \(\sigma\), is a homotopy equivalence, imply that \(F\) is a homeomorphism. We have proved Claim 2.
What is left is to bound the tracks of the geodesic homotopy from \(\phi\circ F\) to \(\psi\)--the argument is similar to the proof of the existence of \(F\). First, note that we know that for every vertex \(v\in\widetilde{Y}\), the nearest point projection \(\hat{\pi}\) maps \(\widetilde{\psi}(v)\) close to the vertex \(\tilde{\phi}(\widetilde{F}(v))\) of \(T_{X}\). It follows that if we choose \(\ell_{0}\) large enough we can also assume that \(d_{\mathbb{H}^{2}}(\hat{\pi}(\tilde{\psi}(v)),\hat{\pi}(\tilde{\psi}((v^{ \prime}))))>K\) for any two distinct vertices \(v,v^{\prime}\in\mathbf{vert}(\widetilde{Y})\). This means that (2.6) applies, and hence that we have
\[\ell(\psi(Y)) =\sum_{[v,v^{\prime}]\in\mathbf{edge}}d_{\mathbb{H}^{2}}(\tilde{ \psi}(v),\tilde{\psi}(v^{\prime}))\] \[\geq-\,\mathbf{const}+\sum_{[v,v^{\prime}]\in\mathbf{edge}}d_{ \mathbb{H}^{2}}(\hat{\pi}(\tilde{\psi}(v)),\hat{\pi}(\tilde{\psi}(v^{\prime}) ))+\] \[\qquad\quad+3\sum_{v\in\mathbf{vert}}d_{\mathbb{H}^{2}}(\tilde{ \psi}(v),\hat{\pi}(\tilde{\psi}(v)))\] \[\geq-\,\mathbf{const}+\ell(\phi(X))+3\sum_{v\in\mathbf{vert}}d_{ \mathbb{H}^{2}}(\tilde{\psi}(v),\hat{\pi}(\tilde{\psi}(v)))\]
where as before each \(\mathbf{const}\) is a positive constant, the first inequality follows by (2.6), the second by (2.7) and (2.8). Hence, again by assumption (1) in the lemma, we must have that \(d_{\mathbb{H}^{2}}(\tilde{\psi}(v),\hat{\pi}(\tilde{\psi}(v)))<\mathbf{const}\) for all \(v\in\mathbf{vert}(\widetilde{Y})\).
Since \(\widetilde{F}(v)\) and \(\hat{\pi}(\tilde{\psi}(v))\) are near each other, we have that \(d_{\mathbb{H}^{2}}(\tilde{\psi}(v),\widetilde{F}(v))<\mathbf{const}\) for all \(v\in\mathbf{vert}(\widetilde{Y})\). Convexity of the length function implies that the geodesic homotopy between \(\tilde{\psi}\) and \(\tilde{\phi}\circ\widetilde{F}\) then has tracks bounded by the same constant \(\mathbf{const}\), as we had claimed.
### \(\varepsilon\)-critical realizations
Corollary 2.4 asserts that whenever we have a realization with very long edges and which looks vaguely critical, then it is not far from a critical realization. Our next goal is to give a more precise description of the situation in the case that our realization looks even more as if it were critical. To be precise, suppose that \(X\) is a trivalent graph and \(\varepsilon\) positive and small. A regular realization \(\phi:X\to\Sigma\) is _\(\varepsilon\)-critical_ if
\[\angle(\phi(\vec{e}_{1}),\phi(\vec{e}_{2}))\in\left[\frac{2\pi}{3}-\varepsilon,\frac{2\pi}{3}+\varepsilon\right]\]
for every two half-edges \(\vec{e}_{1},\vec{e}_{2}\in\mathbf{half}_{v}\) incident to any vertex \(v\in\mathbf{vert}(X)\).
The goal of this section is to determine how the set \(\mathcal{G}^{X}_{\varepsilon-\mathrm{crit}}\) of \(\varepsilon\)-critical realizations of \(X\) looks. To do that we need a bit of notation and preparatory work. First, under a _tripod_ we will understand a tuple \(\tau=(p,\{v,v^{\prime},v^{\prime\prime}\})\) where \(p\in\mathbb{H}^{2}\) is a point and where \(\{v,v^{\prime},v^{\prime\prime}\}\subset T^{1}_{p}\mathbb{H}^{2}\) is an unordered collection consisting of three distinct unit vectors based at \(p\). A tripod \(\tau=(p,\{v,v^{\prime},v^{\prime\prime}\})\) is _critical_ if the three vectors \(v,v^{\prime},v^{\prime\prime}\) have pairwise angle equal to \(\frac{2\pi}{3}\). Any tripod \(\tau\) determines \(3\) geodesic rays, that is \(3\) points in \(\partial_{\infty}\mathbb{H}^{2}\), that is an ideal triangle \(T_{\tau}\subset\mathbb{H}^{2}\) (see left hand side of Figure 1). Conversely, every ideal triangle \(T\subset\mathbb{H}^{2}\) determines a tripod \(\tau^{T}_{p}=(p,\{v,v^{\prime},v^{\prime\prime}\})\) for every \(p\in\mathbb{H}^{2}\): the vectors \(v,v^{\prime},v^{\prime\prime}\) are the unit vectors based at \(p\) and pointing to the (ideal) vertices of the ideal triangle \(T\). Note that \(T\) consists of exactly those points \(p\) such that the vectors in the tripod \(\tau^{T}_{p}\) have pairwise (always unoriented) angles adding up to \(2\pi\). For a given \(\varepsilon\) we let
\[T(\varepsilon)=\left\{p\in T\ \middle|\ \begin{array}{l}\text{the angles between the vectors}\\ \text{in $\tau^{T}_{p}$ lie in $[\frac{2\pi}{3}-\varepsilon,\frac{2\pi}{3}+ \varepsilon]$}\end{array}\right\}\]
be the set of points such that the angles of the tripod \(\tau^{T}_{p}\) are within \(\varepsilon\) of \(\frac{2\pi}{3}\). The set \(T(\varepsilon)\) is, at least for \(\varepsilon\in(0,\frac{\pi}{6})\), a hexagon centered at the center of the triangle--the sides of the hexagon are not geodesic but rather sub-segments of curves of constant curvature, see Figure 1. However, if we scale everything by \(\varepsilon^{-1}\) then \(\varepsilon^{-1}\cdot T(\varepsilon)\) converges to the regular euclidean hexagon of side-length \(\frac{2}{3}\). This means in particular that
\[\mathrm{diam}(T(\varepsilon))\sim\frac{4}{3}\varepsilon\text{ and }\operatorname{vol}(T( \varepsilon))\sim\frac{2}{\sqrt{3}}\varepsilon^{2}\text{ as }\varepsilon\to 0. \tag{2.9}\]
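As a sanity check on the constants in (2.9) (not in the original): the regular Euclidean hexagon of side length \(\frac{2}{3}\) has diameter \(2\cdot\frac{2}{3}=\frac{4}{3}\) and area

\[\frac{3\sqrt{3}}{2}\cdot\left(\frac{2}{3}\right)^{2}=\frac{2}{\sqrt{3}},\]

which after rescaling by \(\varepsilon\) gives exactly the two asymptotics above.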
_Remark_.: To get the shape of \(T(\varepsilon)\) one can use elementary synthetic hyperbolic geometry. But one can also resort to hyperbolic trigonometry. For example, invoking formula 2.2.2 (iv) in the back of Buser's book [3] one gets
\[T(\varepsilon)=\left\{p\in T\,\middle|\,\cot\left(\frac{\pi}{3}-\frac{ \varepsilon}{2}\right)\leq\sinh(d(p,\partial T))\leq\cot\left(\frac{\pi}{3}+ \frac{\varepsilon}{2}\right)\right\}.\]
Suppose now that \(\phi:X\to\Sigma\) is a critical realization of a trivalent graph \(X\) in our closed hyperbolic surface and recall that by Lemma 2.2 we have that the connected component \(\mathcal{G}^{\phi}\) of geodesic realizations homotopic to \(\phi\) is isometric to a product of hyperbolic planes \(\mathbb{H}^{2}\times\cdots\times\mathbb{H}^{2}\), one factor for each vertex of \(X\). To each vertex \(x\) of \(X\) we can associate first the tripod \(\tau_{x}^{\phi}=(\phi(x),\{\phi(\vec{e})\}_{\vec{e}\in\mathbf{half}_{x}(X)})\) consisting of the image under \(\phi\) of the vertex \(x\) and of the unit vectors \(\phi(\vec{e})\) tangent to the images of the half-edges incident to \(x\), and then \(T_{\tau_{x}^{\phi}}\) the ideal triangle associated to the critical tripod \(\tau_{x}^{\phi}\). The assumption that \(\phi\) is critical implies that the point \(\phi(x)\) is the center of \(T_{\tau_{x}^{\phi}}\) for every vertex \(x\in\mathbf{vert}\). We let
\[\mathcal{T}^{\phi}=\prod_{x\in\mathbf{vert}(X)}T_{\tau_{x}^{\phi}}\subset\prod _{x\in\mathbf{vert}(X)}\mathbb{H}^{2}=\mathcal{G}^{\phi}\]
be the subset of \(\mathcal{G}^{\phi}\) consisting of geodesic realizations homotopic to \(\phi\) via a homotopy that maps each vertex \(x\) within \(T_{\tau_{x}^{\phi}}\). Accordingly we set
\[\mathcal{T}^{\phi}(\varepsilon)=\prod_{x\in\mathbf{vert}(X)}T_{\tau_{x}^{\phi }}(\varepsilon)\subset\mathcal{T}^{\phi}\]
and we note that
\[\operatorname{vol}(\mathcal{T}^{\phi}(\varepsilon))\sim\left(\frac{2}{\sqrt{3 }}\varepsilon^{2}\right)^{|\mathbf{vert}(X)|}.\]
Figure 1: Left: A critical tripod and (part of) the ideal triangle it determines. Right: The hexagon \(T(\varepsilon)\). Each of the lines making up the boundary of the hexagon corresponds to points \(p\) having an angle between vectors in \(\tau_{p}^{T}\) of measure \(\frac{2\pi}{3}-\varepsilon\) (dotted lines) and \(\frac{2\pi}{3}+\varepsilon\) (solid lines).

The reason why we care about all these sets is that, asymptotically, \(\mathcal{T}^{\phi}(\varepsilon)\) agrees with the set \(\mathcal{G}^{\phi}_{\varepsilon-\operatorname{crit}}\) of \(\varepsilon\)-critical realizations \(\psi\) homotopic to \(\phi\). Indeed we get from Corollary 2.4 that the geodesic homotopy from \(\psi\) to \(\phi\) has tracks bounded by a uniform constant. This means that for all \(e\in\mathbf{edge}(X)\) the geodesic segments \(\phi(e)\) and \(\psi(e)\) are very long but have endpoints relatively close to each other. This means that, when looked at from (each) one of its vertices \(x\), the segment \(\psi(e)\) is very close to being asymptotic to \(\phi(e)\). In particular, the angle between \(\psi(e)\) and the geodesic ray starting at \(\psi(x)\) and asymptotic to the ray starting in \(\phi(x)\) in direction \(\phi(e)\) is smaller than \(\delta=\delta(\min_{e\in\mathbf{edge}(X)}\ell(\psi(e)))\) for some positive function with \(\delta(t)\to 0\) as \(t\to\infty\). With the rather clumsy notation we find ourselves working with we have that \(\delta\) bounds from above the angles between corresponding edges of the tripods \(\tau_{x}^{\psi}\) and \(\tau_{\psi(x)}^{T_{x}^{\phi}}\). Since by assumption the angles of \(\tau_{x}^{\psi}\) belong to \([\frac{2\pi}{3}-\varepsilon,\frac{2\pi}{3}+\varepsilon]\) we get that the angles of \(\tau_{\psi(x)}^{T_{x}^{\phi}}\) belong to \([\frac{2\pi}{3}-\varepsilon-2\delta,\frac{2\pi}{3}+\varepsilon+2\delta]\), meaning that \(\psi(x)\in T_{\tau_{x}^{\phi}}(\varepsilon+2\delta)\).
To summarize, what we have proved is that for all \(\varepsilon_{1}>\varepsilon\) there is \(\ell_{1}\) such that if \(\phi:X\to\Sigma\) is critical with \(\ell(\phi(e))\geqslant\ell_{1}\) for all \(e\in\mathbf{edge}\), then
\[\mathcal{G}_{\varepsilon-\mathrm{crit}}^{\phi}\subset\mathcal{T}^{\phi}( \varepsilon_{1}).\]
The same argument proves that for all \(\varepsilon_{2}<\varepsilon\) there is \(\ell_{2}\) such that if \(\phi:X\to\Sigma\) is critical with \(\ell(\phi(e))\geqslant\ell_{2}\) for all \(e\in\mathbf{edge}\), then
\[\mathcal{T}^{\phi}(\varepsilon_{2})\subset\mathcal{G}_{\varepsilon-\mathrm{crit}} ^{\phi}\]
We record these facts:
**Lemma 2.5**.: _There are functions \(h:(0,\frac{\pi}{6})\to\mathbb{R}_{>0}\) and \(\delta:(0,\varepsilon_{0})\to\mathbb{R}_{>0}\) with \(\lim_{t\to 0}h(t)=0\) and \(\lim_{t\to\infty}\delta(t)=0\) such that for every critical realization \(\phi:X\to\Sigma\) we have_
\[\mathcal{T}^{\phi}(\varepsilon-r(\varepsilon,\phi))\subset\mathcal{G}_{ \varepsilon-\mathrm{crit}}^{\phi}\subset\mathcal{T}^{\phi}(\varepsilon+r( \varepsilon,\phi))\]
_where \(r(\varepsilon,\phi)=h(\varepsilon)+\delta(\min_{e\in\mathbf{edge}(X)}\ell( \phi(e)))\). _
To conclude this section we collect what we will actually need about the set of \(\varepsilon\)-critical realizations in a single statement.
**Proposition 2.6**.: _Let \(X\) be a trivalent graph. There are \(\ell>0\) and functions \(\rho_{0},\rho_{1}:\mathbb{R}_{>0}\to\mathbb{R}_{>0}\) with \(\lim_{t\to 0}\rho_{0}(t)=0\) and \(\lim_{t\to\infty}\rho_{1}(t)=0\) and such that the following holds for all \(\varepsilon\in[0,\frac{\pi}{6}]\):_
_If \(\phi\in\mathcal{G}_{\varepsilon-\mathrm{crit}}(X)\) is an \(\ell\)-long \(\varepsilon\)-critical realization then_
\[\left|\mathrm{vol}(\mathcal{G}_{\varepsilon-\mathrm{crit}}^{\phi})-\left( \frac{2}{\sqrt{3}}\varepsilon^{2}\right)^{V}\right|\leqslant\rho_{0}( \varepsilon)\cdot\rho_{1}\left(\min_{e\in\mathbf{edge}}\ell(\phi(e))\right) \cdot\varepsilon^{2V}, \tag{2.10}\]
_Moreover \(\mathcal{G}_{\varepsilon-\mathrm{crit}}^{\phi}\) contains a unique critical realization \(\psi\) and we have_
\[\max_{e\in\mathbf{edge}}|\ell_{\Sigma}(\phi(e))-\ell_{\Sigma}(\psi(e))| \leqslant\rho_{0}(\varepsilon)+\rho_{1}\left(\min_{e\in\mathbf{edge}}\ell( \phi(e))\right)\cdot\varepsilon \tag{2.11}\]
_Here again \(V\) is the number of vertices of \(X\) and \(\mathbf{edge}=\mathbf{edge}(X)\) is its set of edges._
Proof.: Suppose to begin with that \(\ell\) is at least as large as the \(\ell_{0}\) in Corollary 2.4. We thus get that \(\mathcal{G}^{\phi}\) contains a unique critical realization \(\psi\). Now we get from Lemma 2.5 that
\[\mathcal{T}^{\phi}(\varepsilon-r(\varepsilon,\phi))\subset\mathcal{G}_{ \varepsilon-\mathrm{crit}}^{\phi}\subset\mathcal{T}^{\phi}(\varepsilon+r( \varepsilon,\phi)) \tag{2.12}\]
where \(r(\varepsilon,\phi)=h(\varepsilon)+\delta(\min_{e\in\mathbf{edge}}\ell(\phi(e)))\) for some functions \(h:(0,\frac{\pi}{6})\to\mathbb{R}_{>0}\) and \(\delta:(0,\varepsilon_{0})\to\mathbb{R}_{>0}\) with \(\lim_{t\to 0}h(t)=0\) and \(\lim_{t\to\infty}\delta(t)=0\). The bounds (2.10) and (2.11) follow now from (2.12) and (2.9).
## 3. Variations of Delsarte's theorem
In this section we establish the asymptotics of the number of realizations \(\phi:X\to\Sigma\) satisfying some length condition, mapping each vertex to some pre-determined point in \(\Sigma\), and so that the tuple of directions of images of half-edges belongs to some given open set of such tuples--see Theorem 3.2 for details. Other than some book-keeping, what is needed is a slight extension of Delsarte's lattice counting theorem [7], namely Theorem 3.1 below. This result is known to experts, and much more sophisticated versions than what we need here can be found in the literature--see for example [17, Theoreme 4.1.1]. However, for the sake of completeness, we will explain how to derive Theorem 3.1 from the fact that the geodesic flow is mixing. We refer to the very nice books [2, 17] for the needed background.
We start by recalling a few well-known facts about dynamics of geodesic flows on hyperbolic surfaces. First recall that we can identify \(T^{1}\mathbb{H}^{2}\) with \(\operatorname{PSL}_{2}\mathbb{R}\), and \(T^{1}\Sigma=T^{1}(\Gamma\backslash\mathbb{H}^{2})\) with \(\Gamma\backslash\operatorname{PSL}_{2}\mathbb{R}\). More specifically, when using the identification
\[\operatorname{PSL}_{2}\mathbb{R}\to T^{1}\mathbb{H}^{2}\ \ g\mapsto\frac{d}{dt}g(e^{t}i)|_{t=0}\]
where the computation happens in the upper half-plane model and \(i\in\mathbb{H}^{2}\) is the imaginary unit, then the geodesic and horocyclic flows amount to right multiplication by the matrices
\[\rho_{t}=\left(\begin{array}{cc}e^{\frac{t}{2}}&0\\ 0&e^{-\frac{t}{2}}\end{array}\right)\text{ and }h_{s}=\left(\begin{array}{cc}1& s\\ 0&1\end{array}\right)\]
respectively. Note that \(K=\operatorname{SO}_{2}\subset\operatorname{PSL}_{2}\mathbb{R}\), the stabilizer of \(i\) under the action \(\operatorname{PSL}_{2}\mathbb{R}\curvearrowright\mathbb{H}^{2}\), is a maximal compact subgroup--\(K\) corresponds under the identification \(\operatorname{PSL}_{2}\mathbb{R}\simeq T^{1}\mathbb{H}^{2}\) to the unit tangent space \(T^{1}_{i}\mathbb{H}^{2}\) of the base point \(i\in\mathbb{H}^{2}\). The KAN decomposition (basically the output of the Gramm-Schmidt process from linear algebra) asserts that every element in \(g\in\operatorname{PSL}_{2}\mathbb{R}\) can be written in a unique way as
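As a quick verification that right multiplication by \(\rho_{t}\) is indeed the geodesic flow (a standard computation, not in the original): \(\rho_{t}\) acts on the upper half-plane as \(z\mapsto e^{t}z\), so under the identification above

\[g\rho_{t}\mapsto\frac{d}{ds}(g\rho_{t})(e^{s}i)|_{s=0}=\frac{d}{ds}g(e^{t+s}i)|_{s=0},\]

which is the unit tangent vector obtained from \(\frac{d}{ds}g(e^{s}i)|_{s=0}\) by flowing for time \(t\) along its geodesic.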
\[g=k\rho_{t}h_{s} \tag{3.1}\]
for \(k\in K\) and \(t,s\in\mathbb{R}\). In those coordinates, the Haar measure of \(\operatorname{PSL}_{2}\mathbb{R}\) is given by
\[\int f(g)\ \operatorname{vol}_{T^{1}\mathbb{H}^{2}}(g)=\iiint f(k\rho_{t}h_{s}) \cdot e^{-t}\,dkdtds\]
where \(dk\) stands for integrating over the arc length in \(K\simeq T^{1}_{i}\mathbb{H}^{2}\), normalized to have total measure \(2\pi\).
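Let us also spell out why right multiplication by \(\rho_{t}\) really is the geodesic flow in these conventions: under the identification above, the element \(g\rho_{t}\) corresponds to the vector

\[\frac{d}{ds}(g\rho_{t})(e^{s}i)|_{s=0}=\frac{d}{ds}g(e^{t+s}i)|_{s=0},\]

that is, to the unit tangent vector at time \(t\) of the unit speed geodesic \(s\mapsto g(e^{s}i)\) determined by \(g\). Here we are using that \(\rho_{t}\) acts on the upper half-plane by \(z\mapsto e^{t}z\).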
The basic fact we will need is that the geodesic flow \(\rho_{t}\) on \(T^{1}\Sigma\) is mixing [2, III.2.3], or rather one of its direct consequences, namely the fact that for each \(x_{0}\in\mathbb{H}^{2}\) the projection to \(\Sigma\) of the sphere \(S(x_{0},L)\) centered at \(x_{0}\) and with radius \(L\) gets equidistributed in \(\Sigma\) when \(L\to\infty\)[2, III.3.3]. To be more precise, note that we might assume without loss of generality that \(x_{0}=i\), meaning that we are identifying \(T^{1}_{x_{0}}\mathbb{H}^{2}=K\). The fact that the geodesic flow is mixing implies then that for every non-degenerate interval \(I\subset K\) the spherical arcs \(I\rho_{t}\) equidistribute in \(\Gamma\backslash\operatorname{PSL}_{2}\mathbb{R}\) in the sense that for any continuous function \(f\in C^{0}(T^{1}\Sigma)=C^{0}(\Gamma\backslash\operatorname{PSL}_{2}\mathbb{ R})\) we have
\[\lim_{L\to\infty}\frac{1}{\ell(I)}\int_{I}f(\Gamma k\rho_{L})\,dk=\frac{1}{ \operatorname{vol}_{T^{1}\Sigma}(T^{1}\Sigma)}\int_{T^{1}\Sigma}f(\Gamma g) \cdot dg \tag{3.2}\]
where \(\ell(I)\) is the arc length of \(I\) and where the second integral is with respect to \(\operatorname{vol}_{T^{1}\Sigma}\). Anyways, we care about equidistribution of the spheres for the following reason. If \(f\in C^{0}(\Sigma)\) is a continuous function and if we let \(\tilde{f}\) be the composition of \(f\) with the cover \(\mathbb{H}^{2}\to\Sigma\) then (3.2) implies, with \(I=K\), that
\[\lim_{L\to\infty}\frac{1}{\operatorname{vol}_{\mathbb{H}^{2}}(B(x_{0},L))} \cdot\int_{B(x_{0},L)}\tilde{f}(x)\,dx=\frac{1}{\operatorname{vol}(\Sigma)} \int_{\Sigma}f(x)dx\]
Applying this to a non-negative function \(f_{y_{0},\varepsilon}\in C^{0}(\Sigma)\) with total integral \(1\) and supported by \(B(y_{0},\varepsilon)\) we get that
\[\int_{B(x_{0},L)}\tilde{f}_{y_{0},\varepsilon}(x)\,dx\sim\frac{\operatorname {vol}_{\mathbb{H}^{2}}(B(x_{0},L))}{\operatorname{vol}(\Sigma)}\]
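Recall here the classical formula for the area of hyperbolic balls, \(\operatorname{vol}_{\mathbb{H}^{2}}(B(x_{0},L))=2\pi\cdot(\cosh L-1)\sim\pi\cdot e^{L}\) as \(L\to\infty\); this is where the factor \(\pi\cdot e^{L}\) in the next few displays comes from.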
Now, from the properties of \(f_{y_{0},\varepsilon}\) we get that
\[\int_{B(x_{0},L-\varepsilon)}\tilde{f}_{y_{0},\varepsilon}(\cdot)\,d\operatorname{vol}_{\mathbb{H}^{2}}\leqslant|\Gamma\!\cdot\!y_{0}\!\cap\!B(x_{0},L)|\leqslant\int_{B(x_{0},L+\varepsilon)}\tilde{f}_{y_{0},\varepsilon}(\cdot)\,d\operatorname{vol}_{\mathbb{H}^{2}} \tag{3.3}\]
When taken together, the last two displayed equations imply that for every \(\varepsilon>0\) and all sufficiently large \(L\) one has
\[\frac{\pi\cdot e^{L-\varepsilon}}{\operatorname{vol}(\Sigma)}\leqslant|\Gamma \cdot y_{0}\cap B(x_{0},L)|\leqslant\frac{\pi\cdot e^{L+\varepsilon}}{ \operatorname{vol}(\Sigma)}\]
and hence that
\[|\Gamma\cdot y_{0}\cap B(x_{0},L)|\sim\frac{\pi\cdot e^{L}}{\operatorname{ vol}(\Sigma)}\text{ as }L\to\infty. \tag{3.4}\]
This is Delsarte's lattice point counting theorem [7]--we refer to [2, III.3.5] for more details.
An observation: note that counting elements of \(\Gamma\cdot y_{0}\) contained in the ball \(B(x_{0},L)\) is exactly the same thing as counting geodesic arcs in \(\Sigma\) of length at most \(L\) going from \(x_{0}\) to \(y_{0}\), or to be precise, from the projection of \(x_{0}\) to the projection of \(y_{0}\). In this way, if we are given a segment \(I\subset K=T^{1}_{x_{0}}\Sigma\) and we use the equidistribution of the spherical segments \(I\rho_{t}\) then when we run the argument above we get the cardinality of the set \(\mathbf{A}_{I,y_{0}}(L)\) of geodesic
arcs of length at most \(L\), starting in \(x_{0}\) with initial speed in \(I\), and ending in \(y_{0}\). As expected, the result is that
\[|\mathbf{A}_{I,y_{0}}(L)|\sim\frac{\ell(I)}{2\pi}\cdot\frac{\pi\cdot e^{L}}{ \operatorname{vol}(\Sigma)}\quad\text{ when }L\to\infty.\]
Suppose now that we dial it up a bit giving ourselves a second sector \(J\subset T^{1}_{y_{0}}\Sigma\) and care, for some fixed \(h\), about the cardinality of the set \(\mathbf{A}_{I,J}(L,h)\) of geodesic arcs with length in \([L,L+h]\), joining \(x_{0}\) to \(y_{0}\), and with initial and terminal velocities in \(I\) and \(J\) respectively. We can obtain the asymptotics of \(|\mathbf{A}_{I,J}(L,h)|\) following the same basic idea as in the proof of Delsarte's theorem above. We need however to replace the bump function \(f_{y_{0},\varepsilon}\) by something else.
Using the coordinates (3.1) consider for \(h\) and \(\delta\) positive and small the set
\[\mathcal{J}(J,h,\delta)=\{J\rho_{t}h_{s}\text{ with }s\in[-\delta,\delta] \text{ and }t\in[0,h]\}\subset T^{1}\Sigma\]
and note that it has volume \(\ell(J)\cdot 2\delta\cdot(1-e^{-h})\): indeed, in the coordinates (3.1) we get \(\int_{J}dk\int_{0}^{h}e^{-t}\,dt\int_{-\delta}^{\delta}ds=\ell(J)\cdot(1-e^{-h})\cdot 2\delta\). Note also that the intersection of \(\mathcal{J}(J,h,\delta)\) with the outer normal vector field of any horosphere consists of segments of the form \(H_{v\rho_{t}}=\{v\rho_{t}h_{s}\text{ with }s\in[-\delta,\delta]\}\) where \(v\in J\) and \(t\in[0,h]\), and that each such segment has length \(2\delta\). Finally observe that each one of the sets \(H_{v\rho_{t}}\) contains exactly one vector, namely \(v\rho_{t}\), which lands in \(J\) when we geodesic flow it for some time in \([-h,0]\). It follows that for all intervals \(\mathbb{I}\subset\mathbb{R}\) and \(w\in T^{1}\Sigma\) we have
\[\left|\,\left|\left\{r\in\mathbb{I}\text{ such that }wh_{r}\rho_{t}\in J\text{ for some }t\in[-h,0]\right\}\right|-\frac{\ell(\{s\in\mathbb{I}\text{ with }wh_{s}\in\mathcal{J}\})}{2\delta}\,\right|\leqslant 2\]
where \(\mathcal{J}=\mathcal{J}(J,h,\delta)\) and where the number \(2\) is there to take into account possible over counting near the ends of the horospherical segment.
Using then the fact that when \(L\to\infty\) the sphere \(S(x_{0},L+h)\subset\mathbb{H}^{2}\) looks more and more like a horosphere, one gets the following analogue of (3.3): for any \(\delta^{\prime}<\delta<\delta^{\prime\prime}\) and \(h^{\prime}<h<h^{\prime\prime}\) and \(J^{\prime}\Subset J\Subset J^{\prime\prime}\) we have for all sufficiently large \(L\)
\[\int_{I\rho_{L+h}}\chi_{\mathcal{J}(J^{\prime},h^{\prime},\delta^{\prime})} \leqslant 2\delta\cdot|\mathbf{A}_{I,J}(L,h)|\leqslant\int_{I\rho_{L+h}} \chi_{\mathcal{J}(J^{\prime\prime},h^{\prime\prime},\delta^{\prime\prime})}\]
where \(\chi_{\mathcal{J}}\) is the characteristic function of \(\mathcal{J}\). From here we get that
\[|\mathbf{A}_{I,J}(L,h)| \sim\frac{1}{2\delta}\int_{I\rho_{L+h}}\chi_{\mathcal{J}(J,h,\delta)}\sim\frac{1}{2\delta}\cdot\frac{e^{L+h}}{2}\cdot\int_{I}\chi_{\mathcal{J}(J,h,\delta)}(k\rho_{L+h})\ dk\] \[\stackrel{(3.2)}{\sim}\frac{1}{2\delta}\cdot\frac{e^{L+h}}{2}\cdot\ell(I)\cdot\frac{\operatorname{vol}_{T^{1}\Sigma}(\mathcal{J}(J,h,\delta))}{\operatorname{vol}_{T^{1}\Sigma}(T^{1}\Sigma)}\] \[\sim\frac{\ell(I)\cdot\ell(J)}{4\pi}\cdot\frac{e^{L+h}-e^{L}}{\operatorname{vol}(\Sigma)}\]
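In the last step we have simply plugged in \(\operatorname{vol}_{T^{1}\Sigma}(\mathcal{J}(J,h,\delta))=\ell(J)\cdot 2\delta\cdot(1-e^{-h})\) and \(\operatorname{vol}_{T^{1}\Sigma}(T^{1}\Sigma)=2\pi\cdot\operatorname{vol}(\Sigma)\), so that

\[\frac{1}{2\delta}\cdot\frac{e^{L+h}}{2}\cdot\ell(I)\cdot\frac{\ell(J)\cdot 2\delta\cdot(1-e^{-h})}{2\pi\cdot\operatorname{vol}(\Sigma)}=\frac{\ell(I)\cdot\ell(J)}{4\pi}\cdot\frac{e^{L+h}-e^{L}}{\operatorname{vol}(\Sigma)}.\]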
We record this slightly generalized version of Delsarte's theorem for later reference:
**Theorem 3.1** (Delsarte's theorem for in-out sectors).: _Let \(\Sigma\) be a closed hyperbolic surface, let \(h\) be positive, and let \(I\subset T^{1}_{x_{0}}\Sigma\) and \(J\subset T^{1}_{y_{0}}\Sigma\) be non-degenerate segments. Let then \(\mathbf{A}_{I,J}(L,h)\) be the set of geodesic arcs \(\alpha:[0,r]\to\Sigma\) with length \(r\in[L,L+h]\), with endpoints \(\alpha(0)=x_{0}\) and \(\alpha(r)=y_{0}\), and with initial and terminal speeds satisfying \(\alpha^{\prime}(0)\in I\) and \(\alpha^{\prime}(r)\in J\). Then we have_
\[|\mathbf{A}_{I,J}(L,h)|\sim\frac{\ell(I)\cdot\ell(J)}{4\pi}\cdot\frac{e^{L+h}-e ^{L}}{\operatorname{vol}(\Sigma)}\]
_when \(L\to\infty\). _
As we mentioned earlier, Theorem 3.1 is a very special case of the much more general [17, Theoreme 4.1.1]. We would not be surprised if there were also other references we are not aware of.
_Remark_.: Note also that the asymptotic behavior in Theorem 3.1 is uniform as long as \(I\) and \(J\) belong to a compact set of choices. For example, since the surface \(\Sigma\) is compact, we get that for all \(\delta>0\) the asymptotic behavior in Theorem 3.1 is uniform as long as \(I\) and \(J\) are intervals of length at least \(\delta\) contained respectively in \(T^{1}_{x_{I}}\Sigma\) and \(T^{1}_{x_{J}}\Sigma\) for some \(x_{I}\) and \(x_{J}\) in \(\Sigma\).
Now, let \(X\) be a trivalent graph with vertex set \(\mathbf{vert}(X)\) and let \(\vec{x}=(x_{v})_{v\in\mathbf{vert}(X)}\in\Sigma^{\mathbf{vert}(X)}\) be a \(\mathbf{vert}(X)\)-tuple of points in the surface. Let also
\[U\subset\prod_{v\in\mathbf{vert}(X)}\left(\bigoplus_{\vec{e}\in\mathbf{half}_ {v}(X)}T^{1}_{x_{v}}\Sigma\right)\stackrel{{\text{def}}}{{=}} \mathbb{T}_{\vec{x}}\]
be an open set where, as always, \(\mathbf{half}_{v}(X)\) is the set of all half-edges of \(X\) starting at the vertex \(v\).
Given for each edge \(e\in\mathbf{edge}(X)\) a positive number \(L_{e}\) and writing \(\vec{L}=(L_{e})_{e\in\mathbf{edge}(X)}\) we are going to be interested in the set \(\mathbf{G}^{X}_{U}(\vec{L},h)\) of realizations \(\phi:X\to\Sigma\)
1. mapping each vertex \(v\in\mathbf{vert}(X)\) to the point \(x_{v}\),
2. with \((\phi(\vec{e}))_{\vec{e}\in\mathbf{half}(X)}\in U\), and
3. with \(\ell(e)\in[L_{e},L_{e}+h]\) for all \(e\in\mathbf{edge}(X)\).
In this setting we have the following version of Delsarte's theorem relating the cardinality of \(\mathbf{G}^{X}_{U}(\vec{L},h)\) with the volume \(\operatorname{vol}_{\vec{x}}(U)\) of \(U\), where the volume is measured in the flat torus \(\mathbb{T}_{\vec{x}}\).
**Theorem 3.2** (Delsarte's theorem for graph realizations).: _Let \(\Sigma\) be a closed hyperbolic surface, let \(X\) be a finite graph, fix \(\vec{x}\in\Sigma^{\mathbf{vert}(X)}\) and an open set \(U_{\vec{x}}\subset\mathbb{T}_{\vec{x}}\). If \(\operatorname{vol}_{\vec{x}}(\bar{U}_{\vec{x}}\setminus U_{\vec{x}})=0\), then for every \(h>0\) we have_
\[|\mathbf{G}^{X}_{U_{\vec{x}}}(\vec{L},h)|\sim\frac{\operatorname{vol}_{\vec{x} }(U_{\vec{x}})}{(4\pi)^{E}}\cdot\frac{(e^{h}-1)^{E}\cdot e^{\|\vec{L}\|}}{ \operatorname{vol}(\Sigma)^{E}}\]
_when \(\min_{e\in\mathbf{edge}(X)}L_{e}\to\infty\). Here \(\bar{U}_{\vec{x}}\) is the closure of \(U_{\vec{x}}\) in \(\mathbb{T}_{\vec{x}}\) and \(\operatorname{vol}_{\vec{x}}(\cdot)\) stands for the volume therein. Also, \(E=|\operatorname{\mathbf{edge}}(X)|\) is the number of edges in \(X\), and \(\|\vec{L}\|=\sum_{e\in\mathbf{edge}(X)}L_{e}\)._
We stress that \(\operatorname{vol}_{\vec{x}}\) is normalized in such a way that \(\operatorname{vol}_{\vec{x}}(\mathbb{T}_{\vec{x}})=(2\pi)^{2E}\). We also stress that this is consistent with the fact that we use the interval \([0,\pi]\) to measure unoriented angles.
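Here the exponent \(2E\) is simply the dimension of \(\mathbb{T}_{\vec{x}}\): since \(X\) has \(2E\) half-edges, the torus \(\mathbb{T}_{\vec{x}}\) has one circle factor \(T^{1}_{x_{v}}\Sigma\) of length \(2\pi\) for each of them.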
Proof.: Let us say that a closed set of the form
\[\prod_{v\in\mathbf{vert}(X)}\left(\prod_{\vec{e}\in\mathbf{half}_{v}(X)}I_{ \vec{e}}\right)\subset\mathbb{T}_{\vec{x}}=\prod_{v\in\mathbf{vert}(X)}\left( \bigoplus_{\vec{e}\in\mathbf{half}_{v}(X)}T_{x_{v}}^{1}\Sigma\right)\]
where each \(I_{\vec{e}}\) is a segment in \(T_{x_{v}}^{1}\Sigma\) is _a cube_. We say that a closed subset of \(\prod_{v\in\mathbf{vert}(X)}\left(\oplus_{\vec{e}\in\mathbf{half}_{v}(X)}T_{x_{v}}^{1}\Sigma\right)\) is _cubical_ if it can be given as the union of finitely many cubes with disjoint interiors. The assumption that the open set \(U=U_{\vec{x}}\) in the statement is such that \(\bar{U}\setminus U\) is a null-set implies that \(U\) can be approximated from inside and outside by cubical sets \(U^{\prime}\subset U\subset U^{\prime\prime}\) with \(\operatorname{vol}(U^{\prime\prime})-\operatorname{vol}(U^{\prime})\) as small as we want. It follows that it suffices to prove the theorem if \(U\) is (the interior of) a cubical set.
Note now that if \(U=U_{1}\cup U_{2}\) is the disjoint union of two sets \(U_{1}\) and \(U_{2}\) and if the statement of the theorem holds true for \(U_{1}\) and \(U_{2}\) then it also holds true for \(U\). Since every cubical set is made out of finitely many cubes with disjoint interior we deduce that it really suffices to prove the theorem for individual cubes
\[U=\prod_{v\in\mathbf{vert}(X)}\left(\prod_{\vec{e}\in\mathbf{half}_{v}(X)}I_ {\vec{e}}\right)\]
Note that, up to shuffling the factors we can see \(U\) as
\[U=\prod_{e\in\mathbf{edge}(X)}\left(I_{e^{+}}\times I_{e^{-}}\right).\]
Here we are denoting by \(e^{+}\) and \(e^{-}\) the two half-edges of the edge \(e\in\mathbf{edge}(X)\). Now, unpacking the notation one sees that a realization \(\phi\) belongs to \(\mathbf{G}^{X}_{U}(\vec{L},h)\) if and only if for all \(e\in\mathbf{edge}(X)\) the arc \(\phi(e)\) belongs to \(\mathbf{A}_{I_{e^{+}},I_{e^{-}}}(L_{e},h)\). It follows thus from Delsarte's theorem for in-out sectors
that
\[|\mathbf{G}_{U}^{X}(\vec{L},h)| =\prod_{e\in\mathbf{edge}(X)}\left|\mathbf{A}_{I_{e^{+}},I_{e^{-}}}(L_{e},h)\right|\] \[\sim\prod_{e\in\mathbf{edge}(X)}\left(\frac{\ell(I_{e^{+}})\cdot\ell(I_{e^{-}})}{4\pi}\cdot\frac{(e^{h}-1)\cdot e^{L_{e}}}{\operatorname{vol}(\Sigma)}\right)\] \[=\frac{\prod_{e\in\mathbf{edge}(X)}(\ell(I_{e^{+}})\cdot\ell(I_{e^{-}}))}{(4\pi)^{E}}\cdot\frac{(e^{h}-1)^{E}\cdot e^{\sum_{e\in\mathbf{edge}(X)}L_{e}}}{\operatorname{vol}(\Sigma)^{E}}\] \[=\frac{\operatorname{vol}(U)}{(4\pi)^{E}}\cdot\frac{(e^{h}-1)^{E}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}(\Sigma)^{E}}\]
where, as in the statement we have set \(E=|\operatorname{\mathbf{edge}}(X)|\). We are done.
Let us consider now the concrete case we will care mostly about. Suppose namely that \(X\) is a trivalent graph and that we want all the angles to be between \(\frac{2}{3}\pi-\varepsilon\) and \(\frac{2}{3}\pi+\varepsilon\). In other words, we are interested in the set of \(\varepsilon\)-critical realizations \(\phi:X\to\Sigma\) which map the vertices to our prescribed tuple \(\vec{x}\in\Sigma^{\mathbf{vert}(X)}\). Then we have to consider the set
\[U^{X}_{\vec{x},\varepsilon-\operatorname{crit}}\subset\prod_{v\in\mathbf{vert }(X)}\left(\bigoplus_{\bar{\varepsilon}\in\mathbf{half}_{v}(X)}T^{1}_{x_{v}} \Sigma\right)\]
of those tuples \((v_{\vec{e}})_{\bar{e}\in\mathbf{half}(X)}\) with \(\angle(v_{\vec{e}_{1}},v_{\vec{e}_{2}})\in[\frac{2\pi}{3}-\varepsilon,\frac{2 \pi}{3}+\varepsilon]\) for all distinct \(\vec{e}_{1},\vec{e}_{2}\in\mathbf{half}(X)\) incident to the same vertex.
Well, let us compute the volume of \(U^{X}_{\vec{x},\varepsilon-\operatorname{crit}}\). Noting that the conditions on \(U^{X}_{\vec{x},\varepsilon-\operatorname{crit}}\) associated to different vertices are independent, we get that it suffices to think vertex-by-vertex and then multiply all the numbers obtained for all vertices. For each vertex \(v\) label arbitrarily the half-edges incident to \(v\) by \(\vec{e}_{1},\vec{e}_{2}\) and \(\vec{e}_{3}\). We have no restriction for the position of \(v_{\vec{e}_{1}}\in T_{x_{v}}\Sigma\). Once we have fixed \(v_{\vec{e}_{1}}\in T^{1}_{x_{v}}(\Sigma)\) we get that \(v_{\vec{e}_{2}}\) can belong to two segments, each one of length \(2\varepsilon\), in \(T^{1}_{x_{v}}(\Sigma)\)--recall that all our angles are unoriented. Then, once we have fixed \(v_{\vec{e}_{1}}\) and \(v_{\vec{e}_{2}}\) we have to choose \(v_{\vec{e}_{3}}\) in an interval of length \(2\varepsilon-|\angle(v_{\vec{e}_{1}},v_{\vec{e}_{2}})-\frac{2\pi}{3}|\). This means that when we have chosen \(v_{\vec{e}_{1}}\) and which segment \(v_{\vec{e}_{2}}\) is in, then we have \(3\varepsilon^{2}\) worth of choices of \((v_{\vec{e}_{2}},v_{\vec{e}_{3}})\). This means that the set of possible choices in \(T^{1}_{x_{v}}\Sigma\times T^{1}_{x_{v}}\Sigma\times T^{1}_{x_{v}}\Sigma\) has volume equal to \(12\pi\varepsilon^{2}\). Since, as we already mentioned earlier, the conditions at all vertices of \(X\) are independent, we get that
\[\operatorname{vol}_{\vec{x}}(U^{X}_{\vec{x},\varepsilon-\operatorname{crit}}) =(12\pi\varepsilon^{2})^{|\operatorname{vert}(X)|}\]
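Concretely, the count at each vertex goes as follows: there are \(2\pi\) worth of choices for \(v_{\vec{e}_{1}}\), two possible segments for \(v_{\vec{e}_{2}}\), and, writing \(u=\angle(v_{\vec{e}_{1}},v_{\vec{e}_{2}})-\frac{2\pi}{3}\in[-\varepsilon,\varepsilon]\) for the deviation within the chosen segment, an interval of length \(2\varepsilon-|u|\) of choices for \(v_{\vec{e}_{3}}\). This gives

\[2\pi\cdot 2\cdot\int_{-\varepsilon}^{\varepsilon}(2\varepsilon-|u|)\,du=2\pi\cdot 2\cdot 3\varepsilon^{2}=12\pi\varepsilon^{2}\]

at each vertex, as claimed.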
From Theorem 3.2 we get thus that for all \(h>0\) and all \(\vec{x}\in\Sigma^{\mathbf{vert}(X)}\) we have
\[|\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}}(\vec{L},h)|\sim\frac {(12\pi\varepsilon^{2})^{V}}{(4\pi)^{E}}\cdot\frac{(e^{h}-1)^{E}\cdot e^{\| \vec{L}\|}}{\operatorname{vol}(\Sigma)^{E}}\text{ as }\min_{e\in\mathbf{edge}(X)}L_{e}\to\infty,\]
where we are writing \(V\) and \(E\) for the number of vertices and edges of \(X\), and \(\mathbf{G}^{X}_{\vec{x},\varepsilon-\mathrm{crit}}\) instead of \(\mathbf{G}^{X}_{U_{\vec{x},\varepsilon-\mathrm{crit}}}\). Taking into account that \(X\) is trivalent and hence satisfies \(V=-2\chi(X)\) and \(E=-3\chi(X)\) we can clean this up to
\[|\mathbf{G}^{X}_{\vec{x},\varepsilon-\mathrm{crit}}(\vec{L},h)|\sim\varepsilon ^{4|\chi(X)|}\cdot\left(\frac{2}{3}\right)^{2\chi(X)}\cdot\pi^{\chi(X)}\cdot \frac{(e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\mathrm{vol}(\Sigma)^{-3\chi (X)}}\]
as \(\min_{e\in\mathbf{edge}(X)}L_{e}\to\infty\). Moreover, since the geometry of the set \(U^{X}_{\vec{x},\varepsilon-\mathrm{crit}}\) is independent of the point \(\vec{x}\), we get that the speed of convergence to this asymptotic is independent of \(\vec{x}\). Altogether we have the following result:
**Corollary 3.3**.: _Let \(\Sigma\) be a closed hyperbolic surface and \(X\) be a trivalent graph, fix \(\varepsilon>0\) and \(h>0\), and for \(\vec{x}\in\Sigma^{\mathbf{vert}(X)}\), let \(\mathbf{G}^{X}_{\vec{x},\varepsilon-\mathrm{crit}}(\vec{L},h)\) be the set of \(\varepsilon\)-critical realizations \(\phi:X\to\Sigma\) mapping the vertex \(v\) to the point \(x_{v}=\phi(v)\). Then we have_
\[|\mathbf{G}^{X}_{\vec{x},\varepsilon-\mathrm{crit}}(\vec{L},h)|\sim\varepsilon ^{4|\chi(X)|}\cdot\left(\frac{2}{3}\right)^{2\chi(X)}\cdot\pi^{\chi(X)}\cdot \frac{(e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\mathrm{vol}(\Sigma)^{-3 \chi(X)}} \tag{3.5}\]
_as \(\min_{e\in\mathbf{edge}(X)}L_{e}\to\infty\). _
Note that the set \(U^{X}_{\vec{x},\varepsilon-\mathrm{crit}}\) needed to establish Corollary 3.3 is basically the same for all choices of \(\vec{x}\). For example, if we take another \(\vec{y}\in\Sigma^{\mathbf{vert}(X)}\) and we identify \(\mathbb{T}_{\vec{x}}\) with \(\mathbb{T}_{\vec{y}}\) isometrically by parallel transport (along any collection of curves whatsoever) then \(U^{X}_{\vec{x},\varepsilon-\mathrm{crit}}\) is sent to \(U^{X}_{\vec{y},\varepsilon-\mathrm{crit}}\). It follows that we can approximate \(U^{X}_{\vec{x},\varepsilon-\mathrm{crit}}\) and \(U^{X}_{\vec{y},\varepsilon-\mathrm{crit}}\) at the same time by cubical sets consisting of the same number of cubes with the same side lengths. It thus follows from the comment following Theorem 3.1 that the asymptotics in Corollary 3.3 is uniform in \(\vec{x}\). We record this fact:
**Addendum to Corollary 3.3**.: _The asymptotics in (3.5) is uniform in \(\vec{x}\in\Sigma^{\mathbf{vert}(X)}\). _
## 4. Counting critical realizations
In this section we prove Theorem 1.3 from the introduction. Before restating the theorem, recall that if \(X\) is a trivalent graph then we denote by
\[\mathbf{G}^{X}(L)=\left\{\begin{array}{c}\phi:X\to\Sigma\text{ critical realization}\\ \text{with length }\ell_{\Sigma}(\phi)\leqslant L\end{array}\right\} \tag{4.1}\]
the set of all critical realizations of length \(\ell_{\Sigma}(\phi)\) at most \(L\).
**Theorem 1.3**.: _Let \(\Sigma\) be a closed, connected, and oriented hyperbolic surface. For every connected trivalent graph \(X\) we have_
\[|\mathbf{G}^{X}(L)|\sim\left(\frac{2}{3}\right)^{3\chi(X)}\cdot\frac{\mathrm{ vol}(T^{1}\Sigma)^{\chi(X)}}{(-3\chi(X)-1)!}\cdot L^{-3\chi(X)-1}\cdot e^{L}\]
_as \(L\to\infty\)._
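To give a concrete example, if \(X\) is the theta graph, that is the graph with two vertices joined by three edges, then \(\chi(X)=-1\) and the theorem asserts that \(|\mathbf{G}^{X}(L)|\sim\frac{27}{16}\cdot\frac{L^{2}\cdot e^{L}}{\operatorname{vol}(T^{1}\Sigma)}\) as \(L\to\infty\).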
Fixing for the remaining of this section the trivalent graph \(X\) we will write \(\operatorname{\mathbf{vert}}=\operatorname{\mathbf{vert}}(X)\) and \(\operatorname{\mathbf{edge}}=\operatorname{\mathbf{edge}}(X)\) for the sets of vertices and edges and \(V=|\operatorname{\mathbf{vert}}|\) and \(E=|\operatorname{\mathbf{edge}}|\) for their cardinalities. Similarly we will denote by \(\mathcal{G}=\mathcal{G}^{X}\) the manifold of realizations of \(X\) in \(\Sigma\), and by \(\mathbf{G}(L)=\mathbf{G}^{X}(L)\) the set of critical realizations with length at most \(L\).
The main step in the proof of Theorem 1.3 is to count critical realizations of \(X\) such that the corresponding vector of lengths \((\ell(\phi(e)))_{e\in\operatorname{\mathbf{edge}}}\) belongs to a box of size \(h>0\). More concretely, we want to count how many elements there are in the set
\[\mathbf{G}(\vec{L},h)=\left\{\begin{array}{c}\phi:X\to\Sigma\text{ critical realization with}\\ \ell(\phi(e))\in(L_{e},L_{e}+h]\text{ for all }e\in\operatorname{\mathbf{edge}} \end{array}\right\} \tag{4.2}\]
where \(\vec{L}=(L_{e})_{e\in\operatorname{\mathbf{edge}}}\) is a positive vector. We start by establishing some form of an upper bound for the number of homotopy classes of realizations when we bound the length of each individual edge--recall that by Lemma 2.2 any two homotopic critical realizations are identical.
**Lemma 4.1**.: _For all \(\vec{L}\in\mathbb{R}^{\operatorname{\mathbf{edge}}(X)}_{+}\) there are at most \(\operatorname{\mathbf{const}}\cdot e^{\|\vec{L}\|}\) homotopy classes of realizations \(\phi:X\to\Sigma\) with \(\ell(\phi(e))\leqslant L_{e}\) for all \(e\in\operatorname{\mathbf{edge}}(X)\)._
It is worth pointing out that Lemma 4.1 fails if \(\Sigma\) is allowed to have cusps--see Section 9.
Proof.: Let us fix a point \(x_{0}\in\Sigma\) and note that every point in \(\Sigma\)--think of the images under a realization of the vertices of \(X\)--can be moved to \(x_{0}\) along a path of length at most \(\operatorname{diam}(\Sigma)\). It follows that every realization \(\phi:X\to\Sigma\) is homotopic to a new realization \(\psi:X\to\Sigma\) mapping all vertices of \(X\) to \(x_{0}\) and with
\[\ell(\psi(e))\leqslant\ell(\phi(e))+2\cdot\operatorname{diam}(\Sigma)\leqslant L _{e}+2\cdot\operatorname{diam}(\Sigma). \tag{4.3}\]
for every edge \(e\in\operatorname{\mathbf{edge}}(X)\). Note that the homotopy class of \(\psi\) is determined by the homotopy classes of the loops \(\psi(e)\) when \(e\) ranges over the edges of \(X\). Now (4.3), together with Delsarte's theorem (3.4) applied with \(y_{0}=x_{0}\), implies that, up to homotopy, we have at most \(\operatorname{\mathbf{const}}\cdot e^{L_{e}}\) choices for the geodesic segment \(\psi(e)\). This implies that there are at most \(\operatorname{\mathbf{const}}\cdot e^{\|\vec{L}\|}\) choices for the homotopy class of \(\psi\), and hence for the homotopy class of \(\phi\). We are done.
Although it is evidently pretty coarse, Lemma 4.1 will play a key role in the proof of Theorem 1.3. However, the main tool in the proof of the theorem is the following:
**Proposition 4.2**.: _For all \(h>0\) we have_
\[|\mathbf{G}(\vec{L},h)|\sim\frac{2^{4\chi(X)}}{3^{3\chi(X)}}\cdot\pi^{\chi(X)} \cdot\frac{(e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}( \Sigma)^{-\chi(X)}}\]
_as \(\min_{e\in\operatorname{\mathbf{edge}}}L_{e}\to\infty\). Here \(\mathbf{G}(\vec{L},h)\) is as in (4.2)._
Proof.: Denote by \(\mathcal{G}(\vec{L},h)\subset\mathcal{G}\) the set of all realizations \(\phi:X\to\Sigma\) with \(\ell_{\Sigma}(\phi(e))\in[L_{e},L_{e}+h]\) for all \(e\in\mathbf{edge}\) and then let
\[G_{\varepsilon}(\vec{L},h) =\left\{\mathcal{G}^{\phi}\in\pi_{0}(\mathcal{G})\text{ with } \mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit}}\subset\mathcal{G}(\vec{L},h)\right\}\] \[\hat{G}_{\varepsilon}(\vec{L},h) =\left\{\mathcal{G}^{\phi}\in\pi_{0}(\mathcal{G})\text{ with } \mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit}}\cap\mathcal{G}(\vec{L},h)\neq \varnothing\right\}\]
be the sets of connected components of \(\mathcal{G}\) whose set of \(\varepsilon\)-critical realizations is fully contained in (resp. which meet) \(\mathcal{G}(\vec{L},h)\). It follows from Corollary 2.4 that there is some \(\ell_{0}\) such that as long as \(\vec{L}\) satisfies that \(\min L_{e}\geqslant\ell_{0}\), then each component listed in \(\hat{G}_{\varepsilon}(\vec{L},h)\) contains exactly one critical realization of the graph \(X\). Assuming from now on that we are in this situation we get that
\[|G_{\varepsilon}(\vec{L},h)|\leqslant|\mathbf{G}(\vec{L},h)|\leqslant|\hat{G}_ {\varepsilon}(\vec{L},h)|.\]
Now, from (2.10) in Proposition 2.6 we get that for all \(\delta>0\) there is \(\ell_{1}>\ell_{0}\) with
\[(1-\delta)\cdot\mathrm{vol}\left(\bigcup_{\mathcal{G}^{\phi}\in G _{\varepsilon}(\vec{L},h)}\mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit}} \right) <\left(\frac{2}{\sqrt{3}}\varepsilon^{2}\right)^{V}\cdot|G_{ \varepsilon}(\vec{L},h)|\] \[(1+\delta)\cdot\mathrm{vol}\left(\bigcup_{\mathcal{G}^{\phi}\in \hat{G}_{\varepsilon}(\vec{L},h)}\mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit} }\right) >\left(\frac{2}{\sqrt{3}}\varepsilon^{2}\right)^{V}\cdot|\hat{G}_{ \varepsilon}(\vec{L},h)|\]
whenever \(\varepsilon\) is small enough and \(\min L_{e}\geqslant\ell_{1}\). Altogether we get that for all \(\varepsilon\) positive and small we have
\[(1-\delta)\cdot\left(\frac{2}{\sqrt{3}}\varepsilon^{2}\right)^{-V}\cdot \mathrm{vol}\left(\bigcup_{\mathcal{G}^{\phi}\in G_{\varepsilon}(\vec{L},h)} \mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit}}\right) <|\mathbf{G}(\vec{L},h)|\] \[(1+\delta)\cdot\left(\frac{2}{\sqrt{3}}\varepsilon^{2}\right)^{-V} \cdot\mathrm{vol}\left(\bigcup_{\mathcal{G}^{\phi}\in\hat{G}_{\varepsilon}( \vec{L},h)}\mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit}}\right) >|\mathbf{G}(\vec{L},h)|\]
for all \(\vec{L}\) with \(\min L_{e}\geqslant\ell_{1}\).
We get now from (2.11) in Proposition 2.6 that there is \(\ell_{2}>\ell_{1}\) such that, as long as \(\varepsilon\) is under some threshold, we have that whenever \(\phi,\psi\in\mathcal{G}\) are homotopic \(\varepsilon\)-critical realizations with \(\ell(\phi(e)),\ell(\psi(e))\geqslant\ell_{2}\) for all \(e\in\mathbf{edge}\), then we have that the lengths \(\ell(\phi(e))\) and \(\ell(\psi(e))\) differ by at most \(2\varepsilon\) for each edge \(e\in\mathbf{edge}\). This implies that for any such \(\vec{L}\) and \(\varepsilon\) we have
\[\mathcal{G}_{\varepsilon-\mathrm{crit}}(\vec{L}+[2\varepsilon],h-4 \varepsilon) \subset\bigcup_{\mathcal{G}^{\phi}\in G_{\varepsilon}(\vec{L},h)} \mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit}}\] \[\mathcal{G}_{\varepsilon-\mathrm{crit}}(\vec{L}-[2\varepsilon],h+4 \varepsilon) \supset\bigcup_{\mathcal{G}^{\phi}\in\hat{G}_{\varepsilon}(\vec{L},h)} \mathcal{G}^{\phi}_{\varepsilon-\mathrm{crit}}\]
where \(\vec{L}+[t]\in\mathbb{R}^{\mathbf{edge}}\) is the vector with entries \((\vec{L}+[t])_{e}=\vec{L}_{e}+t\).
Let us summarize what we have obtained so far:
\[(1-\delta)\cdot\left(\frac{2}{\sqrt{3}}\varepsilon^{2}\right)^{-V} \cdot\operatorname{vol}\left(\mathcal{G}_{\varepsilon-\operatorname{crit}}( \vec{L}+[2\varepsilon],h-4\varepsilon)\right)<|\mathbf{G}(\vec{L},h)|\] \[(1+\delta)\cdot\left(\frac{2}{\sqrt{3}}\varepsilon^{2}\right)^{-V} \cdot\operatorname{vol}\left(\mathcal{G}_{\varepsilon-\operatorname{crit}}( \vec{L}-[2\varepsilon],h+4\varepsilon)\right)>|\mathbf{G}(\vec{L},h)|.\]
Our next goal is to compute the volumes on the left. Using the cover (2.2) we can compute volumes \(\operatorname{vol}(\mathcal{G}_{\varepsilon-\operatorname{crit}}(\vec{L},h))\) by integrating over \(\Sigma^{\mathbf{vert}}\) the cardinality of the intersection
\[\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}}(\vec{L},h)=\Pi^{-1}( \vec{x})\cap\mathcal{G}_{\varepsilon-\operatorname{crit}}(\vec{L},h)\]
of the fiber \(\Pi^{-1}(\vec{x})\) with the set we care about. In light of Corollary 3.3 we get in this way that
\[\operatorname{vol}(\mathcal{G}_{\varepsilon-\operatorname{crit}}(\vec{L},h)) =\int_{\Sigma^{\mathbf{vert}}}\left|\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}}(\vec{L},h)\right|\,d\vec{x}\] \[\overset{\text{Cor. 3.3}}{\sim}\int_{\Sigma^{\mathbf{vert}}}\varepsilon^{4|\chi(X)|}\cdot\left(\frac{2}{3}\right)^{2\chi(X)}\cdot\pi^{\chi(X)}\cdot\frac{(e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}(\Sigma)^{-3\chi(X)}}\,d\vec{x}\] \[=\varepsilon^{4|\chi(X)|}\cdot\left(\frac{2}{3}\right)^{2\chi(X)}\cdot\pi^{\chi(X)}\cdot\frac{(e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}(\Sigma)^{-3\chi(X)}}\cdot\operatorname{vol}(\Sigma)^{V}\] \[=\varepsilon^{4|\chi(X)|}\cdot\left(\frac{2}{3}\right)^{2\chi(X)}\cdot\pi^{\chi(X)}\cdot\frac{(e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}(\Sigma)^{-\chi(X)}}\]
where we have used that \(V=-2\chi(X)\) and where the asymptotics hold true when \(\min_{e}L_{e}\to\infty\). This means that whenever \(\min_{e}L_{e}\) is large enough we have
\[(1-\delta)\cdot\frac{2^{4\chi(X)}}{3^{3\chi(X)}}\cdot\pi^{\chi(X)}\cdot\frac{ (e^{h-4\varepsilon}-1)^{-3\chi(X)}\cdot e^{\sum_{e\in\mathbf{edge}}(L_{e}+2 \varepsilon)}}{\operatorname{vol}(\Sigma)^{-\chi(X)}}<|\mathbf{G}(\vec{L},h)|\]
\[(1+\delta)\cdot\frac{2^{4\chi(X)}}{3^{3\chi(X)}}\cdot\pi^{\chi(X)}\cdot\frac{ (e^{h+4\varepsilon}-1)^{-3\chi(X)}\cdot e^{\sum_{e\in\mathbf{edge}}(L_{e}-2 \varepsilon)}}{\operatorname{vol}(\Sigma)^{-\chi(X)}}>|\mathbf{G}(\vec{L},h)|\]
Since this is true for all \(\varepsilon>0\) we get that
\[(1-\delta)\cdot\frac{2^{4\chi(X)}}{3^{3\chi(X)}}\cdot\pi^{\chi(X)}\cdot\frac{ (e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}(\Sigma)^{- \chi(X)}}\leqslant|\mathbf{G}(\vec{L},h)|\]
\[(1+\delta)\cdot\frac{2^{4\chi(X)}}{3^{3\chi(X)}}\cdot\pi^{\chi(X)}\cdot\frac{ (e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}(\Sigma)^{- \chi(X)}}\geqslant|\mathbf{G}(\vec{L},h)|\]
and hence, since for all \(\delta>0\) we can choose \(L\) large enough so that the above bounds hold, we have
\[\frac{2^{4\chi(X)}}{3^{3\chi(X)}}\cdot\pi^{\chi(X)}\cdot\frac{(e^{h}-1)^{-3 \chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}(\Sigma)^{-\chi(X)}}\sim| \mathbf{G}(\vec{L},h)|\]
as we wanted to prove.
Armed with Lemma 4.1 and Proposition 4.2 we can now prove the theorem:
Proof of Theorem 1.3.: Let \(h>0\) be small, and for \(\vec{n}\in\mathbb{N}^{\mathbf{edge}}\) consider, with the same notation as in (4.2), the set \(\mathbf{G}(h\cdot\vec{n},h)\). Setting
\[\Delta(N)=\{\vec{n}\in\mathbb{N}^{\mathbf{edge}}\text{ with }\|\vec{n}\|\leqslant N\}\]
where \(\|\vec{n}\|=\sum_{e}n_{e}\), note that
\[\sum_{\vec{n}\in\Delta(N-E)}|\mathbf{G}(h\cdot\vec{n},h)|\leqslant|\mathbf{G} (h\cdot N)|\leqslant\sum_{\vec{n}\in\Delta(N)}|\mathbf{G}(h\cdot\vec{n},h)| \tag{4.4}\]
where \(\mathbf{G}(h\cdot N)=\mathbf{G}^{X}(h\cdot N)\) is as in (4.1) and where, once again, \(E=|\,\mathbf{edge}\,|\) is the number of edges of the graph \(X\). Finally, write
\[\kappa=\frac{2^{4\chi(X)}}{3^{3\chi(X)}}\cdot(\pi\cdot\text{vol}(\Sigma))^{ \chi(X)}\]
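Note that, since \(\operatorname{vol}(T^{1}\Sigma)=2\pi\cdot\operatorname{vol}(\Sigma)\), this is exactly the constant \(\left(\frac{2}{3}\right)^{3\chi(X)}\cdot\operatorname{vol}(T^{1}\Sigma)^{\chi(X)}\) appearing in the statement of Theorem 1.3.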
Proposition 4.2 now reads as
\[|\mathbf{G}(h\cdot\vec{n},h)|\sim\kappa\cdot(e^{h}-1)^{-3\chi(X)}\cdot e^{h \cdot\|\vec{n}\|}\]
where the asymptotic holds for fixed \(h\) when \(\min\vec{n}_{e}\to\infty\). This means that for all \(h\) and \(\delta\) there is \(n(h,\delta)\) with
\[|\mathbf{G}(h\cdot\vec{n},h)|>(\kappa-\delta)\cdot(e^{h}-1)^{-3\chi(X)}\cdot e ^{h\cdot\|\vec{n}\|}\]
and
\[|\mathbf{G}(h\cdot\vec{n},h)|<(\kappa+\delta)\cdot(e^{h}-1)^{-3\chi(X)}\cdot e ^{h\cdot\|\vec{n}\|}\]
for all \(\vec{n}\) with \(\min\vec{n}_{e}\geqslant n(h,\delta)\). It follows thus from the left side of (4.4) that
\[|\mathbf{G}(h\cdot N)| \geqslant\sum_{\vec{n}\in\Delta(N-E),\atop\min\vec{n}_{e}\geqslant n (h,\,\delta)}|\mathbf{G}(h\cdot\vec{n},h)|\] \[>(\kappa-\delta)(e^{h}-1)^{-3\chi(X)}\sum_{\vec{n}\in\Delta(N-E),\atop\min\vec{n}_{e}\geqslant n(h,\,\delta)}e^{h\cdot\|\vec{n}\|}\] \[=(\kappa-\delta)(e^{h}-1)^{-3\chi(X)}\sum_{K=0}^{N-E}P(K)\cdot e^ {h\cdot K}\]
where \(P(K)\) is the number of those \(\vec{n}\in\mathbb{N}^{\mathbf{edge}}\) with \(\|\vec{n}\|=K\) and \(\min\vec{n}_{e}\geqslant n(h,\delta)\). As \(K\) tends to \(\infty\) we have \(P(K)\sim\frac{1}{(E-1)!}K^{E-1}\): indeed, \(P(K)\) is the number of ways of writing \(K-E\cdot n(h,\delta)\) as an ordered sum of \(E\) non-negative integers, that is \(\binom{K-E\cdot n(h,\delta)+E-1}{E-1}\). Taking into account
that \(E=-3\chi(X)\) we get that for all \(N\) large enough we have
\[|\mathbf{G}(h\cdot N)| \succeq\frac{\kappa-\delta}{(-3\chi(X)-1)!}(e^{h}-1)^{-3\chi(X)} \sum_{K=0}^{N-E}K^{-3\chi(X)-1}\cdot e^{h\cdot K}\] \[=\frac{\kappa-\delta}{(-3\chi(X)-1)!}\left(\frac{e^{h}-1}{h} \right)^{-3\chi(X)}\sum_{K=0}^{N-E}(hK)^{-3\chi(X)-1}\cdot e^{h\cdot K}\cdot h\] \[\succeq\frac{\kappa-\delta}{(-3\chi(X)-1)!}\left(\frac{e^{h}-1}{ h}\right)^{-3\chi(X)}\int_{0}^{(N-E)h}x^{-3\chi(X)-1}e^{x}dx\]
where the symbol \(\succeq\) means that asymptotically the ratio between the left side and the right side is at least \(1\). When \(N\to\infty\) then the value of the integral is asymptotic to \(((N-E)\cdot h)^{-3\chi(X)-1}\cdot e^{(N-E)h}\), and this means that for all \(N\) large enough we have
\[|\mathbf{G}(h\cdot N)|\succeq\frac{\kappa-\delta}{(-3\chi(X)-1)!}\left(\frac{ e^{h}-1}{h}\right)^{-3\chi(X)}(Nh-Eh)^{-3\chi(X)-1}\cdot e^{Nh-Eh}\]
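Here we are using that for every fixed \(n\in\mathbb{N}\) we have \(\int_{0}^{T}x^{n}e^{x}\,dx\sim T^{n}\cdot e^{T}\) as \(T\to\infty\), as one sees by induction from the integration by parts identity

\[\int_{0}^{T}x^{n}e^{x}\,dx=T^{n}\cdot e^{T}-n\int_{0}^{T}x^{n-1}e^{x}\,dx.\]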
This being true for all \(\delta\) and all \(h\), and replacing \(Nh\) by \(L\), we have
\[|\mathbf{G}(L)|\succeq\frac{\kappa}{(-3\chi(X)-1)!}L^{-3\chi(X)-1}\cdot e^{L}\]
as \(L\to\infty\). In other words, we have established the desired asymptotic lower bound.
Starting with the upper bound we get, again for \(h\) positive and small, from the right side in (4.4) that
\[|\mathbf{G}(h\cdot N)|\leqslant\sum_{\begin{subarray}{c}\vec{n}\in\Delta(N), \\ \min\vec{n}_{e}\geqslant n(h,\,\delta)\end{subarray}}|\mathbf{G}(h\cdot\vec{n},h)|+\sum_{\begin{subarray}{c}\vec{n}\in\Delta(N),\\ \min\vec{n}_{e}\leqslant n(h,\,\delta)\end{subarray}}|\mathbf{G}(h\cdot\vec{n},h)|.\]
The same calculation as above yields that
\[\sum_{\begin{subarray}{c}\vec{n}\in\Delta(N),\\ \min\vec{n}_{e}\geqslant n(h,\,\delta)\end{subarray}}|\mathbf{G}(h\cdot\vec{n },h)|\preceq\\ \preceq\frac{\kappa+\delta}{(-3\chi(X)-1)!}\left(\frac{e^{h}-1}{h} \right)^{-3\chi(X)}(h(N+E))^{-3\chi(X)-1}\cdot e^{h(N+E)} \tag{4.5}\]
as \(N\to\infty\). On the other hand we get from Lemma 4.1 that there is \(C>0\) with \(|\mathbf{G}(h\cdot\vec{n},h)|\leqslant C\cdot e^{h\cdot\|\vec{n}\|}\) for all \(\vec{n}\). This means that
\[\sum_{\begin{subarray}{c}\vec{n}\in\Delta(N),\\ \min\vec{n}_{e}\leqslant n(h,\,\delta)\end{subarray}}|\mathbf{G}(h\cdot\vec{ n},h)|\leqslant C\cdot\sum_{K=1}^{N}Q(K)\cdot e^{h\cdot K}\]
where \(Q(K)\) is the number of those \(\vec{n}\in\mathbb{N}^{\mathbf{edge}}\) with \(\min\vec{n}_{e}<n(h,\delta)\) and \(\|\vec{n}\|=K\). When \(K\to\infty\) the function \(Q(K)\) is asymptotic to \(C^{\prime}\cdot K^{E-2}\) for some positive constant \(C^{\prime}\), meaning that we have
\[\sum_{\begin{subarray}{c}\vec{n}\in\Delta(N),\\ \min\vec{n}_{e}\leqslant n(h,\,\delta)\end{subarray}}|\mathbf{G}(h\cdot\vec{ n},h)|\leqslant\frac{(1+\delta)\cdot C\cdot C^{\prime}}{h^{E-1}}\cdot\sum_{K=1}^{N}h \cdot(hK)^{E-2}\cdot e^{h\cdot K}\]
for all \(N\) large enough. A similar estimation as the one above yields that there is another positive constant \(C^{\prime\prime}\) with
\[\sum_{\begin{subarray}{c}\vec{n}\in\Delta(N),\\ \min\vec{n}_{e}\leqslant n(h,\,\delta)\end{subarray}}|\mathbf{G}(h\cdot\vec{n},h)|\leqslant C^{\prime\prime}\cdot(N\cdot h)^{-3\chi(X)-2}\cdot e^{N\cdot h} \tag{4.6}\]
The quantity in the right hand side of (4.6) is negligible when compared to the right hand side of (4.5), and this means that we have
\[|\mathbf{G}(h\cdot N)|\preceq\frac{\kappa+2\delta}{(-3\chi(X)-1)!}\left(\frac {e^{h}-1}{h}\right)^{-3\chi(X)}(h(N+E))^{-3\chi(X)-1}\cdot e^{h(N+E)}\]
for all large \(N\). Since this holds true for all \(\delta\) and all \(h\), replacing \(hN\) by \(L\) we deduce that
\[|\mathbf{G}(L)|\preceq\frac{\kappa}{(-3\chi(X)-1)!}L^{-3\chi(X)-1}\cdot e^{L}.\]
Having now also established the upper asymptotic bound, we are done with the proof of the theorem.
Before moving on to other matters, we include an observation that we will use later on. The basic strategy of the proof of Theorem 1.3 was to decompose the problem of counting all critical realizations of at most some given length into the problem of counting those whose edge lengths are in a given box and then adding over all boxes. For most boxes, Proposition 4.2 gives a pretty precise estimation for the number of critical realizations in the box, and from Lemma 4.1 we get an upper bound for all boxes. We used these two to get the desired upper bound in the theorem, deducing that we could ignore the boxes where Proposition 4.2 does not apply. A very similar argument implies that the set of critical realizations where some edge is shorter than \(\ell_{0}\) is also negligible. We state this observation as a lemma:
**Lemma 4.3**.: _For all \(\ell\), all but a negligible set of critical realizations are \(\ell\)-long. _
It is probably clear from the context what negligible means here, but to be precise, we mean that the set is negligible inside the set of all critical realizations in the sense of (7.2) below.
## 5. Fillings
Let \(S_{g}\) be a compact, connected and oriented surface of genus \(g\) and with one boundary component. Below we will be interested in continuous maps
\[\beta:S_{g}\to\Sigma \tag{5.1}\]
which send \(\partial S_{g}\) to a closed geodesic \(\gamma=\beta(\partial S_{g})\). We will refer to such a map as a _filling of genus \(g\) of \(\gamma\)_, a _genus \(g\) filling of \(\gamma\)_, or just simply as a _filling of \(\gamma\)_ when the genus \(g\) is either undetermined or understood from the context. The genus of a curve \(\gamma\subset\Sigma\) is the infimum of all \(g\)'s for which there is a genus \(g\) filling (5.1) with \(\gamma=\beta(\partial S_{g})\). Note that the genus of a curve is infinite unless \(\gamma\) is homologically trivial, that is unless \(\gamma\) is represented by elements in the commutator subgroup of \(\pi_{1}(\Sigma)\). Indeed, as an element of \(\pi_{1}(S_{g})\) the boundary \(\partial S_{g}\) is a product of \(g\) commutators. It follows that if a curve \(\gamma\) in \(\Sigma\) has genus \(g\) then it is, when considered as an element in \(\pi_{1}(\Sigma)\), a product of \(g\) commutators. Conversely, if \(\gamma\) is a product of \(g\) commutators then there is a map as in (5.1) with \(S_{g}\) of genus \(g\). In a nutshell, what we have is that the genus and the commutator length of \(\gamma\) agree. We record this fact for ease of reference:
**Lemma 5.1**.: _The genus of a curve agrees with its commutator length. _
Continuing with the same notation and terminology, suppose that a curve \(\gamma\) in \(\Sigma\) has genus \(g\). We then refer to any \(\beta:S\to\Sigma\) as in (5.1) with \(S\) of genus \(g\) as a _minimal genus filling_. Minimal genus fillings have very nice topological properties. Indeed, suppose that \(\beta:S_{g}\to\Sigma\) is a filling as in (5.1) and suppose that there is an essential simple curve \(m\subset S_{g}\) with \(\beta(m)\) homotopically trivial. Then, performing surgery on the surface \(S_{g}\) and the map \(\beta\) we get a smaller genus filling \(\beta^{\prime}:S_{g^{\prime}}\to\Sigma\) with \(\beta^{\prime}(\partial S_{g^{\prime}})=\beta(\partial S_{g})\) and \(g^{\prime}<g\). It follows that if \(\beta:S_{g}\to\Sigma\) is a minimal genus filling for \(\gamma=\beta(\partial S_{g})\) then \(\beta\) is _geometrically incompressible_ in the sense that there are no elements in \(\ker(\beta_{*}:\pi_{1}(S_{g})\to\pi_{1}(\Sigma))\) which are represented by simple curves. We record this fact:
**Lemma 5.2**.: _Minimal genus fillings are geometrically incompressible. _
From now on we will be working exclusively with minimal genus fillings and the reader can safely add the words "minimal genus" every time they see the word "filling".
We will not be that much interested in individual fillings, but rather in homotopy classes of fillings. In particular, we will allow ourselves to select particularly nice fillings. More precisely, we will be working with _hyperbolic fillings_, by which we will understand a particular kind of pleated surface. We remind the reader that according to Thurston [18] (see also [5]) a pleated surface is a map from a hyperbolic surface to a hyperbolic manifold with the property that every point in the domain is contained in a geodesic segment which gets mapped isometrically.
**Definition**.: _A filling \(\beta:S\to\Sigma\) is hyperbolic if \(S\) is endowed with a hyperbolic metric with geodesic boundary and if the map \(\beta\) is such that every \(x\in S\) is contained in the interior of a geodesic arc \(I\) such that \(\beta\) maps \(I\) isometrically to a geodesic arc in \(\Sigma\)._
An important observation is that if \(\beta:S\to\Sigma\) is a hyperbolic filling, if \(x\in\partial S\), and if \(I\subset S\) is a geodesic segment with \(x\) in its interior, then \(I\subset\partial S\). It follows that hyperbolic fillings map the boundary isometrically.
**Lemma 5.3**.: _The restriction of any hyperbolic filling \(\beta:S\to\Sigma\) to \(\partial S\) is geodesic. _
We should not delay making sure that hyperbolic fillings exist:
**Proposition 5.4**.: _Every minimal genus filling is homotopic to a hyperbolic filling._
To prove this proposition we will first show that if \(\beta:S\to\Sigma\) is any filling, and if the surface \(S\) admits a triangulation with certain properties, then \(\beta\) is homotopic to a hyperbolic filling. For lack of a better name we will say that a triangulation \(\mathcal{T}\) of \(S\) is _useful_ if it satisfies the following three conditions:
1. \(\mathcal{T}\) has exactly \(g+1\) vertices \(v_{0},\dots,v_{g}\), with \(v_{0}\in\partial S\) and the others in the interior of \(S\).
2. There is a collection of edges \(I_{0},\dots,I_{g}\) of \(\mathcal{T}\) such that both endpoints of \(I_{i}\) are attached to \(v_{i}\) for all \(i=0,\dots,g\). Moreover \(I_{0}=\partial S\).
These two conditions are evidently pretty soft. It is the condition we will state next that makes useful triangulations actually useful. We first need a bit of notation: if \(I\) is any edge of \(\mathcal{T}\) other than \(I_{0},\dots,I_{g}\) then let \(G_{I}\) be the connected component of \(I\cup I_{0}\cup\dots\cup I_{g}\) containing \(I\).
3. For any edge \(I\) other than \(I_{0},\dots,I_{g}\) we have that the image of \(\beta_{*}:\pi_{1}(G_{I})\to\pi_{1}(\Sigma)\) is not abelian.
Note that \(\pi_{1}(G_{I})\) is a free group of rank 2. This means that its image \(\beta_{*}(\pi_{1}(G_{I}))\) has rank at most 2. Since we are assuming that it is not abelian, we actually get that it is a rank 2 free group. Free groups being Hopfian, we deduce that \(\beta_{*}:\pi_{1}(G_{I})\to\pi_{1}(\Sigma)\) is an isomorphism onto its image. In other words, \(\beta_{*}\) is injective on \(\pi_{1}(G_{I})\).
The following result makes clear why we care about such triangulations:
**Lemma 5.5**.: _If the domain \(S\) of a filling \(\beta:S\to\Sigma\) admits a useful triangulation \(\mathcal{T}\), then \(\beta\) is homotopic to a hyperbolic filling._
Proof.: As we just discussed, our conditions imply that \(\beta_{*}\) is injective on \(\pi_{1}(G_{I})\) for all \(I\). This implies in particular that each one of the exceptional edges \(I_{0},\dots,I_{g}\) of \(\mathcal{T}\) closes up to a simple closed curve \(\gamma_{0},\dots,\gamma_{g}\) in \(S\) which is mapped to a homotopically essential curve. This in turn means that the images of \(\gamma_{0},\dots,\gamma_{g}\) are homotopic to non-trivial closed geodesics. Since all these \(g+1\) curves are mutually disjoint we can then homotope \(\beta\) so that \(\beta(\gamma_{i})\) is a closed geodesic \(\hat{\gamma}_{i}\) for all \(i\).
Let now \(I\) be one of the remaining edges of \(\mathcal{T}\), let \(v_{i}\) and \(v_{j}\) be its (possibly equal) endpoints and note that \(G_{I}=\gamma_{i}\cup I\cup\gamma_{j}\). Since \(\beta_{*}\) is injective on \(\pi_{1}(G_{I},v_{i})\) we know that the elements \(\beta_{*}(\gamma_{i})\) and \(\beta_{*}(I*\gamma_{j}*I^{-1})\) do not commute and hence have distinct fixed points in \(\partial_{\infty}\mathbb{H}^{2}\). This seemingly weak property is all we need to run the standard construction of pleated surfaces by spinning the edges of the triangulation over the geodesics \(\hat{\gamma}_{i}\)--compare with the proof of Theorem I.5.3.6 in [5].
We can now prove Proposition 5.4.
Proof.: Let \(\beta:S\to\Sigma\) be a minimal genus filling of a geodesic \(\gamma=\beta(\partial S)\), say of genus \(g\). In light of Lemma 5.5, to prove that \(\beta\) is homotopic to a hyperbolic filling it suffices to show that \(S\) admits a useful triangulation. Well, let us start by taking \(g\) disjoint compact one-holed tori \(T_{1},\dots,T_{g}\subset S\). We claim that the restriction of \(\beta\) to each \(T_{i}\) is \(\pi_{1}\)-injective. Indeed, since we know that \(\beta\) is geometrically incompressible we deduce that \(\partial T_{i}\) is not in the kernel of the induced homomorphism at the fundamental group level. It follows that the image of \(\beta_{*}(\pi_{1}(T_{i}))\) cannot be abelian. Now, this implies that \(\beta_{*}(\pi_{1}(T_{i}))\) is free, and evidently of rank \(2\). Hence, again since free groups are Hopfian, we get that the restriction of \(\beta_{*}\) to \(\pi_{1}(T_{i})\) is injective for all \(i\).
Now, why do we care about that? Well, knowing that the restriction of \(\beta_{*}\) to \(\pi_{1}(T_{i})\) is injective for any \(i\) implies that the images under \(\beta\) of the non-boundary parallel simple closed curves in \(T_{i}\) determine infinitely many conjugacy classes of maximal abelian subgroups of \(\pi_{1}(\Sigma)\). We can thus choose for each \(i=1,\dots,g\) a non-boundary parallel simple closed curve \(\gamma_{i}\subset T_{i}\) such that if we also set \(\gamma_{0}=\partial S\) then we have that
* no two of the maximal abelian subgroups of \(\pi_{1}(\Sigma)\) containing \(\beta_{*}(\gamma_{0}),\dots,\beta_{*}(\gamma_{g})\) are conjugate to each other.
Choosing now a vertex \(v_{i}\) in each one of the curves \(\gamma_{i}\) we get from (*) that if \(I\) is any simple path joining \(v_{i}\) to \(v_{j}\) for \(i\neq j\), then the image of \(\beta_{*}(\pi_{1}(\gamma_{i}\cup I\cup\gamma_{j}))\) is not abelian.
The upshot of all of this is that any triangulation \(\mathcal{T}\) with
1. vertex set \(v_{0},\dots,v_{g}\),
2. such that there are edges \(I_{0},\dots,I_{g}\) incident on both ends to \(v_{i}\) and with image \(\gamma_{i}\), and
3. such that all other edges connect distinct endpoints,
is useful. To see that such a triangulation exists cut \(S\) along \(\gamma_{1},\dots,\gamma_{g}\). When doing this we get a \(2g+1\) holed sphere \(\Delta\) and each vertex \(v_{1},\dots,v_{g}\) arises twice--denote the two copies of \(v_{i}\) by \(v_{i}\) and \(v_{i}^{\prime}\) as in Figure 2. Now any triangulation of \(\Delta\) with vertex set \(v_{0},v_{1},v_{1}^{\prime},v_{2},v_{2}^{\prime},\dots,v_{g},v_{g}^{\prime}\) and which
* contains a sequence of \(2g-1\) edges yielding a path \[[v_{1},v_{2}],[v_{2},v_{3}],\dots,[v_{g},v_{1}^{\prime}],[v_{1}^{\prime},v_{2 }^{\prime}],\dots,[v_{g-1}^{\prime},v_{g}^{\prime}]\] and
* has all other edges incident to \(v_{0}\)

yields a triangulation of \(S\) satisfying (1)-(3) above. Having proved that there is a useful triangulation, we get from Lemma 5.5 that \(\beta\) is homotopic to a hyperbolic filling, as we needed to prove.
Being able to work with hyperbolic fillings is going to be key in the next section, where we bound the number of closed geodesics \(\gamma\) in \(\Sigma\) with length \(\ell_{\Sigma}(\gamma)\leqslant L\) and which admit at least two homotopically distinct fillings.
**Definition**.: _Two fillings \(\beta_{1}:S_{1}\to\Sigma\) and \(\beta_{2}:S_{2}\to\Sigma\) are homotopic if there is a homeomorphism \(F:S_{1}\to S_{2}\) such that \(\beta_{1}\) is homotopic to \(\beta_{2}\circ F\)._
Evidently, to bound the number of pairs of non-homotopic fillings with the same boundary we will need some criterion to decide when two fillings are homotopic. To be able to state it note that if \(\beta_{1}:S_{1}\to\Sigma\) and \(\beta_{2}:S_{2}\to\Sigma\) are hyperbolic fillings of the same closed geodesic \(\gamma=\beta_{1}(\partial S_{1})=\beta_{2}(\partial S_{2})\), then there is
* a closed hyperbolic surface \(S=S_{1}\cup_{\partial S_{1}=\partial S_{2}}S_{2}\) obtained by isometrically gluing \(S_{1}\) and \(S_{2}\) along the boundary, in such a way that there is a pleated surface \(\Theta:S\to\Sigma\) with \(\Theta|_{S_{i}}=\beta_{i}\).
For lack of a better name we refer to \(\Theta:S\to\Sigma\) as the _pseudo-double_ associated to \(\beta_{1}\) and \(\beta_{2}\), and to the curve \(\partial S_{1}=\partial S_{2}\subset S\) as the _crease_ of the pseudo-double.
Our criterion to decide if the two hyperbolic fillings \(\beta_{1}\) and \(\beta_{2}\) of a geodesic \(\gamma\) are homotopic will be in terms of the structure of the \(\varepsilon_{0}\)-thin part of the domain of the associated pseudo-double, where we choose once and for all the constant
\[\varepsilon_{0}=\frac{1}{10}\cdot\min\left\{\text{Margulis constant of }\mathbb{H}^{2},\text{ systole of }\Sigma\right\}. \tag{5.2}\]
The following is our criterion.
Figure 2. Construction of the useful triangulation in Proposition 5.4: The \(2g+1\) holed sphere obtained from \(S\) by cutting along \(\gamma_{1},\dots,\gamma_{g}\). The vertices \(v_{1},\dots,v_{g}\) and their copies \(v_{1}^{\prime},\dots,v_{g}^{\prime}\) are joined by the sequence of edges \([v_{1},v_{2}],[v_{2},v_{3}],\dots,[v_{g},v_{1}^{\prime}],[v_{1}^{\prime},v_{ 2}^{\prime}],\dots,[v_{g-1}^{\prime},v_{g}^{\prime}]\). To complete the triangulation add as many edges as needed joining \(v_{0}\) to the other vertices.
**Lemma 5.6**.: _Suppose that \(\beta_{1}:S_{1}\to\Sigma\) and \(\beta_{2}:S_{2}\to\Sigma\) are hyperbolic minimal genus fillings of genus \(g\) of a geodesic \(\gamma\), let \(\Theta:S\to\Sigma\) be the associated pseudo-double, and denote its crease \(\partial S_{1}=\partial S_{2}\subset S\) also by \(\gamma\). If_
1. _the_ \(\varepsilon_{0}\)_-thin part of_ \(S\) _has_ \(6g-3\) _connected components_ \(U_{1},\dots,U_{6g-3}\)_, and_
2. \(\gamma\) _traverses each_ \(U_{i}\) _exactly twice,_
_then \(\beta_{1}\) and \(\beta_{2}\) are homotopic._
Proof.: Note that the surface \(S\) has genus \(2g\). In particular, the assumption on the \(\varepsilon_{0}\)-thin part implies that there is a pants decomposition \(P\) in \(S\) consisting of closed geodesics of length at most \(2\varepsilon_{0}\). Now, since \(\Theta\) is evidently \(1\)-Lipschitz and since \(2\varepsilon_{0}\) is less than the systole of \(\Sigma\) we get that each one of the components of \(P\) is mapped to a homotopically trivial curve. The assumption that the crease \(\gamma\), a simple curve, intersects each component of the pants decomposition exactly twice implies that \(\gamma\) cuts each pair of pants into two hexagons. Paint blue those in \(S_{1}\) and yellow those in \(S_{2}\).
We can now construct a homotopy relative to the crease as follows. Each component of \(P\) consists of a blue arc and a yellow arc, and its image under \(\Theta\) is homotopically trivial. This implies that we can homotope all yellow arcs, relative to their endpoints, to the corresponding blue arcs. Extend this homotopy to a homotopy fixing \(\partial S_{2}\) and defined on the whole of \(S_{2}\). Now the boundary of each yellow hexagon is mapped to the boundary of the corresponding blue hexagon. Since \(\Sigma\) has trivial \(\pi_{2}\) we deduce that those two hexagons are homotopic relative to their boundary. Proceeding like this with each hexagon we get a homotopy, relative to the crease, of the yellow part of \(S\) to the blue part. This is what we wanted to get.
## 6. Bounding the number of multifillings
The goal of this section is to prove Theorem 1.4:
**Theorem 1.4**.: _For any \(g\) there are at most \(\mathbf{const}\cdot L^{6g-5}\cdot e^{\frac{L}{2}}\) genus \(g\) closed geodesics \(\gamma\) in \(\Sigma\) with length \(\ell(\gamma)\leqslant L\) and with two non-homotopic fillings \(\beta_{1}:S_{1}\to\Sigma\) and \(\beta_{2}:S_{2}\to\Sigma\) of genus \(\leqslant g\)._
The proof of Theorem 1.4 turns out to be kind of involved. We hope that the reader will not despair with all the weird objects and rather opaque statements they will find below.
### Wired surfaces
Under a _wired surface_ we will understand a compact connected simplicial complex \(\Delta\) obtained as follows: start with a compact, possibly disconnected, triangulated surface \(\operatorname{Surf}(\Delta)\) and an, evidently finite, subset \(P_{\Delta}\) of the set of vertices of the triangulation of \(\operatorname{Surf}(\Delta)\) such that every connected component of \(\operatorname{Surf}(\Delta)\setminus P_{\Delta}\) has negative Euler-characteristic. We think of the elements in \(P_{\Delta}\) as _plugs_. Now attach \(1\)-simplices, to which we will refer as _wires_, by attaching both end-points to plugs, and do so in
such a way that each plug arises exactly once as the end-point of a wire--we denote the set of wires by \(\mathbf{wire}(\Delta)\). A wired surface \(\Delta\) without wires, that is one with \(\Delta=\operatorname{Surf}(\Delta)\), is said to be _degenerate_. Otherwise it is _non-degenerate_.
How will wired surfaces arise? We will say that a pair \((\mathbb{F},\mathbb{T})\) is a _decoration_ of a surface \(S\) if
* \(\mathbb{F}\) is a partial foliation of \(S\) supported by the union of a finite collection of disjoint, essential and non-parallel, cylinders, and
* \(\mathbb{T}\) is a triangulation of the complement of the interior of those cylinders.
Now, if \((\mathbb{F},\mathbb{T})\) is a decoration of \(S\), then we get an associated wired surface \(\Delta=S/\sim_{\mathbb{F}}\) by collapsing each leaf of \(\mathbb{F}\) to a point and dividing each one of the arising bigons into two triangles by adding a vertex in the interior of the bigon. We will say that the quotient map \(\pi:S\to\Delta\) is a _resolution_ of \(\Delta\) with _associated foliation_\(\mathbb{F}\). Every wired surface admits an essentially unique resolution, unique in the sense that any two differ by a PL-homeomorphism mapping one of the foliations to the other one.
Suppose now that \(\Delta\) is a wired surface. A _simple curve_ on \(\Delta\) is a map \(\eta:\mathbb{S}^{1}\to\Delta\) such that there are a resolution \(\pi:S\to\Delta\) with associated foliation \(\mathbb{F}\) and an essential simple curve \(\eta^{\prime}:\mathbb{S}^{1}\to S\) which is transversal to \(\mathbb{F}\) and with \(\eta=\pi\circ\eta^{\prime}\).
Note that transversality to the associated foliation implies that if \(\eta\) is a simple curve of a wired surface \(\Delta\) then \(\eta\cap\pi^{-1}(\Delta\setminus\operatorname{Surf}(\Delta))\) consists of a collection of segments, each one of them mapped homeomorphically to a wire. If \(I\) is such a wire then we denote by \(n_{I}(\eta)\) the _weight_ of \(\eta\) in \(I\), that is the number of connected components of \(\eta\cap\pi^{-1}(\Delta\setminus\operatorname{Surf}(\Delta))\) which are mapped homeomorphically to \(I\), or in other words, the number of times that \(\eta\) crosses \(I\). We refer to the vector
\[\vec{n}_{\Delta}(\eta)=(n_{I}(\eta))_{I\in\mathbf{wire}(\Delta)} \tag{6.1}\]
as the _weight vector for \(\eta\) in \(\Delta\)_. The intersection of the image of \(\eta\) with the surface part \(\operatorname{Surf}(\Delta)\) is a simple arc system with endpoints in the set \(P_{\Delta}\). Note that up to homotoping \(\eta\) to another simple curve in \(\Delta\) we might assume that all the components of the arc system \(\eta\cap\operatorname{Surf}(\Delta)\) are essential. This is equivalent to asking that for each wire \(I\) we have \(n_{I}(\eta)\leqslant n_{I}(\eta^{\prime})\) for any other simple curve \(\eta^{\prime}\) in \(\Delta\) homotopic to \(\eta\). We will suppose from now on, without further mention, that all simple curves in \(\Delta\) satisfy these minimality requirements.
So far, wired surfaces are just topological objects. Let us change this. Under a _hyperbolic wired surface_ we understand a wired surface \(\Delta\) whose surface part \(\operatorname{Surf}(\Delta)\) is endowed with a piece-wise hyperbolic metric, that is, one with respect to which the simplexes in the triangulation of \(\operatorname{Surf}(\Delta)\) are isometric to hyperbolic triangles.
Let \(\Delta\) be a hyperbolic wired surface, and as always let \(\Sigma\) be our fixed hyperbolic surface. We will say that a map \(\Xi:\Delta\to\Sigma\) is _tight_ if the following holds:
* \(\Xi\) maps every wire to a geodesic segment, and
* \(\Xi\) is an isometry when restricted to each one of the simplexes in the triangulation of \(\operatorname{Surf}(\Delta)\).
We will be interested in counting pairs \((\Xi:\Delta\to\Sigma,\gamma)\) consisting of tight maps and simple curves. Evidently, without further restrictions, there could be infinitely many such pairs. What we are going to count is pairs were the curve has bounded length. We are however going to use a pretty strange notion of length. Consider namely for some given small but positive \(\varepsilon\) the following quantity
\[\ell^{\varepsilon}_{\Xi}(\gamma)=\varepsilon\!\cdot\!\ell_{\operatorname{Surf }(\Delta)}(\gamma\cap\operatorname{Surf}(\Delta))+\sum_{I\in\operatorname{ \mathbf{wire}}}n_{I}(\gamma)\!\cdot\!\max\left\{\ell_{\Sigma}(\Xi(I))-\frac{1} {\varepsilon},0\right\} \tag{6.2}\]
It is evident that this notion of length is exactly tailored to what we will need later on, but let us try to parse what (6.2) actually means. What is the role of \(\varepsilon\)? Well, if we think of the length as a measure of the cost of a journey, then the first \(\varepsilon\) just makes traveling along the surface part pretty cheap, meaning that for the same price we can cruise longer over there. Along the same lines, when traveling through the wires, we only pay when the wires are very long.
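To see the formula in action on a toy example (the numbers are made up purely for illustration): suppose that \(\gamma\) crosses a single wire \(I\) twice, that \(\ell_{\Sigma}(\Xi(I))=10\), that the part of \(\gamma\) running through \(\operatorname{Surf}(\Delta)\) has length \(7\), and that \(\varepsilon=\frac{1}{2}\). Then

\[\ell^{\varepsilon}_{\Xi}(\gamma)=\frac{1}{2}\cdot 7+2\cdot\max\{10-2,0\}=3.5+16=19.5,\]

so the long wire accounts for almost all of the cost, while a wire with \(\ell_{\Sigma}(\Xi(I))\leqslant\frac{1}{\varepsilon}=2\) would have contributed nothing at all.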
**Lemma 6.1**.: _Let \(\Delta\) be a non-degenerate hyperbolic wired surface with set of wires \(\operatorname{\mathbf{wire}}=\operatorname{\mathbf{wire}}(\Delta)\). Fix a tight map \(f:\Delta\to\Sigma\), a positive integer vector \(\vec{n}=(n_{I})_{I\in\operatorname{\mathbf{wire}}}\in\mathbb{N}_{+}^{ \operatorname{\mathbf{wire}}}\), and denote by_
\[\min=\min_{I\in\operatorname{\mathbf{wire}}}n_{I}\geqslant 1\text{ and }d=| \{I\in\operatorname{\mathbf{wire}}\text{ with }n_{I}=\min\}|\]
_the smallest entry of \(\vec{n}\) and the number of times that this value is taken._
_For any \(\varepsilon>0\) there are at most \(\operatorname{\mathbf{const}}\!\cdot\!L^{d-1}\cdot e^{\frac{L}{\min}}\) homotopy classes of pairs \((\Xi:\Delta\to\Sigma,\gamma)\) where \(\Xi\) is a tight map with \(\Xi|_{\operatorname{Surf}(\Delta)}=f|_{\operatorname{Surf}(\Delta)}\) and where \(\gamma\) is a simple multicurve in \(\Delta\) with \(n_{I}(\gamma)\geqslant n_{I}\) for every wire \(I\) and with \(\ell^{\varepsilon}_{\Xi}(\gamma)\leqslant L\)._
Note that the fact that the obtained bound, or rather its rate of growth, does not depend on \(\varepsilon\) implies that actually the only way to get many homotopy classes is to play with the wires. In fact, since the given bound only depends on \(d\) and \(\min\), the only wires that matter are those which the curve crosses as little as possible.
Another comment before launching the proof. Namely, what happens if the wired surface \(\Delta\) in Lemma 6.1 is degenerate? Well, if there are no wires, then \(\Delta\) is nothing other than a surface with a (piece-wise hyperbolic) metric. In such a surface there are at most \(\operatorname{\mathbf{const}}\!\cdot\!L^{3\cdot|\chi(\Delta)|}\) simple multi-curves of length at most \(L\)--see for example [8]. This means that for a degenerate
wired surface one actually gets a polynomial bound instead of an exponential one.
We are now ready to launch the proof of Lemma 6.1:
Proof.: Note that, since the map \(\Xi\) is fixed on \(\operatorname{Surf}(\Delta)\), we get that the homotopy type of the map, or even the map itself, is determined by what happens to the wires. In particular, as in the proof of Lemma 4.1 we get that if we give ourselves a positive vector \(\vec{\lambda}=(\lambda_{I})_{I\in\mathbf{wire}}\in\mathbb{R}_{+}^{\mathbf{ wire}}\), then there are at most \(\mathbf{const}\cdot e^{\|\vec{\lambda}\|}\) homotopy classes of tight maps \(\Xi:\Delta\to\Sigma\) with \(\Xi|_{\operatorname{Surf}(\Delta)}=f\) and such that
\[\lambda_{I}\leqslant\ell_{\Sigma}(\Xi(I))\leqslant\lambda_{I}+1\text{ for all }I\in\mathbf{wire}\,. \tag{6.3}\]
As always, we have set \(\|\vec{\lambda}\|=\lambda_{1}+\cdots+\lambda_{r}\).
Now, we are not counting homotopy classes of maps, but rather of pairs \((\Xi:\Delta\to\Sigma,\gamma)\) where the multicurve \(\gamma\) satisfies \(\ell_{\Xi}^{\varepsilon}(\gamma)\leqslant L\). Note that, if \(\Xi\) satisfies (6.3) then our given length bound \(\ell_{\Xi}^{\varepsilon}(\gamma)\leqslant L\) implies that
\[\ell_{\operatorname{Surf}(\Delta)}(\gamma\cap\operatorname{Surf} (\Delta)) =\frac{1}{\varepsilon}\left(\ell_{\Xi}^{\varepsilon}(\gamma)- \sum_{I\in\mathbf{wire}}n_{I}(\gamma)\cdot\max\left\{\ell_{\Sigma}(\Xi(I))- \frac{1}{\varepsilon},0\right\}\right)\] \[\leqslant\frac{1}{\varepsilon}\left(L-\sum_{I\in\mathbf{wire}}n_ {I}\cdot\max\left\{\lambda_{I}-\frac{1}{\varepsilon},0\right\}\right)\] \[\leqslant\frac{1}{\varepsilon}\left(L-\langle\vec{n},\vec{ \lambda}\rangle+\frac{1}{\varepsilon}\cdot\|\vec{n}\|\right)\]
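where the final inequality comes from applying \(\max\{t,0\}\geqslant t\) to each wire:

\[\sum_{I\in\mathbf{wire}}n_{I}\cdot\max\left\{\lambda_{I}-\frac{1}{\varepsilon},0\right\}\geqslant\sum_{I\in\mathbf{wire}}n_{I}\cdot\left(\lambda_{I}-\frac{1}{\varepsilon}\right)=\langle\vec{n},\vec{\lambda}\rangle-\frac{1}{\varepsilon}\cdot\|\vec{n}\|.\]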
Now, since the number of homotopy classes of simple arc systems of length at most a given bound grows at most polynomially in that bound, we deduce that for each \(\Xi\) satisfying (6.3) there are at most \(\mathbf{const}\cdot(L-\langle\vec{n},\vec{\lambda}\rangle)^{\mathbf{const}}\) homotopy classes of simple multicurves \(\gamma\) in \(\Delta\) with \(\ell_{\Xi}^{\varepsilon}(\gamma)\leqslant L\) and satisfying \(n_{I}(\gamma)\geqslant n_{I}\) for each wire \(I\). Putting all of this together we get the following:
**Fact**.: _There are at most \(\mathbf{const}\cdot(L-\langle\vec{n},\vec{\lambda}\rangle)^{\mathbf{const}} \cdot e^{\|\lambda\|}\) homotopy classes of pairs \((\Xi:\Delta\to\Sigma,\gamma)\) where \(\Xi\) is a tight map with \(\Xi|_{\operatorname{Surf}(\Delta)}=f\), satisfying (6.3), and where \(\gamma\) is a simple curve in \(\Delta\) with \(n_{I}(\gamma)\geqslant n_{I}\) for every wire \(I\) and with \(\ell_{\Xi}^{\varepsilon}(\gamma)\leqslant L\). _
Now we get that the quantity we want to bound, that is, the number of homotopy classes of pairs \((\Xi:\Delta\to\Sigma,\gamma)\) where \(\Xi\) is a tight map with \(\Xi|_{\operatorname{Surf}(\Delta)}=f\) and where \(\gamma\) is a simple curve in \(\Delta\) with \(n_{I}(\gamma)\geqslant n_{I}\) for every wire \(I\) and with \(\ell_{\Xi}^{\varepsilon}(\gamma)\leqslant L\), is bounded from above by
\[\sum_{\lambda\in\mathbb{N}^{\mathbf{wire}},\ \|\lambda\|\leqslant L}\mathbf{ const}\cdot(L-\langle\vec{n},\vec{\lambda}\rangle)^{\mathbf{const}}\cdot e^{\| \lambda\|}\]
This quantity is then bounded from above by the value of the integral
\[\mathbf{const}\int_{\{\vec{x}\in\mathbb{R}_{+}^{\mathbf{wire}}\,:\,\langle\vec{n},\vec{x}\rangle\leqslant L\}}(L-\langle\vec{n},\vec{x}\rangle)^{\mathbf{const}}\cdot e^{\|\vec{x}\|}dx\]
and now it is a calculus problem that we leave to the reader to check that this integral is bounded by \(\mathbf{const}\!\cdot\!L^{d-1}\cdot e^{\frac{L}{n_{\min}}}\) where \(n_{\min}=\min_{I}n_{I}\geqslant 1\) and where \(d=|\{I\in\mathbf{wire}\text{ with }n_{I}=n_{\min}\}|\). We are done.
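One way to carry out the calculus problem, at least in the model case where all wires have the minimal weight \(m=n_{\min}\) (so that \(d=|\mathbf{wire}|\)), is the following; wires of larger weight only contribute bounded factors after pulling out \(e^{\frac{L}{m}}\), namely integrals of the form \(\int_{0}^{\infty}e^{(1-\frac{n_{I}}{m})t}dt<\infty\), and hence do not change the rate of growth. In the model case the constraint reads \(\|\vec{x}\|\leqslant\frac{L}{m}\), the slice \(\{\|\vec{x}\|=s\}\cap\mathbb{R}^{d}_{+}\) has \((d-1)\)-dimensional volume \(\mathbf{const}\cdot s^{d-1}\), and hence

\[\int_{\{\langle\vec{n},\vec{x}\rangle\leqslant L\}}(L-\langle\vec{n},\vec{x}\rangle)^{\mathbf{const}}\cdot e^{\|\vec{x}\|}dx\leqslant\mathbf{const}\int_{0}^{L/m}s^{d-1}(L-ms)^{\mathbf{const}}e^{s}\,ds.\]

Substituting \(u=\frac{L}{m}-s\) we get

\[\int_{0}^{L/m}s^{d-1}(L-ms)^{\mathbf{const}}e^{s}\,ds=e^{\frac{L}{m}}\int_{0}^{L/m}\Big(\tfrac{L}{m}-u\Big)^{d-1}(mu)^{\mathbf{const}}e^{-u}\,du\leqslant\mathbf{const}\cdot L^{d-1}\cdot e^{\frac{L}{m}},\]

since \((\frac{L}{m}-u)^{d-1}\leqslant(\frac{L}{m})^{d-1}\) and \(\int_{0}^{\infty}u^{\mathbf{const}}e^{-u}\,du<\infty\).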
At this point we know how to bound the number of homotopy classes of tight maps of wired surfaces. It is time to explain why we care about being able to do so.
### Pseudo-doubles
Earlier, just before the statement of Lemma 5.6 we introduced the pseudo-double associated to two fillings. Let us extend that terminology a bit: Under a _pseudo-double_ we understand a pair \((\Theta:S\to\Sigma,\gamma)\) where
* \(\Theta:S\to\Sigma\) is a pleated surface with \(S\) closed,
* \(\gamma\subset S\), the _crease_, is a simple curve cutting \(S\) into two connected components, and
* \(\Theta\) maps \(\gamma\) to a geodesic in \(\Sigma\) and its restriction \(\Theta|_{S\setminus\gamma}\) to the complement of \(\gamma\) is geometrically incompressible.
Two pseudo-doubles \((\Theta:S\to\Sigma,\gamma)\) and \((\Theta^{\prime}:S^{\prime}\to\Sigma,\gamma^{\prime})\) are _homotopic_ if there is a homeomorphism \(f:S\to S^{\prime}\) with \(\gamma^{\prime}\) homotopic to \(f(\gamma)\) and with \(\Theta\) homotopic to \(\Theta^{\prime}\circ f\).
Note that this terminology is consistent with the use of the word pseudo-double in the previous section.
Recall now that we fixed earlier some \(\varepsilon_{0}\) satisfying (5.2) and note that if \((\Theta:S\to\Sigma,\gamma)\) is a pseudo-double then the crease \(\gamma\), being separating, crosses every component \(U\) of the thin part \(S^{\leqslant\varepsilon_{0}}\) an even number of times \(\iota(\gamma,U)\in 2\mathbb{N}\). In fact, since by the choice of \(\varepsilon_{0}\) we get that \(\Theta\) maps the core of every component of the thin part to a homotopically trivial curve, and since the restriction of \(\Theta\) to \(S\setminus\gamma\) is geometrically incompressible we get that actually
\[\iota(\gamma,U)\geqslant 2\text{ for all components }U\text{ of the thin part of }S, \tag{6.4}\]
by which we mean that \(\gamma\) traverses each such \(U\) at least twice.
Our next goal is to bound, for growing \(L\), the number of homotopy classes of pseudo-doubles \((\Theta:S\to\Sigma,\gamma)\) where \(S\) has given topological type, where there are precisely \(d\) components \(U\) of the thin part with \(\iota(\gamma,U)=2\), and where \(\ell_{S}(\gamma)\leqslant L\):
**Proposition 6.2**.: _Let \(\varepsilon_{0}\) be as in (5.2) and suppose that \(S_{0}\) is a closed orientable surface._
1. _For every_ \(d\geqslant 1\) _there are at most_ \(\mathbf{const}\!\cdot\!L^{d-1}\cdot e^{\frac{L}{2}}\) _homotopy classes of pseudo-doubles_ \((\Theta:S\to\Sigma,\gamma)\) _where_ \(S\) _is homeomorphic to_ \(S_{0}\)_, where the thin part_ \(S^{\leqslant\varepsilon_{0}}\) _of_ \(S\) _has exactly_ \(d\) _components_ \(U\) _with_ \(\iota(\gamma,U)=2\)_, and where_ \(\gamma\) _has length_ \(\ell_{S}(\gamma)\leqslant L\)
2. _There are at most_ \(\operatorname{\mathbf{const}}\cdot e^{\frac{L}{3}}\) _homotopy classes of pseudo-doubles_ \((\Theta:S\to\Sigma,\gamma)\) _where_ \(S\) _is homeomorphic to_ \(S_{0}\)_, where there is no component_ \(U\) _of the thin part_ \(S^{\leqslant\varepsilon_{0}}\) _with_ \(\iota(\gamma,U)=2\)_, and where_ \(\gamma\) _has length_ \(\ell_{S}(\gamma)\leqslant L\)_._
Recall that we declared a _decoration_ \((\mathbb{F},\mathbb{T})\) of a surface \(S\) to be a pair consisting of a partial foliation \(\mathbb{F}\) supported by the union of disjoint essential and non-parallel cylinders, and of a triangulation \(\mathbb{T}\) of the complement of the interior of those cylinders. To see where these decorations come from, assume that \(S\) is a hyperbolic surface.
* The \(\varepsilon_{0}\)-thick part of \(S\) is metrically bounded in the sense that it has a triangulation \(\mathbb{T}\) whose vertices are \(\frac{1}{10}\varepsilon_{0}\)-separated, and whose edges have length at most \(\frac{1}{3}\varepsilon_{0}\) and are geodesic unless contained in the boundary \(\partial(S^{\geqslant\varepsilon_{0}})\) of the \(\varepsilon_{0}\)-thick part--in that case they have just constant curvature.
* The components of the \(\varepsilon_{0}\)-thin part \(S^{\leqslant\varepsilon_{0}}\) are not metrically bounded but still have a very simple structure: they are cylinders foliated by constant curvature circles, namely the curves at constant distance from the geodesic at the core of the cylinder.
Putting these things together, that is, the triangulation \(\mathbb{T}\) of the thick part and the foliation \(\mathbb{F}\) of the thin part, we get what we will refer to as a _thin-thick decoration \((\mathbb{F},\mathbb{T})\) of \(S\)_--note that the triangulation \(\mathbb{T}\) is not unique, and this is why we use an indefinite article. More importantly, note also that the number of components of the thin part is bounded just in terms of the topology of \(S\), and that the number of vertices in the triangulation \(\mathbb{T}\) is bounded by some number depending on the chosen \(\varepsilon_{0}\) and again the topological type of \(S\). Since we have fixed \(\varepsilon_{0}\), it follows that every compact surface \(S_{0}\) admits finitely many decorations \((\mathbb{F}_{1},\mathbb{T}_{1}),\ldots,(\mathbb{F}_{r},\mathbb{T}_{r})\) such that if \(S\) is any hyperbolic surface homeomorphic to \(S_{0}\) then there is a homeomorphism \(\sigma:S_{0}\to S\) and some \(i\) such that \((\sigma(\mathbb{F}_{i}),\sigma(\mathbb{T}_{i}))\) is an \(\varepsilon_{0}\)-thin-thick decoration of \(S\). We state this fact for later reference:
**Lemma 6.3**.: _For every closed surface \(S_{0}\) there are finitely many decorations \((\mathbb{F}_{1},\mathbb{T}_{1}),\ldots,(\mathbb{F}_{r},\mathbb{T}_{r})\) such that for any hyperbolic surface \(S\) homeomorphic to \(S_{0}\) there are \(i\in\{1,\ldots,r\}\) and a homeomorphism \(\sigma:S_{0}\to S\) such that \(\sigma(\mathbb{F}_{i},\mathbb{T}_{i})\) is an \(\varepsilon_{0}\)-thin-thick decoration of \(S\). _
After these comments, we can finally launch the proof of Proposition 6.2:
Proof of Proposition 6.2.: Starting with the proof of (1), note that from Lemma 6.3 we get that it suffices to prove for each fixed decoration \((\mathbb{F},\mathbb{T})\) of \(S_{0}\) that
* there are at most \(\operatorname{\mathbf{const}}\cdot L^{d-1}\cdot e^{\frac{L}{2}}\) homotopy classes of pseudo-doubles \((\Theta:S\to\Sigma,\gamma)\) where there is a homeomorphism \(\sigma:S_{0}\to S\)
such that \((\sigma(\mathbb{F}),\sigma(\mathbb{T}))\) is a decoration of the \(\varepsilon_{0}\)-thin-thick decomposition of \(S\), where the thin part \(S^{\leqslant\varepsilon_{0}}\) of \(S\) has exactly \(d\) components \(U\) with \(\iota(\gamma,U)=2\), and where \(\gamma\) has length \(\ell_{S}(\gamma)\leqslant L\).
Assuming from now on that we have an \(\varepsilon_{0}\)-thin-thick decoration \((\mathbb{F},\mathbb{T})\), let \(\Delta=S_{0}/\sim_{\mathbb{F}}\) be the wired surface obtained from \(S_{0}\) by collapsing each leaf of \(\mathbb{F}\) to a point and let \(\pi:S_{0}\to\Delta\) be the corresponding quotient map.
The reason why we consider \(\Delta\) is that, as we already mentioned earlier, we have that by the choice of \(\varepsilon_{0}\) all the leaves of the foliation \(\sigma(\mathbb{F})\) are mapped by \(\Theta\) to homotopically trivial curves. This implies that there is a map
\[\Xi:\Delta\to\Sigma\]
mapping the wires of \(\Delta\) to geodesic segments and with \(\Theta\circ\sigma\) homotopic to \(\Xi\circ\pi\) by a homotopy whose tracks are bounded by **const**. In particular, the edges in \(\mathbb{T}\) are mapped to paths homotopic to geodesic paths of length at most **const**. Pulling tight relative to the vertices, we can assume that \(\Xi\) maps 2-dimensional simplices in the triangulation of \(\Delta\) to hyperbolic triangles. Note that the bound on the lengths of the images of the edges of \(\mathbb{T}\) implies that the restriction of \(\Xi\) to \(\operatorname{Surf}(\Delta)\) is **const**-Lipschitz.
This uniform Lipschitz bound implies that \(\Xi|_{\operatorname{Surf}(\Delta_{0})}\) belongs to one of finitely many homotopy classes and that the tracks of the homotopy to any chosen representative of the correct homotopy class are bounded by **const**. Choosing the representatives to be tight maps with respect to some hyperbolic structure on \(\Delta_{0}\) we get:
**Fact.**_There are finitely many hyperbolic structures \(\Delta_{1},\ldots,\Delta_{r}\) on \(\Delta\) and finitely many tight maps \(f_{1},\ldots,f_{r}:\Delta_{i}\to\Sigma\), such that for any \(\Theta:S\to\Sigma\) and \(\sigma:S_{0}\to S\) as in (*) there are \(i\in\{1,\ldots,r\}\) and a tight map \(\Xi:\Delta_{i}\to\Sigma\) with \(\Xi|_{\operatorname{Surf}(\Delta_{i})}=f_{i}|_{\operatorname{Surf}(\Delta_{i})}\) and such that \(\Theta\circ\sigma:S_{0}\to\Sigma\) is homotopic to \(\Xi\circ\pi:S_{0}\to\Sigma\) by a homotopy whose tracks have at most length **const**.
Continuing with the same notation, note that the bound on the tracks of the homotopy between \(\Theta\circ\sigma\) and \(\Xi\circ\pi\) means that when we compare the length of the geodesic \(\gamma\) in \(S\) with that of the curve \(\eta=(\pi\circ\sigma^{-1})(\gamma)\) in the hyperbolic wired surface \(\Delta_{i}\) then there is at most an increase by an additive amount every time \(\gamma\) crosses a simplex of the wired surface. This means that lengths
* increase at most by an additive amount every time we cross a component of the thin part, and
* increase by a multiplicative amount while we are in the thick part.
Said in other words: there is some \(R\) with
\[\ell_{\Sigma}(\Xi(\eta\cap\operatorname{Surf}(\Delta))) \leqslant R\cdot\ell_{S}(\gamma\cap S^{\geqslant\varepsilon_{0}})\] \[\ell_{\Sigma}(\Xi(\eta\setminus\operatorname{Surf}(\Delta))) \leqslant\sum_{\kappa\in\pi_{0}(\gamma\cap S^{\leqslant\varepsilon_{0 }})}(\ell_{S}(\kappa)+R)\]
Recalling that the wires \(I\) of \(\Delta_{0}\) correspond to the components of the thin part \(S^{\leqslant\varepsilon_{0}}\) and that the weight \(n_{I}(\eta)\) is nothing other than the number of times that \(\eta\) crosses the wire \(I\), we get from the last two inequalities that
\[\frac{1}{R}\cdot\ell_{\Xi}(\eta\cap\operatorname{Surf}(\Delta))+\sum_{I\in \operatorname{\mathbf{wire}}(\Delta_{0})}n_{I}(\eta)\cdot\max\left\{\ell_{\Xi} (I)-R,0\right\}\leqslant\ell_{S}(\gamma) \tag{6.5}\]
where the "max" arises because a length is always non-negative. Anyways, note that with the notation introduced in (6.2) we can rewrite (6.5) as
\[\ell_{\Xi}^{\frac{1}{R}}(\eta)\leqslant\ell_{S}(\gamma)\]
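For the record, here is one way to see how (6.5) follows from the last two displayed inequalities. The thick part contributes

\[\frac{1}{R}\cdot\ell_{\Xi}(\eta\cap\operatorname{Surf}(\Delta))\leqslant\ell_{S}(\gamma\cap S^{\geqslant\varepsilon_{0}}),\]

and for each component \(\kappa\) of \(\gamma\cap S^{\leqslant\varepsilon_{0}}\) crossing the thin component corresponding to a wire \(I\) we get, up to the choice of \(R\), that \(\ell_{S}(\kappa)\geqslant\ell_{\Xi}(I)-R\), while trivially \(\ell_{S}(\kappa)\geqslant 0\), so that \(\ell_{S}(\kappa)\geqslant\max\{\ell_{\Xi}(I)-R,0\}\). Since exactly \(n_{I}(\eta)\) of these components cross the wire \(I\), summing over all of them, adding the thick part contribution, and using \(\ell_{S}(\gamma)=\ell_{S}(\gamma\cap S^{\geqslant\varepsilon_{0}})+\ell_{S}(\gamma\cap S^{\leqslant\varepsilon_{0}})\) yields (6.5).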
Note also that from (6.4) we get that \(n_{I}(\eta)\geqslant 2\) for all \(I\in\operatorname{\mathbf{wire}}(\Delta_{0})\). This means that using the notation of Lemma 6.1 we can restate the assumptions in Proposition 6.2 (1) as follows: \(\min=2\) and this value is achieved \(d\geqslant 1\) times. Lemma 6.1 implies thus that there are at most \(\operatorname{\mathbf{const}}\cdot L^{d-1}\cdot e^{\frac{L}{2}}\) homotopy classes of pairs \((\Xi,\eta)\) arising from pseudo-doubles \((\Theta:S\to\Sigma,\gamma)\) where \(\Theta\) is as in the Fact and where \(\ell_{S}(\gamma)\leqslant L\). This implies a fortiori that we have at most that many choices for the homotopy class of \(\Xi\). Since the homotopy class of \(\Xi\) determines that of \(\Theta\), we are done with the proof of (1). The proof of (2) is pretty much identical; the only difference is that now \(\min\geqslant 4\). Plugging this into the argument above we get the bound \(\operatorname{\mathbf{const}}\cdot L^{k-1}\cdot e^{\frac{L}{4}}\) for some \(k\). This is evidently a stronger bound than \(\operatorname{\mathbf{const}}\cdot e^{\frac{L}{3}}\), and we are done.
### The fruits of our labor
After all this work we are now ready to prove Theorem 1.4.
Proof of Theorem 1.4.: Suppose that \(\gamma\) is a closed geodesic with length \(\ell_{\Sigma}(\gamma)\leqslant L\) and such that there are two non-homotopic minimal genus fillings \(\beta_{1}:S_{1}\to\Sigma\) and \(\beta_{2}:S_{2}\to\Sigma\) with \(\beta_{i}(\partial S_{i})=\gamma\). From Proposition 5.4 we get that, without loss of generality we might assume that both \(\beta_{1}:S_{1}\to\Sigma\) and \(\beta_{2}:S_{2}\to\Sigma\) are hyperbolic fillings.
Let then \(S=S_{1}\cup_{\partial S_{1}=\partial S_{2}}S_{2}\) be the hyperbolic surface obtained by gluing both surfaces \(S_{1}\) and \(S_{2}\) along the boundary in such a way that there is a pleated surface \(\Theta:S\to\Sigma\) with \(\Theta|_{S_{i}}=\beta_{i}\). Note once again that the map \(\Theta\) maps the crease \(\hat{\gamma}=\partial S_{1}=\partial S_{2}\) geodesically to \(\gamma\). Moreover, Lemma 5.2 implies that the restriction of \(\Theta\) to \(S\setminus\hat{\gamma}\) is geometrically incompressible. Taken together, all of this means that the pair \((\Theta:S\to\Sigma,\hat{\gamma})\) is a pseudo-double.
Now, Lemma 5.6 implies, together with the assumption that \(\beta_{1}\) and \(\beta_{2}\) are not homotopic, that the \(\varepsilon_{0}\)-thin part of \(S\) has at most \(6g-4\) connected components which are traversed twice by the crease \(\hat{\gamma}\). We thus get from Proposition 6.2 that there are at most \(\operatorname{\mathbf{const}}\cdot L^{6g-5}\cdot e^{\frac{L}{2}}\) choices for the homotopy class of \((\Theta:S\to\Sigma,\hat{\gamma})\). Since the geodesic \(\gamma\) is determined by the homotopy class of \(\Theta(\hat{\gamma})\), we have proved that there are at most \(\operatorname{\mathbf{const}}\cdot L^{6g-5}\cdot e^{\frac{L}{2}}\) choices for \(\gamma\), as we had claimed.
## 7. Proof of the main theorem
In this section we prove Theorem 1.1 from the introduction.
**Theorem 1.1**.: _Let \(\Sigma\) be a closed, connected, and oriented hyperbolic surface and for \(g\geqslant 1\) and \(L>0\) let \(\mathbf{B}_{g}(L)\) be as in (1.3). We have_
\[|\mathbf{B}_{g}(L)|\sim\frac{2}{12^{g}\cdot g!\cdot(3g-2)!\cdot\operatorname{ vol}(T^{1}\Sigma)^{2g-1}}\cdot L^{6g-4}\cdot e^{\frac{L}{2}}\]
_as \(L\to\infty\)._
Before we can even explain the idea of the proof of Theorem 1.1 we need to recall what fat graphs are, and a few of their properties:
**Fat graphs.** A _fat graph_ \(X\) is a graph endowed with a cyclic ordering of the set \(\mathbf{half}_{v}\) of half-edges incident to \(v\), for each vertex \(v\)--fat graphs are also sometimes called _ribbon graphs_. Every fat graph is endowed with a canonically built thickening \(\mathbf{neigh}(X)\), the _thickening of_ \(X\). For the sake of concreteness let us discuss this in the particular case that \(X\) is trivalent. Well, we start by taking an oriented filled-in hexagon \(G_{v}\) for every vertex \(v\), see Figure 3. If we label by \(a,b,c\) the three elements in \(\mathbf{half}_{v}\), given in the correct cyclic order, then we label the edges of \(\partial G_{v}\) by \(a,ab,b,bc,c,ca\), also given in the correct cyclic order. Now, for every edge \(e\in\mathbf{edge}(X)\) let \(v_{1},v_{2}\in\mathbf{vert}(X)\) be the two (possibly identical) vertices at which the two half-edges \(\vec{e}_{1}\in\mathbf{half}_{v_{1}}\) and \(\vec{e}_{2}\in\mathbf{half}_{v_{2}}\) corresponding to \(e\) are based, and identify in an orientation reversing way the \(\vec{e}_{1}\)-edge of \(\partial G_{v_{1}}\) with the \(\vec{e}_{2}\)-edge of \(\partial G_{v_{2}}\). Proceeding like this for all edges we end up with the _thickening_ \(\mathbf{neigh}(X)\) of \(X\).
**Definition**.: _A trivalent fat graph \(X\) has \(\operatorname{genus}\,g\) if \(\mathbf{neigh}(X)\) is homeomorphic to a surface of genus \(g\) with one boundary component._
One of the quantities that appear on the right side of Theorem 1.1 is the number of genus \(g\) fat graphs, or rather, that number weighted by the number of automorphisms of each such fat graph; here a map \(F:X\to X^{\prime}\) between two fat graphs is a _fat graph homeomorphism_ if it is a homeomorphism between the underlying graphs which sends one fat structure to the other one.

Figure 3. Constructing the thickening of \(X\). Pictured are two oriented hexagons corresponding to two vertices connected by an edge. The lighter lines indicate the gluing of those two hexagons.

Anyways, what is really lucky for us is that Bacher and Vdovina [1] have computed that
\[\sum\frac{1}{|\operatorname{Aut}(X)|}=\frac{2}{12^{g}}\cdot\frac{(6g-5)!}{g! \cdot(3g-3)!}\]
where the sum takes place over all homeomorphism classes of genus \(g\) fat graphs.
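Just to get a feeling for the numbers: plugging \(g=1\) and \(g=2\) into the right-hand side gives

\[\frac{2}{12}\cdot\frac{1!}{1!\cdot 0!}=\frac{1}{6}\qquad\text{and}\qquad\frac{2}{144}\cdot\frac{7!}{2!\cdot 3!}=\frac{35}{6},\]

respectively.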
_Remark_.: Bacher and Vdovina's result is phrased in terms of \(1\)-vertex triangulations of the closed surface of genus \(g\) up to orientation preserving homeomorphism. Let us explain briefly how one goes from such triangulations to genus \(g\) fat graphs and back. The dual graph of a triangulation of a surface is a trivalent fat graph--its thickening is the surface minus the vertices of the triangulation. It follows that if the triangulation has a single vertex and the surface is closed of genus \(g\) then the fat graph has genus \(g\). Conversely, the thickening of a fat graph is equipped with a natural arc system (one arc dual to each edge). When collapsing each boundary component of the thickening to a point one gets a closed surface together with a triangulation with as many vertices as connected components of the boundary. It follows that if \(X\) is a genus \(g\) fat graph then one gets a \(1\)-vertex triangulation of a genus \(g\) surface.
Let \(X\) now be a fat graph and, for the sake of concreteness assume that it has genus \(g\). By construction we have a canonical embedding of \(X\) into \(\operatorname{\mathbf{neigh}}(X)\) whose image is a spine. In particular there is a retraction
\[\operatorname{spine}:\operatorname{\mathbf{neigh}}(X)\to X\]
such that the pre-image of every vertex is a tripod and the pre-image of every point in the interior of an edge of \(X\) is a segment. The image of \(\partial\operatorname{\mathbf{neigh}}(X)\) under spine runs twice over every edge of \(X\). We will refer to this parametrized curve in \(X\) as \(\partial X\).
_Remark_.: Note that reversing the orientation of \(X\), that is reversing the cyclic order at each vertex, has the effect of reversing the orientation of \(\partial X\).
### The map
We will reduce the proof of Theorem 1.1 to the fact that we know how to count critical realizations of graphs, that is to Theorem 1.3. Let us explain the basic idea. For given \(g\) consider the set
\[\mathbf{X}_{g}=\left\{(X,\phi)\,\middle|\,\begin{array}{c}X\text{ is a fat graph of genus }g\text{ and }\\ \phi:X\to\Sigma\text{ is a critical realization }\\ \text{ of the underlying graph}\end{array}\right\}\Big/\sim\]

where two pairs \((X,\phi)\) and \((X^{\prime},\phi^{\prime})\) are declared equivalent if there is a homeomorphism \(\sigma:X\to X^{\prime}\) with \(\phi^{\prime}\circ\sigma\) homotopic to \(\phi\); the only difference with the notion of equivalence of realizations used so far is that now the homeomorphism \(\sigma\) in the definition of equivalence definitively has to preserve the fat structure.
Note that if \((X,\phi)\) is equivalent to \((X^{\prime},\phi^{\prime})\) then the curves \(\phi(\partial X)\) and \(\phi^{\prime}(\partial X^{\prime})\) are freely homotopic to each other. In particular we have a well-defined map
\[\Lambda:\mathbf{X}_{g}\to\mathbf{C},\ \ (X,\phi)\mapsto\text{ geodesic homotopic to }\phi(\partial X) \tag{7.1}\]
where \(\mathbf{C}\) is, as it was earlier, the collection of all oriented geodesics in \(\Sigma\).
The basic idea of the proof of Theorem 1.1 is that the map (7.1) has the following informally stated properties:
1. \(\Lambda\) is basically injective with image basically contained in \(\mathbf{B}_{g}\),
2. \(\Lambda\) is basically surjective onto \(\mathbf{B}_{g}\), and
3. generically, the geodesic \(\Lambda(X,\phi)\) has length almost exactly equal to \(2\cdot\ell(\phi)-C\) for some explicit constant \(C\).
Let us start by clarifying the final point. Suppose that \((X,\phi)\in\mathbf{X}_{g}\) is such that \(\phi\) is \(\ell_{0}\)-long for some large \(\ell_{0}\). Well, since the curve \(\partial X\) runs exactly twice over each edge of \(X\) we get that its image \(\phi(\partial X)\) consists of \(2\cdot|\operatorname{\mathbf{edge}}(X)|\) geodesic segments of length at least \(\ell_{0}\) and with \(3\cdot|\operatorname{\mathbf{vert}}(X)|=-6\cdot\chi(X)\) corners where it makes angle equal to \(\frac{2\pi}{3}\). We get now from a standard hyperbolic geometry computation (or, if you so wish, from a limiting argument) that when \(\ell_{0}\) is large then, up to a small error depending on \(\ell_{0}\), when we pull tight \(\phi(\partial X)\) to get \(\Lambda(X,\phi)\) we save \(\log\frac{4}{3}\) at each one of those corners. Taking into account that \(\chi(X)=1-2g\) when \(X\) has genus \(g\), this is what we have proved:
**Lemma 7.1**.: _For every \(\delta>0\) there is \(\ell_{\delta}\) with_
\[\left|\ell_{\Sigma}(\Lambda(X,\phi))-\left(2\ell_{\Sigma}(\phi)-6\cdot(2g-1) \cdot\log\frac{4}{3}\right)\right|\leqslant\delta\]
_for every \((X,\phi)\in\mathbf{X}_{g}\) such that \(\phi\) is \(\ell_{\delta}\)-long. _
_Remark_.: Where does \(\log\frac{4}{3}\) come from? Well, if \(\Delta\subset\mathbb{H}^{2}\) is an ideal triangle with vertices \(\theta_{1},\theta_{2}\) and \(\theta_{3}\) and center \(p\), and if \(p^{\prime}\) is the projection of \(p\) to the side, say \((\theta_{1},\theta_{2})\), then \(\log\frac{2}{\sqrt{3}}\) is the difference between the values at \(p\) and \(p^{\prime}\) of the Busemann function centered at \(\theta_{1}\), and every time we pass by a vertex we basically save twice that amount (see Figure 4).
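In other words, every time the curve passes a corner we save, in the limit, twice the quantity appearing in the remark:

\[2\cdot\log\frac{2}{\sqrt{3}}=\log\frac{4}{3},\]

and since \(\partial X\) runs over \(3\cdot|\operatorname{\mathbf{vert}}(X)|=6\cdot(2g-1)\) corners, these savings add up to the constant \(6\cdot(2g-1)\cdot\log\frac{4}{3}\) of Lemma 7.1.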
Our next goal is to prove that the map \(\Lambda\) is basically bijective onto \(\mathbf{B}_{g}\), but we should first formalize what we mean by "basically". Well, suppose that we have a set \(Z\) consisting of realizations, of curves, or of anything else whose elements \(\alpha\in Z\) have a length \(\ell_{\Sigma}(\alpha)\) that can be measured, and set \(Z(L)=\{\alpha\in Z\text{ with }\ell_{\Sigma}(\alpha)\leqslant L\}\). We will say that a subset
\[W\subset Z\text{ is {negligible} in }Z\text{ if }\limsup_{L\to\infty}\frac{|W(L)|}{|Z(L)|}=0 \tag{7.2}\]
If the ambient set \(Z\) is understood from the context, then we might just say that \(W\) is _negligible_. The complement of a negligible set is said to be _generic_
and the elements of a generic set are themselves _generic_. For example we get from Lemma 4.3 and Lemma 7.1 that for all \(\delta\) we have that
\[\left|\ell_{\Sigma}(\Lambda(X,\phi))-\left(2\ell_{\Sigma}(\phi)-2\cdot(6g-3) \cdot\log\frac{4}{3}\right)\right|\leqslant\delta\]
for \((X,\phi)\in\mathbf{X}_{g}\) generic.
**Basic bijectivity of \(\Lambda\).** Above we used the word "basically" as meaning that something was true up to negligible sets. Let us start by proving that the map \(\Lambda\) is basically injective and that its image is contained in the set \(\mathbf{B}_{g}\) of closed geodesics of genus \(g\).
**Lemma 7.2**.: _There is a generic subset \(W\subset\mathbf{X}_{g}\) such that the restriction of \(\Lambda\) to \(W\) is injective and that its image is contained in \(\mathbf{B}_{g}\)._
Proof.: Let \(\ell_{0}\) and \(C\) be such that for every \((X,\phi)\in\mathbf{X}_{g}\) so that \(\phi\) is \(\ell_{0}\)-long we have
\[2\cdot\ell_{\Sigma}(\phi)-C\leqslant\ell_{\Sigma}(\Lambda(X,\phi))\leqslant 2 \ell_{\Sigma}(\phi)\]
and let \(\mathbf{X}_{g,\ell_{0}}\) be the set of those pairs. We get from Lemma 4.3 that \(\mathbf{X}_{g,\ell_{0}}\) is generic in \(\mathbf{X}_{g}\). It follows hence from Theorem 1.3 that
\[|\{(X,\phi)\in\mathbf{X}_{g,\ell_{0}}\text{ with }\ell_{\Sigma}(\Lambda(X, \phi))\leqslant L\}|\geqslant\mathbf{const}\cdot L^{6g-4}\cdot e^{\frac{L}{2}} \tag{7.3}\]
Note now that each element \((X,\phi)\) of \(\mathbf{X}_{g}\), and thus of \(\mathbf{X}_{g,\ell_{0}}\), determines not only the curve \(\Lambda(X,\phi)\) but also a homotopy class of fillings for this curve, namely \(\phi\circ\text{spine : }\mathbf{neigh}(X)\to\Sigma\). Let now \(Z\subset\mathbf{X}_{g,\ell_{0}}\) be the set of pairs \((X,\phi)\) so that the filling \(\phi\circ\text{spine : }\mathbf{neigh}(X)\to\Sigma\) is not unique in the sense that the curve \(\Lambda(X,\phi)\) admits another non-homotopic genus \(g\) filling and let
\[W=\mathbf{X}_{g,\ell_{0}}\setminus Z\]
be its complement. From Theorem 1.4 we get that \(Z\) consists of at most \(\mathbf{const}\cdot L^{6g-5}\cdot e^{\frac{L}{2}}\) many elements and hence that \(W\) is generic.
**Claim**.: _If \(\ell_{0}\) is over some threshold we have that \(\Lambda(W)\subset\mathbf{B}_{g}\)._
Figure 4: The ideal triangle \(\Delta\) with vertices \(\theta_{1},\theta_{2},\theta_{3}=\infty\) and center \(p\). Each one of the legs of the bold printed tripod has length \(\log\frac{2}{\sqrt{3}}\).
Proof.: First we have by construction that the curve \(\Lambda(X,\phi)\) admits the genus \(g\) filling \(\phi\circ\operatorname{spine}_{X}:\operatorname{\mathbf{neigh}}(X)\to\Sigma\). Suppose that it admits a smaller genus filling. Then, adding handles and mapping them to points we get that \(\Lambda(X,\phi)\) admits a non-\(\pi_{1}\)-injective genus \(g\) filling. Since on the other hand the filling \(\phi\circ\operatorname{spine}_{X}:\operatorname{\mathbf{neigh}}(X)\to\Sigma\) is \(\pi_{1}\)-injective as long as \(\ell_{0}\) is over some threshold, we get that \(\Lambda(X,\phi)\) admits two non-homotopic genus \(g\) fillings, contradicting the assumption that \((X,\phi)\in W\).
It remains to prove that the restriction of \(\Lambda\) to the generic set \(W\) is injective.
Well, suppose that we have \((X,\phi),(X^{\prime},\phi^{\prime})\in W\) with \(\Lambda(X,\phi)=\Lambda(X^{\prime},\phi^{\prime})\). Since \((X,\phi)\) and \((X^{\prime},\phi^{\prime})\) belong to \(W\) we know that the two fillings
\[\phi\circ\operatorname{spine}_{X}:\operatorname{\mathbf{neigh}}(X)\to\Sigma \text{ and }\phi^{\prime}\circ\operatorname{spine}_{X^{\prime}}:\operatorname{ \mathbf{neigh}}(X^{\prime})\to\Sigma\]
are homotopic. Recall that this means that there is a homeomorphism
\[\sigma:\operatorname{\mathbf{neigh}}(X)\to\operatorname{\mathbf{neigh}}(X^{ \prime})\]
with \(\phi^{\prime}\circ\operatorname{spine}_{X^{\prime}}\circ\sigma\) homotopic to \(\phi\circ\operatorname{spine}_{X}\). Since \(X\) and \(X^{\prime}\) are spines of \(\operatorname{\mathbf{neigh}}(X)\) and \(\operatorname{\mathbf{neigh}}(X^{\prime})\) we deduce that there is a homotopy equivalence \(\bar{\sigma}:X\to X^{\prime}\) such that \(\phi^{\prime}\circ\bar{\sigma}\) is homotopic to \(\phi\). Now, since the lengths of both \(\phi(X)\) and \(\phi^{\prime}(X^{\prime})\) are, up to a constant, basically half the length of \(\Lambda(X,\phi)=\Lambda(X^{\prime},\phi^{\prime})\) we see that \(\phi\) and \(\phi^{\prime}\) satisfy the conditions in Proposition 2.3. It thus follows that there is a homeomorphism \(F:X^{\prime}\to X\) mapping edges at constant velocity, with \(F\circ\bar{\sigma}\) homotopic to the identity, and with \(\phi\circ F\) homotopic to \(\phi^{\prime}\). Now, both
\[\phi\circ F,\phi^{\prime}:X\to\Sigma\]
are critical realizations and both are homotopic to each other. We get then from (1) in Lemma 2.2 that \(\phi^{\prime}=\phi\circ F\). To conclude, note that since \(F\) is a homotopy inverse of \(\bar{\sigma}\) and since \(\bar{\sigma}\) is induced by the homeomorphism \(\sigma:\operatorname{\mathbf{neigh}}(X^{\prime})\to\operatorname{\mathbf{neigh }}(X)\) we get that \(F\) is a fat graph homeomorphism. This proves that \((X,\phi)\) and \((X^{\prime},\phi^{\prime})\) are equivalent, and hence that the restriction of \(\Lambda\) to \(W\) is injective. We are done with Lemma 7.2.
_Remark_.: Note that (7.3) and Lemma 7.2 imply that
\[|\mathbf{B}_{g}(L)|>\operatorname{\mathbf{const}}\cdot L^{6g-4}\cdot e^{\frac{ L}{2}} \tag{7.4}\]
for all \(L\) large enough.
Our next goal is to show that the image of \(\Lambda\) contains a large subset of \(\mathbf{B}_{g}\).
**Lemma 7.3**.: _There is a generic subset of \(\mathbf{B}_{g}\) which is contained in the image of \(\Lambda\)._
Proof.: Let \(Z\subset\mathbf{B}_{g}\) be the set of those geodesics which admit a hyperbolic genus \(g\) filling \(\beta:S\to\Sigma\) such that the \(\varepsilon_{0}\)-thin part of the double \(DS\) of \(S\) has at most \(6g-4\) connected components \(U\) with \(\iota(U,\partial S)=2\). It
follows from Proposition 6.2 that \(Z\) has at most \(\mathbf{const}\cdot L^{6g-5}\cdot e^{\frac{L}{2}}\) elements with length \(\leqslant L\). It follows from (7.4) that \(Z\) is negligible and hence that its complement \(W=\mathbf{B}_{g}\setminus Z\) is generic. We claim that \(W\) is contained in the image of \(\Lambda\), at least if \(\varepsilon_{0}\) is chosen small enough.
Well, each \(\gamma\in W\) admits a hyperbolic filling \(\beta:S\to\Sigma\) such that the \(\varepsilon_{0}\)-thin part of the double \(DS\) has at least \(6g-3\) connected components \(U\) with \(\iota(\partial S,U)=2\). Since \(DS\) has genus \(2g\) we have that \(6g-3\) is actually the maximal number of connected components that its thin part can have. It follows that the double \(DS\) of \(S\) admits a pants decomposition consisting of very short curves and that moreover \(\partial S\) cuts each one of them exactly twice. It follows that \(S\) has \(6g-3\) orthogeodesics cutting \(S\) into a union of \(4g-2\) right-angle hexagons. These hexagons have \(3\) alternating sides which are extremely short. It follows that each one of the hexagons contains a pretty large compact set which is almost isometric to a large neighborhood of the center of an ideal triangle in \(\mathbb{H}^{2}\). Declare the center of the hexagon to be the image of the center of the ideal triangle by this almost isometric map. Now, we can represent the dual graph of the decomposition of \(S\) by our short orthogeodesic segments as a geodesic subgraph \(X\) of \(S\) with vertices in the centers of the hexagons, see Figure 5.
The graph \(X\) is a spine of \(S\) and hence it inherits from \(S\) a fat graph structure with \(S\) as its neighborhood. Let now \(\psi:X\to\Sigma\) be the realization obtained from the restriction of \(\beta\) to \(X\) by pulling the edges tight.
**Claim**.: _For all \(\delta\) there is \(\varepsilon\) such that if \(\varepsilon_{0}<\varepsilon\) then the realization \(\psi:X\to\Sigma\) is \(\delta\)-critical._
Proof.: Suppose that the claim fails to be true. This means that for some \(\delta\) there is a sequence of counterexamples with \(\varepsilon_{0}\to 0\). Since \(\varepsilon_{0}\to 0\) we get that the images of the hexagons converge to a \(1\)-Lipschitz map of an ideal triangle into \(\Sigma\) which however maps the boundary geodesics to geodesics. Such a map is an isometric immersion of the triangle in question. Now, in an ideal triangle, the geodesic rays starting at the center and pointing into the cusps make angle \(\frac{2\pi}{3}\). This means that the angles in the approximating maps converge to \(\frac{2\pi}{3}\), contradicting our assumption that some of them were at least \(\delta\) off \(\frac{2\pi}{3}\). We have proved the claim.

Figure 5. Getting an almost critical realization out of a filling whose domain can be cut into hexagons by very short orthogeodesics.
It follows thus from the claim and Corollary 2.4 that, as long as \(\varepsilon_{0}\) is under some threshold, the realization \(\psi:X\to\Sigma\) is homotopic to a critical realization \(\phi:X\to\Sigma\). This implies that \((X,\phi)\) belongs to the domain of \(\Lambda\) and that \(\gamma=\Lambda(X,\phi)\). We have proved that the generic set \(W\subset\mathbf{B}_{g}\) is contained in the image of \(\Lambda\).
We are now ready to wrap all of this up.
**Proof of Theorem 1.1.** Combining Lemma 7.1, Lemma 7.2 and Lemma 7.3 we get that for all \(\delta>0\) there is a generic subset \(W\subset\mathbf{X}_{g}\) which is mapped injectively under \(\Lambda\) to a generic subset of \(\mathbf{B}_{g}\) in such a way that
\[|\ell_{\Sigma}(\Lambda(X,\phi))-2\ell_{\Sigma}(\phi)+2\kappa|\leqslant\delta\]
where
\[\kappa=-3\chi(X)\cdot\log\frac{4}{3}=\log\left(\left(\frac{4}{3}\right)^{6g-3 }\right) \tag{7.5}\]
It follows that for all \(\delta>0\) our set \(\mathbf{B}_{g}(L)\) has, for \(L\to\infty\), at least as many elements as \(\mathbf{X}_{g}(\frac{L}{2}+\kappa-\delta)\) and at most as many as \(\mathbf{X}_{g}(\frac{L}{2}+\kappa+\delta)\). In symbols this just means that
\[\left|\mathbf{X}_{g}\bigg{(}\frac{L}{2}+\kappa-\delta\bigg{)}\right|\preceq| \mathbf{B}_{g}(L)|\preceq\left|\mathbf{X}_{g}\bigg{(}\frac{L}{2}+\kappa+ \delta\bigg{)}\right| \tag{7.6}\]
for large \(L\). It thus remains to estimate the cardinality of \(\mathbf{X}_{g}(L)\).
**Lemma 7.4**.: _We have_
\[|\mathbf{X}_{g}(L)|\sim\frac{1}{12^{g}}\cdot\left(\frac{3}{2}\right)^{6g-3} \cdot\frac{1}{g!\cdot(3g-2)!\cdot\operatorname{vol}(T^{1}\Sigma)^{2g-1}}\cdot L ^{6g-4}\cdot e^{L}\]
_as \(L\to\infty\)._
Proof.: From Theorem 1.3 we get that every trivalent graph \(X\) has
\[|\mathbf{G}^{X}(L)|\sim\left(\frac{2}{3}\right)^{3\chi(X)}\cdot\frac{ \operatorname{vol}(T^{1}\Sigma)^{\chi(X)}}{(-3\chi(X)-1)!}\cdot L^{-3\chi(X)-1 }\cdot e^{L}\]
critical realizations of length at most \(L\) in \(\Sigma\). This implies that whenever \(X\) is a fat graph of genus \(g\) then asymptotically there are
\[\frac{1}{|\operatorname{Aut}(X)|}\left(\frac{2}{3}\right)^{3\chi(X)}\cdot \frac{\operatorname{vol}(T^{1}\Sigma)^{\chi(X)}}{(-3\chi(X)-1)!}\cdot L^{-3 \chi(X)-1}\cdot e^{L}\]
many elements in \(\mathbf{X}_{g}(L)\) represented by \((X,\phi)\) for some critical \(\phi:X\to\Sigma\) of length \(\ell(\phi)\leqslant L\). Adding over all possible types of genus \(g\) fat graphs we get that
\[|\mathbf{X}_{g}(L)|\sim\sum_{X}\frac{1}{|\operatorname{Aut}(X)|}\left(\frac{2}{ 3}\right)^{3\chi(X)}\cdot\frac{\operatorname{vol}(T^{1}\Sigma)^{\chi(X)}}{(-3 \chi(X)-1)!}\cdot L^{-3\chi(X)-1}\cdot e^{L}\]
From the Bacher-Vdovina [1] result mentioned earlier and taking into consideration that \(\chi(X)=1-2g\) we get
\[|\mathbf{X}_{g}(L)|\sim\frac{2}{12^{g}}\cdot\frac{(6g-5)!}{g!\cdot(3g-3)!}\cdot \left(\frac{3}{2}\right)^{6g-3}\cdot\frac{\operatorname{vol}(T^{1}\Sigma)^{1-2 g}}{(6g-4)!}\cdot L^{6g-4}\cdot e^{L}\]
The claim follows now from elementary algebra.
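For the reader's convenience, the elementary algebra in question is the identity

\[\frac{2}{12^{g}}\cdot\frac{(6g-5)!}{g!\cdot(3g-3)!\cdot(6g-4)!}=\frac{2}{12^{g}\cdot g!\cdot(3g-3)!\cdot(6g-4)}=\frac{1}{12^{g}\cdot g!\cdot(3g-2)!},\]

where we used that \((6g-4)!=(6g-4)\cdot(6g-5)!\) and that \(6g-4=2\cdot(3g-2)\).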
Now, from Lemma 7.4 we get that
\[\left|\mathbf{X}_{g}\left(\frac{L}{2}\right)\right|\sim\frac{1}{12^{g}}\cdot \left(\frac{3}{2}\right)^{6g-3}\cdot\frac{1}{g!\cdot(3g-2)!\cdot\operatorname {vol}(T^{1}\Sigma)^{2g-1}}\cdot\frac{L^{6g-4}}{2^{6g-4}}\cdot e^{\frac{L}{2}}.\]
Taking into account that
\[\left|\mathbf{X}_{g}\left(\frac{L}{2}+\kappa\right)\right|\sim\left|\mathbf{X }_{g}\left(\frac{L}{2}\right)\right|\cdot e^{\kappa}=\left|\mathbf{X}_{g} \left(\frac{L}{2}\right)\right|\cdot\left(\frac{4}{3}\right)^{6g-3}\]
we get that
\[\left|\mathbf{X}_{g}\left(\frac{L}{2}+\kappa\right)\right|\sim\frac{2}{12^{g} \cdot g!\cdot(3g-2)!\cdot\operatorname{vol}(T^{1}\Sigma)^{2g-1}}\cdot L^{6g-4 }\cdot e^{\frac{L}{2}}\]
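Here the numerical factors combine as

\[\left(\frac{3}{2}\right)^{6g-3}\cdot\left(\frac{4}{3}\right)^{6g-3}\cdot\frac{1}{2^{6g-4}}=\frac{2^{6g-3}}{2^{6g-4}}=2,\]

which is where the \(2\) in the numerator comes from.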
It follows thus from (7.6) that
\[|\mathbf{B}_{g}(L)|\sim\frac{2}{12^{g}\cdot g!\cdot(3g-2)!\cdot\operatorname {vol}(T^{1}\Sigma)^{2g-1}}\cdot L^{6g-4}\cdot e^{\frac{L}{2}}\]
as we wanted to show. This concludes the proof of Theorem 1.1.
## 8. Curves bounding immersed surfaces
We now turn our attention to Theorem 1.2:
**Theorem 1.2**.: _Let \(\Sigma\) be a closed, connected, and oriented hyperbolic surface and for \(g\geqslant 1\) and \(L>0\) let \(\mathbf{B}_{g}(L)\) be as in (1.3). We have_
\[|\{\gamma\in\mathbf{B}_{g}(L)\text{ bounds immersed surface of genus }g\}|\sim\frac{1}{2^{4g-2}}|\mathbf{B}_{g}(L)|\]
_as \(L\to\infty\)._
Denote by
\[\mathbf{B}_{g}^{\text{imm}}(L)=\{\gamma\in\mathbf{B}_{g}(L)\text{ bounds immersed surface of genus }g\}\]
the set we want to count. From the proof of Theorem 1.1 we get that
\[|\mathbf{B}_{g}^{\text{imm}}(L)|\sim\left|\mathbf{X}_{g}^{\text{imm}}\left( \frac{L}{2}+\kappa\right)\right|\]
where \(\kappa\) is as in (7.5) and where
\[\mathbf{X}_{g}^{\mathrm{imm}}(L)\stackrel{{\mathrm{def}}}{{=}}\left\{ (X,\phi)\in\mathbf{X}_{g}(L)\left|\begin{array}{l}\mbox{the realization $\phi:X\to\Sigma$ extends}\\ \mbox{to an immersion of the thickening}\\ \mathbf{neigh}(X)\mbox{ of }X\end{array}\right.\right\}.\]
Equivalently, \((X,\phi)\in\mathbf{X}_{g}\) belongs to \(\mathbf{X}_{g}^{\mathrm{imm}}\) if the cyclic ordering at each vertex of \(X\) agrees with the one pulled back from \(\Sigma\) via \(\phi\), that is the one coming from the orientation of \(\Sigma\). We can refer to such realizations of a fat graph as _fat realizations_. With this language we have that
\[\mathbf{X}_{g}^{\mathrm{imm}}(L)=\left\{(X,\phi)\in\mathbf{X}_{g}(L)\,|\,\phi \mbox{ is a fat realization of the fat graph }X\right\}.\]
For a given trivalent fat graph \(X\) let
\[\mathbf{G}_{\mathrm{imm}}^{X}(L)=\left\{\phi\in\mathbf{G}^{X}(L)\,|\,\phi \mbox{ is a fat realization of }X\right\}\]
be the set of fat critical realizations of \(X\) of total length at most \(L\). Note that
\[|\mathbf{X}_{g}^{\mathrm{imm}}(L)|=\sum_{X}\frac{1}{|\operatorname{Aut}(X)|}| \mathbf{G}_{\mathrm{imm}}^{X}(L)|.\]
What is still missing to be able to run the proof as in that of Theorem 1.1 is a version of Theorem 1.3 for \(\mathbf{G}_{\mathrm{imm}}^{X}\). Well, here it is:
**Theorem 8.1**.: _Let \(\Sigma\) be a closed, connected, and oriented hyperbolic surface. For every connected trivalent fat graph \(X\) we have_
\[|\mathbf{G}_{\mathrm{imm}}^{X}(L)|\sim 2^{2\chi(X)}\cdot\left(\frac{2}{3} \right)^{3\chi(X)}\cdot\frac{\operatorname{vol}(T^{1}\Sigma)^{\chi(X)}}{(-3 \chi(X)-1)!}\cdot L^{-3\chi(X)-1}\cdot e^{L}\]
_as \(L\to\infty\)._
Assuming Theorem 8.1 for the moment we get from \(\chi(X)=1-2g\) and from the Bacher-Vdovina theorem that
\[|\mathbf{X}_{g}^{\mathrm{imm}}(L)|\sim\frac{1}{2^{4g-2}}\cdot\frac{2}{12^{g}} \cdot\frac{(6g-5)!}{g!\cdot(3g-3)!}\cdot\left(\frac{3}{2}\right)^{6g-3}\cdot \frac{\operatorname{vol}(T^{1}\Sigma)^{1-2g}}{(6g-4)!}\cdot L^{6g-4}\cdot e^{L}\]
from where we get, as in the proof of Theorem 1.1, that
\[|\mathbf{B}_{g}^{\mathrm{imm}}(L)|\sim\frac{1}{2^{4g-2}}\frac{2}{12^{g}\cdot g!\cdot(3g-2)!\cdot\operatorname{vol}(T^{1}\Sigma)^{2g-1}}\cdot L^{6g-4}\cdot e ^{\frac{L}{2}}\]
The claim of Theorem 1.2 follows now from this statement combined with Theorem 1.1.
All that is left to do is to prove Theorem 8.1. Since it is basically identical to the proof of Theorem 1.3 we just point out the differences. The key is to obtain a fat graph version of Proposition 4.2. In a nutshell, the idea of the proof of this proposition was that
1. we could compute the volume \(\operatorname{vol}(\mathcal{G}_{\varepsilon-\mathrm{crit}}^{X}(\vec{L},h))\) of the set of \(\varepsilon\)-critical realizations of \(X\) whose edge lengths were in a box, and
2. we knew that every connected component contributes the same amount, and how much.
Let thus \(\mathcal{G}^{X}_{\varepsilon-\operatorname{crit}\,-\,\operatorname{imm}}(\vec{L},h) \subset\mathcal{G}^{X}_{\varepsilon-\operatorname{crit}}(\vec{L},h)\) be those \(\varepsilon\)-critical realizations in our box which preserve the fat structure. Recalling now that by Proposition 2.6 the connected components of \(\mathcal{G}^{X}_{\varepsilon-\operatorname{crit}}(\vec{L},h)\) have small diameter, we deduce that the induced fat structure is constant over each such connected component. It follows that \(\mathcal{G}^{X}_{\varepsilon-\operatorname{crit}\,-\,\operatorname{imm}}( \vec{L},h)\) is a union of connected components of \(\mathcal{G}^{X}_{\varepsilon-\operatorname{crit}}(\vec{L},h)\). In particular, to be able to obtain a fat graph version of Proposition 4.2 we just need to be able to compute \(\operatorname{vol}(\mathcal{G}^{X}_{\varepsilon-\operatorname{crit}\,- \,\operatorname{imm}}(\vec{L},h))\).
Now, in the proof of Proposition 4.2, the key ingredient of the computation of the volume of \(\mathcal{G}_{\varepsilon-\operatorname{crit}}(\vec{L},h)\) was Corollary 3.3--we remind the reader that the statement of the said corollary was that for any \(\vec{x}\in\Sigma^{\operatorname{\mathbf{vert}}\,X}\) we have
\[|\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}}(\vec{L},h)|\sim \varepsilon^{4|\chi(X)|}\cdot\left(\frac{2}{3}\right)^{2\chi(X)}\cdot\pi^{\chi( X)}\cdot\frac{(e^{h}-1)^{-3\chi(X)}\cdot e^{\|\vec{L}\|}}{\operatorname{vol}( \Sigma)^{-3\chi(X)}}\]
as \(\min_{e\in\mathbf{edge}(X)}L_{e}\to\infty\), where \(\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}}(\vec{L},h)\) is the set of \(\varepsilon\)-critical realizations \(\phi:X\to\Sigma\) mapping the vertex \(v\) to the point \(x_{v}=\phi(v)\). As we see, if we want to just copy line-by-line the computation of the volume of \(\mathcal{G}_{\varepsilon-\operatorname{crit}}(\vec{L},h)\) to get the volume of \(\mathcal{G}_{\varepsilon-\operatorname{crit}\,-\,\operatorname{imm}}(\vec{L},h)\) what we need to know is the number of elements in the set \(\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}\,-\,\operatorname{ imm}}(\vec{L},h)\) of \(\varepsilon\)-critical realizations \(\phi:X\to\Sigma\) mapping the vertex \(v\) to the point \(x_{v}=\phi(v)\) and preserving the fat structure.
Now, to obtain the number of elements in \(\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}}(\vec{L},h)\) what we did was to invoke Theorem 3.2 and compute the volume of the set
\[U^{X}_{\vec{x},\varepsilon-\operatorname{crit}}\subset\prod_{v\in\operatorname {\mathbf{vert}}(X)}\left(\bigoplus_{\vec{\varepsilon}\in\operatorname{ \mathbf{half}}_{v}(X)}T^{1}_{x_{v}}\Sigma\right)\]
of those tuples \((v_{\vec{e}})_{\vec{e}\in\operatorname{\mathbf{half}}(X)}\) with \(\angle(v_{\vec{e}_{1}},v_{\vec{e}_{2}})\in[\frac{2\pi}{3}-\varepsilon,\frac{2 \pi}{3}+\varepsilon]\) for all distinct \(\vec{e}_{1},\vec{e}_{2}\in\operatorname{\mathbf{half}}(X)\) incident to the same vertex. Accordingly, to compute the number of elements in \(\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}\,\operatorname{imm}}( \vec{L},h)\) we need to compute the volume of the set
\[U^{X}_{\vec{x},\varepsilon-\operatorname{crit}\,-\,\operatorname{imm}}\subset U ^{X}_{\vec{x},\varepsilon-\operatorname{crit}}\]
consisting of tuples such that for any vertex \(v\in\operatorname{\mathbf{vert}}(X)\) the cyclic order of the half-edges incident to \(v\) agrees with the one of the corresponding unit tangent vectors. Following the computation of \(\operatorname{vol}(U_{\vec{x},\varepsilon-\operatorname{crit}})\) we get that
\[\operatorname{vol}(U^{X}_{\vec{x},\varepsilon-\operatorname{crit}\,-\, \operatorname{imm}})=\frac{1}{2^{|\operatorname{\mathbf{vert}}(X)|}} \operatorname{vol}(U^{X}_{\vec{x},\varepsilon-\operatorname{crit}})=2^{2\chi( X)}\cdot\operatorname{vol}(U^{X}_{\vec{x},\varepsilon-\operatorname{crit}})\]
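Two elementary observations enter here. First, for a triple of pairwise distinct unit vectors in the oriented plane \(T_{x_{v}}\Sigma\) exactly one of the two cyclic orderings is the positively oriented one, and an orientation-reversing isometry of \(T_{x_{v}}\Sigma\) preserves the angle conditions defining \(U^{X}_{\vec{x},\varepsilon-\operatorname{crit}}\) while flipping that ordering; this is one way to see the factor \(\frac{1}{2}\) per vertex. Second, since \(X\) is trivalent we have

\[3\cdot|\mathbf{vert}(X)|=2\cdot|\mathbf{edge}(X)|\qquad\text{and hence}\qquad\chi(X)=|\mathbf{vert}(X)|-|\mathbf{edge}(X)|=-\tfrac{1}{2}\cdot|\mathbf{vert}(X)|,\]

so that indeed \(2^{-|\mathbf{vert}(X)|}=2^{2\chi(X)}\).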
As we just discussed, this implies that
\[|\mathbf{G}^{X}_{\vec{x},\varepsilon-\operatorname{crit}\,-\,\operatorname{ imm}}(\vec{L},h)|\sim 2^{2\chi(X)}\cdot|\mathbf{G}^{X}_{\vec{x},\varepsilon- \operatorname{crit}}(\vec{L},h)|\]
and hence that
\[\operatorname{vol}(\mathcal{G}^{X}_{\varepsilon-\operatorname{crit}\,-\, \operatorname{imm}}(\vec{L},h))\sim 2^{2\chi(X)}\cdot\operatorname{vol}(\mathcal{G}^{X}_{ \varepsilon-\operatorname{crit}}(\vec{L},h))\]
and thus that
\[|\mathbf{G}^{X}_{\mathrm{imm}}(L)|\sim 2^{2\chi(X)}\cdot|\mathbf{G}^{X}(L)|.\]
Theorem 8.1 follows now from Theorem 1.3.
## 9. Comments
In this section we discuss where and how much we use the assumption that \(\Sigma\) is a closed orientable surface.
First, do the results here apply if we replace \(\Sigma\) by a compact 2-dimensional orbifold \(\mathcal{O}=\Gamma\backslash\mathbb{H}^{2}\)?
The answer is yes for Theorem 1.3, with exactly the same proof. One should just do everything equivariantly. For example, a realization in \(\mathcal{O}\) of a graph \(X\) should be a map \(\tilde{\phi}:\tilde{X}\to\mathbb{H}^{2}\) from the universal cover of \(X\) to \(\mathbb{H}^{2}\) which is equivariant under a homomorphism \(\tilde{\phi}_{*}:\pi_{1}(X)\to\Gamma\), and where two such pairs \((\tilde{\phi},\tilde{\phi}_{*})\) and \((\tilde{\psi},\tilde{\psi}_{*})\) are identified if they differ by an element of \(\Gamma\). Once we rephrase the situation in those terms, everything extends in the obvious way. For example, the space \(\mathcal{G}^{X}\) of realizations of a graph in \(\mathcal{O}\) is now an orbifold: the map \(\mathcal{G}^{X}\to\mathcal{O}^{\mathbf{vert}\,X}\) sending each realization to the images of the vertices is a covering in the category of orbifolds.
On the other hand, to prove Theorem 1.1 one needs to be a little bit careful because in Section 5 and Section 6 we used repeatedly that every sufficiently short curve in \(\Sigma\) is homotopically trivial, and this is no longer true if we are working in \(\mathcal{O}\).
### What about allowing \(\Sigma\) to have cusps?
Here we again have problems with the discussion in Section 5 and Section 6, but this time it is much worse. In some sense, the results of Section 5 and Section 6 are just generalizations of Lemma 4.1, and this lemma fails in the presence of cusps: indeed, suppose that \(\Sigma\) is a once punctured torus and \(X\) is a graph with two vertices \(x\) and \(x^{\prime}\) (if you wish you can make \(X\) trivalent while essentially keeping the same reasoning as we will present, but doing so might obscure things slightly) and with 6 edges \(f_{1},f_{2},e_{1},e_{2},e_{3}\) and \(h\) such that
* the edges \(f_{1},f_{2}\) are incident on both ends to \(x\),
* the edges \(e_{1},e_{2},e_{3}\) are incidents on both ends to \(x^{\prime}\), and
* the edge \(h\) runs from \(x\) to \(x^{\prime}\).
Fix now a horospherical neighborhood of the cusp of \(\Sigma\), fix a point \(\mathbf{x}_{0}\) and let \(\mathbf{x}_{t}\) be the point at distance \(t\) from \(\mathbf{x}_{0}\) along the ray pointing directly into the cusp. Now, if we are given a vector \(\vec{L}=(F_{1},F_{2},E_{1},E_{2},E_{3},H)\) with positive real coefficients, consider realizations \(\phi:X\to\Sigma\) with \(\phi(x)=\mathbf{x}_{0}\), with \(\phi(x^{\prime})=\mathbf{x}_{H}\), and with \(\phi(h)\) equal to the segment of length \(H\) joining \(\mathbf{x}_{0}\) and \(\mathbf{x}_{H}\). Now, we have \(\mathbf{const}\,e^{F_{1}+F_{2}}\) choices for the images of \(f_{1}\) and \(f_{2}\) subject to the restriction that \(\phi(f_{i})\) is, for \(i=1,2\), a geodesic segment of length at most \(F_{i}\). Note also that the horospherical simple loop based at \(\mathbf{x}_{H}\) has length \(\mathbf{const}\cdot e^{-H}\) and that geodesic loops in the cusp, based at
\(\mathbf{x}_{H}\) and with length \(\ell\) are homotopic to horospherical segments of length at most \(\mathbf{const}\cdot e^{\frac{\ell}{2}}\). This implies that, if we want to map \(e_{i}\) to a loop in the cusp and of length at most \(E_{i}\), then we have at least \(\mathbf{const}\cdot e^{\frac{1}{2}E_{i}}\cdot e^{H}\) choices. Altogether we have at least
\[\mathbf{const}\cdot e^{F_{1}+F_{2}+\frac{1}{2}E_{1}+\frac{1}{2}E_{2}+\frac{1}{ 2}E_{3}+3H}\]
choices of (homotopy classes of) realizations of \(X\) into \(\Sigma\) with \(\ell(\phi(f_{1}))\leqslant F_{1}\), \(\ell(\phi(f_{2}))\leqslant F_{2}\), \(\ell(\phi(e_{1}))\leqslant E_{1}\), \(\ell(\phi(e_{2}))\leqslant E_{2}\), \(\ell(\phi(e_{3}))\leqslant E_{3}\), and \(\ell(\phi(h))\leqslant H\). In particular, if we set
\[\vec{L}=(F_{1},F_{2},E_{1},E_{2},E_{3},H)=(n,n,\frac{1}{2}n,\frac{1}{2}n, \frac{1}{2}n,\frac{5}{2}n)\]
we have at least \(\mathbf{const}\cdot e^{\frac{41}{4}n}\) such realizations, and this is a much larger number than \(\mathbf{const}\,e^{6n}=\mathbf{const}\,e^{\|\vec{L}\|}\). This proves that the analogue of Lemma 4.1 fails if \(\Sigma\) is not compact.
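For the record, the arithmetic behind the comparison just made is

\[F_{1}+F_{2}+\tfrac{1}{2}(E_{1}+E_{2}+E_{3})+3H=2n+\tfrac{3}{4}n+\tfrac{15}{2}n=\tfrac{41}{4}n\qquad\text{and}\qquad\|\vec{L}\|=2n+\tfrac{3}{2}n+\tfrac{5}{2}n=6n.\]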
In summary, if \(\Sigma\) has finite volume but is not compact, then we do not even know whether Theorem 1.3 holds, although we suspect that the answer is yes.
**Can \(\Sigma\) be non-orientable?** If \(\Sigma\) is a compact hyperbolic surface which however is non-orientable then we know that both Theorem 1.3 and Theorem 1.1 hold true: we never used orientability during their proofs. We did however in the proof of Theorem 1.2: unless \(\Sigma\) is oriented it makes little sense to speak about the induced fat graph structure. We do not know what happens with Theorem 1.2 when the ambient surface is not oriented.
**And what about higher dimensions?** If we replace \(\Sigma\) by a closed hyperbolic manifold of other dimension than \(2\), then Theorem 1.3 and Theorem 1.1 should still hold, and with proofs which, if not identical, keep the same spirit. It is however less clear whether there should be an interesting analogue of Theorem 1.2: at least if \(\dim\geqslant 5\), where every map of a surface can be deformed to an embedding.
## Appendix A The geometric prime number theorem
The argument we used to prove Theorem 1.3 can be used to recover Huber's geometric prime number theorem, and this is what we do here. Besides giving a simple proof of this theorem, it might help the reader understand the logic of the proof of Theorem 1.3.
**The Geometric Prime Number Theorem** (Huber).: _Let \(\Sigma\) be a closed, connected, orientable, hyperbolic surface and let \(\mathbf{C}(L)\) be the set of closed non-trivial oriented geodesics in \(\Sigma\) of length at most \(L\). We have_
\[|\mathbf{C}(L)|\sim\frac{e^{L}}{L}\]
_as \(L\to\infty\)._
_Remark_.: In Section 9 we discussed some of the difficulties one would face when extending the main results to the case that \(\Sigma\) is a finite volume surface or an orbifold. None of these problems really arise when proving the geometric prime number theorem, and indeed the proof we present here works with only minimal changes if \(\Sigma\) is replaced by an arbitrary finite area orbifold \(\Gamma\backslash\mathbb{H}^{2}\). We decided to just deal with the compact case to avoid hiding the structure of the argument.
The proof of Huber's theorem would be cleaner and nicer if all closed geodesics were primitive. Luckily, this is almost true. Indeed it follows for example from the work of Coornaert and Knieper [6] that there is some \(C>0\) with
(A.1) \[\frac{1}{C}\cdot\frac{e^{L}}{L}\leqslant|\mathbf{C}(L)|\leqslant C\cdot e^{L}\]
for all \(L>0\). We stress that the Coornaert-Knieper argument is pretty coarse: what they actually prove is a statement about groups acting isometrically, discretely and cocompactly on Gromov-hyperbolic spaces.
The point for us is that (A.1) implies that most geodesics of length at most \(L\) are primitive. Indeed, every non-primitive geodesic of length at most \(L\) is a multiple of a primitive geodesic of length at most \(\frac{1}{2}L\). On the other hand, if \(s_{0}\) is the systole of \(\Sigma\) then at most \(\frac{L}{s_{0}}\) geodesics of length at most \(L\) arise as multiples of any given geodesic. These two observations, together with the right side of (A.1), imply that there are at most \(\frac{L}{s_{0}}|\mathbf{C}(\frac{1}{2}L)|\leqslant\frac{C}{s_{0}}\cdot L\cdot e^{L/2}\) non-primitive geodesics of length at most \(L\). Taking the left side of (A.1) into consideration we deduce that, as we had claimed above, most geodesics are primitive.
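To make the last step explicit: combining these two bounds with both sides of (A.1) gives, for all large \(L\),

\[\frac{\#\{\text{non-primitive geodesics of length at most }L\}}{|\mathbf{C}(L)|}\leqslant\frac{\frac{C}{s_{0}}\cdot L\cdot e^{L/2}}{\frac{1}{C}\cdot\frac{e^{L}}{L}}=\frac{C^{2}}{s_{0}}\cdot L^{2}\cdot e^{-L/2}\longrightarrow 0,\]

so all but a vanishing proportion of the geodesics of length at most \(L\) are indeed primitive.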
**Lemma A.1**.: _Fix \(h\) and let \(\mathbf{C}(L,h)\) and \(\mathbf{P}(L,h)\) be, respectively, the sets of all geodesics and of all primitive geodesics of length in \([L,L+h]\). Then we have_
\[|\mathbf{C}(L,h)|\sim|\mathbf{P}(L,h)|\]
_as \(L\to\infty\). _
After this preparatory comment let us start the real business. Let \(\mathcal{L}\) be the space of all geodesic loops in \(\Sigma\) and consider the map \(\Pi:\mathcal{L}\to\Sigma\) mapping each loop to its base point. As was the case for more general graphs, the map \(\Pi\) is a covering, meaning that when we pull-back the hyperbolic metric using \(\Pi\) we can think of \(\mathcal{L}\) as being a hyperbolic surface. Note also that the set of connected components of \(\mathcal{L}\) agrees with the set of free homotopy classes of loops. It follows that for every closed geodesic \(\gamma\) we have a connected component \(\mathcal{L}^{\gamma}\).
Now, given \(\varepsilon>0\) small let \(\mathcal{L}_{\varepsilon}\) be the set of all geodesic loops with angle defect at most \(\varepsilon\), that is, the set of geodesic loops whose initial and terminal velocity vectors meet with unoriented angle in \([0,\varepsilon]\). For \(L>0\) let \(\mathcal{L}_{\varepsilon}(L,L+h)\) be the elements in \(\mathcal{L}_{\varepsilon}\) with length between \(L\) and \(L+h\). Accordingly, set \(\mathcal{L}_{\varepsilon}^{\gamma}(L,L+h)=\mathcal{L}^{\gamma}\cap\mathcal{L} _{\varepsilon}(L,L+h)\).
Let us establish some basic properties of \(\mathcal{L}_{\varepsilon}(L,L+h)\) and \(\mathcal{L}_{\varepsilon}^{\gamma}(L,L+h)\):
**Lemma A.2**.: _Fix \(h>0\) and \(\varepsilon>0\). We have that_
\[\operatorname{vol}(\mathcal{L}_{\varepsilon}(L,L+h))\sim\varepsilon\cdot(e^{L+ h}-e^{L})\text{ as }L\to\infty.\]
_Moreover, there is a function \(C(\varepsilon)\) with \(\lim_{\varepsilon\to 0}C(\varepsilon)=1\) such that for every sufficiently long closed geodesic \(\gamma\) we have that_
1. \(\mathcal{L}_{\varepsilon}^{\gamma}(L,L+h)=\varnothing\) _unless_ \(\ell(\gamma)\in[L-\varepsilon,L+h]\)_,_
2. \(\operatorname{vol}(\mathcal{L}_{\varepsilon}^{\gamma}(L,L+h))\leqslant C( \varepsilon)\cdot\varepsilon\cdot L\)_, and_
3. \(C(\varepsilon)^{-1}\leqslant\frac{1}{\varepsilon\cdot L}\operatorname{vol}(\mathcal{L}_{\varepsilon}^{\gamma}(L,L+h))\leqslant C(\varepsilon)\) _if_ \(\gamma\) _is primitive and_ \(\ell(\gamma)\in[L,L+h-\varepsilon]\)_._
Proof.: As in the proof of Proposition 4.2 we exploit the cover \(\Pi:\mathcal{L}\to\Sigma\) to compute the volume of \(\mathcal{L}_{\varepsilon}(L,L+h)\):
\[\operatorname{vol}(\mathcal{L}_{\varepsilon}(L,L+h))=\int_{\Sigma}|\Pi^{-1}(x )\cap\mathcal{L}_{\varepsilon}(L,L+h)|dx\]
Now, \(|\Pi^{-1}(x)\cap\mathcal{L}_{\varepsilon}(L,L+h)|\) is nothing other than the number of geodesic arcs going from \(x\) to \(x\) with length within \([L,L+h]\) and such that the initial and terminal velocities make at most angle \(\varepsilon\). Once we fix \(x\), the set of admissible pairs of initial and terminal velocities is a subset of \(T_{x}^{1}\Sigma\times T_{x}^{1}\Sigma\) with volume \(4\pi\varepsilon\). We thus get from Theorem 3.2 that
\[|\Pi^{-1}(x)\cap\mathcal{L}_{\varepsilon}(L,L+h)|\sim\varepsilon\cdot\frac{e^{ L+h}-e^{L}}{\operatorname{vol}(\Sigma)}\]
Since we are assuming that \(\Sigma\) is closed, this is uniform in \(x\), meaning that we get
\[\operatorname{vol}(\mathcal{L}_{\varepsilon}(L,L+h))\sim\int_{\Sigma}\varepsilon\cdot\frac{e^{L+h}-e^{L}}{\operatorname{vol}(\Sigma)}\,dx=\varepsilon\cdot(e^{L+h}-e^{L})\]
We have proved the first claim.
**Figure 6. The cylinder \(\langle\gamma\rangle\backslash\mathbb{H}^{2}\). The marked angles each measure one half of the angle defect of \(\gamma_{x}\).**
To prove the rest, let us give a concrete description of \(\mathcal{L}^{\gamma}\). Well, writing \(\gamma=\eta^{k}\) for some \(\eta\) primitive and \(k\geqslant 1\) consider the cover
\[\pi_{\eta}:\langle\eta\rangle\backslash\mathbb{H}^{2}\to\Sigma\]
of \(\Sigma\) corresponding to \(\eta\). Now, for each \(x\in\langle\eta\rangle\backslash\mathbb{H}^{2}\) there is a unique hyperbolic loop \(\gamma_{x}\) based at \(x\) and freely homotopic to running \(k\) times over \(\eta\). The basic observation is that the map
\[\langle\eta\rangle\backslash\mathbb{H}^{2}\to\mathcal{L}^{\gamma},\ x\mapsto\pi_ {\eta}\circ\gamma_{x}\]
is an isometry between \(\langle\eta\rangle\backslash\mathbb{H}^{2}\) and the connected component \(\mathcal{L}^{\gamma}\).
Now, if one denotes by \(d\) the distance in \(\langle\eta\rangle\backslash\mathbb{H}^{2}\) between \(x\) and the central geodesic (see Figure 6), one gets from formula 2.3.1(vi) on the final page of [3] that
\[\frac{1}{2}\cdot\text{(angle defect of $\gamma_{x}$)} \sim\tan\left(\frac{1}{2}\cdot\text{(angle defect of $\gamma_{x}$)}\right)\] \[=\sinh(d)\cdot\tanh\left(\frac{\ell(\gamma)}{2}\right)\sim d\]
where the asymptotics hold when \(\ell(\gamma)\) is large and the angle defect is small. Now, claims (1), (2) and (3) follow from elementary considerations.
Armed with these two lemmas we are ready to conclude the proof of the geometric prime number theorem. Note that it suffices to prove that for \(h\) positive and fixed we have
(A.2) \[|\mathbf{C}(L,h)|\sim\frac{e^{L+h}-e^{L}}{L}\]
as \(L\to\infty\). Here \(\mathbf{C}(L,h)\) is as in Lemma A.1.
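Let us also record why (A.2) suffices; the following is only a heuristic sketch of a standard reduction, assuming for simplicity that \(L\) is a multiple of \(h\). Split the geodesics of length at most \(L\) into the length windows \([kh,(k+1)h]\). The finitely many short windows contribute a bounded amount, and since the window counts grow essentially geometrically in \(k\), the sum is dominated by the windows with \(kh\) close to \(L\), where \(\frac{1}{kh}\approx\frac{1}{L}\). Hence

\[|\mathbf{C}(L)|=\sum_{k=0}^{L/h-1}|\mathbf{C}(kh,h)|\approx\sum_{k=0}^{L/h-1}\frac{e^{(k+1)h}-e^{kh}}{L}=\frac{e^{L}-1}{L}\sim\frac{e^{L}}{L}.\]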
Anyways, with notation as in both lemmas we have for \(\varepsilon\) positive and small and for \(L\) large that
\[|\mathbf{P}(L,h)| \overset{\text{A.2}(3)}{\preceq}\frac{C(\varepsilon)}{\varepsilon \cdot L}\operatorname{vol}\left(\cup_{\gamma\in\mathbf{P}(L,h)}\mathcal{L}^{ \gamma}_{\varepsilon}(L,L+h+\varepsilon)\right)\] \[\leqslant\frac{C(\varepsilon)}{\varepsilon\cdot L}\operatorname{ vol}(\mathcal{L}_{\varepsilon}(L,L+h+\varepsilon))\] \[\overset{\text{A.2}}{\sim}C(\varepsilon)\cdot\frac{e^{L+h+ \varepsilon}-e^{L}}{L}\]
On the other hand we also have that
\[|\mathbf{C}(L,h)| \overset{\text{A.2}(2)}{\geqslant}\frac{1}{C(\varepsilon)\cdot \varepsilon\cdot L}\operatorname{vol}\left(\cup_{\gamma\in\mathbf{C}(L,h)} \mathcal{L}^{\gamma}_{\varepsilon}(L,L+h)\right)\] \[=\frac{1}{C(\varepsilon)\cdot\varepsilon\cdot L}\operatorname{ vol}(\mathcal{L}_{\varepsilon}(L,L+h))\] \[\overset{\text{A.2}}{\sim}\frac{1}{C(\varepsilon)}\frac{e^{L+h}- e^{L}}{L}\]
Since this holds for all \(\varepsilon\), and since \(C(\varepsilon)\) tends to \(1\) when \(\varepsilon\to 0\) we have that
\[|\mathbf{P}(L,h)|\preceq\frac{e^{L+h}-e^{L}}{L}\preceq|\mathbf{C}(L,h)|\]
Now, Lemma A.1 implies (A.2), and Bob's your uncle.
|
2305.07421 | Selective imitation on the basis of reward function similarity | Imitation is a key component of human social behavior, and is widely used by
both children and adults as a way to navigate uncertain or unfamiliar
situations. But in an environment populated by multiple heterogeneous agents
pursuing different goals or objectives, indiscriminate imitation is unlikely to
be an effective strategy -- the imitator must instead determine who is most
useful to copy. There are likely many factors that play into these judgements,
depending on context and availability of information. Here we investigate the
hypothesis that these decisions involve inferences about other agents' reward
functions. We suggest that people preferentially imitate the behavior of others
they deem to have similar reward functions to their own. We further argue that
these inferences can be made on the basis of very sparse or indirect data, by
leveraging an inductive bias toward positing the existence of different
\textit{groups} or \textit{types} of people with similar reward functions,
allowing learners to select imitation targets without direct evidence of
alignment. | Max Taylor-Davies, Stephanie Droop, Christopher G. Lucas | 2023-05-12T12:40:08Z | http://arxiv.org/abs/2305.07421v1 | # Selective imitation on the basis of reward function similarity
###### Abstract
Imitation is a key component of human social behavior, and is widely used by both children and adults as a way to navigate uncertain or unfamiliar situations. But in an environment populated by multiple heterogeneous agents pursuing different goals or objectives, indiscriminate imitation is unlikely to be an effective strategy--the imitator must instead determine who is most useful to copy. There are likely many factors that play into these judgements, depending on context and availability of information. Here we investigate the hypothesis that these decisions involve inferences about other agents' reward functions. We suggest that people preferentially imitate the behavior of others they deem to have similar reward functions to their own. We further argue that these inferences can be made on the basis of very sparse or indirect data, by leveraging an inductive bias toward positing the existence of different _groups_ or _types_ of people with similar reward functions, allowing learners to select imitation targets without direct evidence of alignment.
**Keywords:** imitation, social cognition, goal inference, theory of mind
## Introduction
The complexity and variety of the real world is such that we often find ourselves required to act in new environments or unfamiliar scenarios. When determining what actions to take in such situations, we could follow an exploratory approach, trying lots of different behaviors until we achieve our desired result. But this will likely prove inefficient, and in some circumstances may even be dangerous or intractable. Imitation provides an attractive alternative--if we can observe other agents in our environment, perhaps we can learn to follow their example. Children are quick to learn by imitating others, sometimes faithfully and sometimes selectively Howard et al. (2015), although their strategies are still not completely understood Over & Carpenter (2013). Selectivity makes sense in the context of bounded resources: in an environment populated by diverse agents, not all will be equally useful to imitate. Indeed, copying an unknown agent may actually be counterproductive: the stranger may be eating something intolerably spicy, or be able to navigate terrain, like deep water, that we cannot. Therefore, any would-be imitator must engage in some form of selection or filtering, identifying which agent(s) to imitate and which to ignore (or in some cases electing to avoid imitation altogether). This process likely relies on many different factors, and is heavily context-dependent. We will first give a brief overview of existing research that attempts to map out these factors, before considering one in particular that we believe to be underexplored.
### Social learning strategies
A substantial body of empirical and theoretical research into the field of _social learning strategies_, primarily from an evolutionary perspective, has identified a range of rules or heuristics used by both humans and other animals to guide their selection of imitation targets Rendell et al. (2011). One strategy, observed in stickleback fish Kendal et al. (2009) and humans alike Zmyj et al. (2010), is to copy the agents that are observed receiving the highest payoff within the domain of interest (such as locating sources of food). Other strategies rely on less task-specific characteristics of potential models. For instance, stickleback fish also prefer to copy the actions of larger demonstrators Duffy et al. (2009), and chimpanzees prefer to copy older individuals and those occupying higher social rank Horner et al. (2010). Human children also imitate based on age and social status, preferring to copy adults rather than their same-age peers, even when those peers possess better domain knowledge Wood et al. (2012). Children additionally take into account the familiarity of potential model agents, placing higher trust in information provided by a more familiar teacher Corriveau & Harris (2009).
### Inferring others' reward functions
Heuristics such as "pick the older/more successful/more familiar person" likely do not capture the full picture of how and when people choose who to imitate. For instance, they do not entail any direct reference to the cognitive or mental states of either imitator or model. It is well established that people develop, from a young age, the ability to reason about the hidden mental states of others from observations of their behavior--typically referred to as Theory of Mind (ToM) Wellman (1990); Fodor (1992); Repacholi & Gopnik (1997); Onishi & Baillargeon (2005). It is plausible that reasoning about the cognition of other agents could be useful in determining their suitability as potential imitation targets, and Heyes (2016) has argued for the existence of a subset of social learning strategies (SLSs) that are explicitly metacognitive. An example of how an SLS may rely on ToM is given by Diaconescu et al. (2014). When faced with a repeated binary-choice task, and provided with advice from an advisor motivated either to help or mislead them, participants followed the advice to the extent that they believed their advisor wanted to be helpful.
One key component of ToM concerns the determination of other agents' goals or reward functions. This inference process is usually invoked in the context of predicting the behavior of another agent in order to manage some sort of interaction with them, but it may also play a role in selecting which agent(s) to imitate. In recent years, efforts have been made to capture this process using computational models Lucas et al. (2014); Jara-Ettinger et al. (2015). One promising approach has been to develop models based on the idea of _inverse planning_ Baker et al. (2009); Pantelis et al. (2014); Baker et al. (2017); Zhi-Xuan et al. (2020) or _inverse reinforcement learning_ Ramachandran & Amir (2007); Ziebart et al. (2008), where generative models of (approximately) rational behavior are inverted to produce inferences of mental states or reward functions from observed actions. These models are often evaluated using simple 2D environments, referred to as "gridworlds". A gridworld usually contains a small fixed set of possible goal items or states, and agents within the gridworld are limited to an action space that consists only of movement along the four cardinal directions. While representing a significant simplification relative to naturalistic settings, their bare-bones structure allows for clear and unambiguous presentation of goal-directed agent behavior, as well as enabling tractable modelling. Although our aim in this paper is not to explicitly evaluate or model people's ability to infer reward functions _per se_, this line of research inspires both our choice of experimental paradigm and the predictions we make in the following section. We believe that simple gridworld environments are well-suited as a setting for exploring how people _use_ the inferences they make about other agents' reward functions to guide their own behavior--something which, to our knowledge, they have not previously been used to investigate.
### Imitation on the basis of reward function inference
Knowing that people engage both in selective imitation and in inferring the reward functions of others in their environment, we can pose a simple question: do these inferences inform the selection of imitation targets? More specifically, do people selectively imitate those in their environment they believe have reward functions that are similar in some sense to their own (**Question 1**)? In the following sections of this paper, we attempt to test this proposal. Extending this simple question, we also suggest and investigate two possible ways in which people might _generalize_ their decisions about who to imitate: _goal generalization_ and _agent generalization_. First, when placed in a new context where they have limited or no information about _their own reward function_, do people continue to imitate those who in a previous context were judged as having a reward function similar to their own (**Question 2**)? Second, can people take advantage of correlations between the reward functions of different agents to select imitation targets without comparing their own reward function directly (**Question 3**)? To investigate these questions, we conduct an experiment using navigation-like tasks in an online gridworld environment. In the following section we describe in more detail the structure of the experiment, and outline the predictions made.
## Experiment
To test the three questions posed in the previous section, we conduct an online experiment, in which participants navigate a series of 2D gridworld environments ("levels") and score points by collecting colored "gems". Each level is populated by a set of simulated agents, visible to participants, that collect gems to maximize their own fixed reward functions. By applying restrictions to the availability of information in certain levels, we incentivize participants to imitate agents' behavior, with their choice of imitation target reflected in the path they choose to take through the level. We manipulate two factors between participants. Participants are assigned to either the _path+goal_ condition (addressing **Questions 1-2**) or the _agent_ condition (addressing **Question 3**); in addition, they are randomly assigned one of two fixed reward functions (\(\mathbf{r}_{1}\) or \(\mathbf{r}_{2}\)), which determine the mapping from gem colors to points.
Following an unscored practice level, the experiment is divided into distinct phases that each consist of two levels (see Fig. 1 for a visual schematic). The phases completed by each participant depend on which condition they are assigned. To begin with, all participants complete the **learning phase**, which provides evidence of the reward functions of Agents 1 and 2 with respect to gems A,B,C. In the _path+goal_ condition, the **learning phase** is followed by the **path uncertainty phase**. In this phase, the environment becomes partially observable (see Fig. 2), such that the participant knows the value of all gems available but not where each is located, making imitation the optimal strategy. This phase is followed finally by the **goal generalization phase** - while the environment in this phase is fully observable, participants still face uncertainty since all the gems present are new and have unknown value.
In the _agent_ condition, participants proceed from the first **learning phase** to the **additional learning phase**. In this phase, two new agents (Agents 3 and 4) are introduced, and the participant receives evidence of how the new agents relate to the original agents (Agents 1 and 2) in terms of reward function. The gems in this phase are also new; furthermore, the participant does not collect any gems themselves (remaining a passive observer). This means that they receive no _direct_ evidence about how the new agents' reward functions relate to their own. Finally, this phase is followed by the **agent generalization phase**, in which the original agents are removed, and the participant imitates one of the new agents in a choice
between gems of unknown value.
### Predictions
The predictions made for the different phases were as follows:
1. (_Path uncertainty phase_) when people cannot see the location of the gems, nor which gems the agents collected, but can only see which direction each agent travelled in, they should choose to follow the direction of the agent which previously collected the gems that maximize their reward function.
2. (_Goal generalization phase_) when people do not know the values of the gems available, they should generalize their previous choice of imitation target to the new set of options (imitating the same agent as before).
3. (_Agent generalization phase_) when faced with a choice of unfamiliar agents to imitate, people should use evidence of correlations between agents' reward functions to identify and imitate the agent most likely to be aligned with their preferences. More specifically, suppose that they judge that Agent 1 has a similar reward function to them over one set of gems, and then see that Agent 3 makes the same choices as Agent 1 over a second set of gems. If a participant believes other agents' preferences to be correlated (e.g. because agents belong to some set of latent groups or types) then when faced with a third set of gems they should imitate Agent 3.
## Methods
### Participants
We recruited 150 UK-based adults through the online platform Prolific. Participants were paid £1.05 for taking part, plus a bonus of £0.01 for every 5 points they scored (mean £1.34, min £1.15, max £1.43). The experiment lasted 7m 55s \(\pm\) 4m 27s. Each participant was randomly assigned with uniform probability to either the _path+goal_ (\(N=72\)) condition or the _agent_ condition (\(N=78\)).
### Stimuli
The experiment took place within an online 2D gridworld environment created using the GriddlyJS framework Bamford et al. (2022) and hosted on a custom web platform. Participants completed a number of levels within this environment, each containing a number of colored gems out of a total set of seven gems \(\{A,B,C,D,E,F,G\}\). Participants completed a level by collecting any gem, but the number of points obtained varied for each gem depending on their assigned reward function. Participants were informed at the beginning of the experiment of the values of only the first three gems. Each level was either fully observable or partially observable; within a partially observable level, only gridworld tiles within a certain radius of the player avatar's current position are visible (Figure 2 shows examples of both level types).
### Reward functions
Across both the _path+goal_ condition and _agent_ condition, each participant was assigned one of two different reward functions (\(\mathbf{r}_{1}\) or \(\mathbf{r}_{2}\)), which remained fixed throughout the experiment and controlled how many points were obtained for each color of gem. To encourage efficient trajectories, both reward functions also imposed a fixed cost of 1 point for every step taken in the environment. A participant's reward function thus completely determined the optimal trajectory for any given level. Each agent observed within the environment also had one of these two reward functions (\(\mathbf{r}_{1}\) for Agents 1 and 3, \(\mathbf{r}_{2}\) for Agents 2 and 4), and always executed the corresponding optimal trajectory. For the sake of brevity, we will use the terms _aligned_ and _misaligned_ to refer to agents with the same or different reward functions, respectively. For example, for a participant with reward function \(\mathbf{r}_{1}\), Agents 1 and 3 were aligned, while Agents 2 and 4 were misaligned.
Figure 1: A diagram of the experiment structure, showing the progression of levels and indicating which agents and gems were present in each. Green labels indicate levels included in the analysis; grey labels indicate learning/training levels. Gems whose value were unknown to the participant are highlighted with a question mark.
Figure 2: Examples of fully observable (left) and partially observable (right) gridworld levels.
### Procedure
Following an unscored practice level, participants completed a sequence of six levels in the gridworld environment, as described in Experiment. The levels presented to each participant depended on whether they were assigned to the _path+goal_ condition or the _agent_ condition (see Fig. 1). At the beginning of a level, the participant watched a sequence of prerecorded trajectories, each showing a different agent completing the level by navigating to one of the available gems (determined by their reward function). Agents were represented using stylised geometric avatars (see Fig. 1). After watching these demonstrations, the participant completed the level themselves by using the arrow keys on their keyboard to move their own avatar around the gridworld to a gem. Their trajectory for each level was recorded as a sequence of 2D coordinates. At the end of the experiment, after completing all assigned levels, participants were asked to supply a short explanation (minimum 100 characters) for the choices they made.
### Analysis
The trajectory of each participant was recorded for each level as a sequence of coordinates. For each of the levels included in the analysis there were two demonstrator agents shown; the trajectories of these agents were compared to the trajectory recorded for the participant. Letting \(\tau_{p}\) and \(\tau_{j}\) represent the trajectories of the participant and demonstrator agent \(j\), respectively, we compute the **similarity** as
\[s(\tau_{p},\tau_{j})=e^{-\frac{1}{T}\sum_{t=1}^{T}d(\tau_{p}^{(t)},\tau_{j}^{( t)})} \tag{1}\]
where \(d(\tau_{p}^{(t)},\tau_{j}^{(t)})\) gives the Euclidean distance between the two trajectories at timestep \(t\). The function \(s\) produces values that lie in the range \((0,1]\), with a value of 1 indicating that the two trajectories are exactly identical.
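As a minimal sketch (not the authors' code), the similarity of Equation 1 could be computed as follows, assuming each trajectory is stored as a NumPy array of shape \((T,2)\) and both trajectories have already been brought to a common length \(T\):

```python
import numpy as np

def trajectory_similarity(traj_p, traj_j):
    """Similarity of Eq. (1): exponential of minus the mean Euclidean
    distance between two trajectories sampled at the same T timesteps."""
    # Per-timestep Euclidean distances d(tau_p^(t), tau_j^(t))
    dists = np.linalg.norm(traj_p - traj_j, axis=1)
    return np.exp(-dists.mean())
```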
For each participant and level, we used the two similarity values (corresponding to the two agents) to compute a binary variable
\[\alpha_{p}=\arg\max_{j\in\{1,2\}}\{s(\tau_{p},\tau_{j})\} \tag{2}\]
indicating which agent participant \(p\)'s behavior was most similar to. This was used to perform (for each level) a single-tailed binomial test, with \(n\) as the number of participants who completed the level, \(k\) as the number who were more similar to the aligned agent, and a null hypothesis of \(k/n=0.5\). In addition, we performed a logistic regression to predict \(\alpha_{p}\) from the participants' assigned reward function (\(\mathbf{r}_{1}\) or \(\mathbf{r}_{2}\)).
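The per-level selection variable and binomial test could then be computed along the following lines; this is a sketch only, with `scipy.stats.binomtest` standing in for whatever statistical routine was actually used, and the aligned/misaligned labelling supplied externally:

```python
from scipy.stats import binomtest

def imitation_choice(sim_aligned, sim_misaligned):
    """alpha_p of Eq. (2): 1 if the participant's trajectory is closer
    to the aligned agent, 0 otherwise."""
    return int(sim_aligned > sim_misaligned)

def level_binomial_test(choices):
    """One-tailed binomial test against the null k/n = 0.5, where k counts
    the participants whose trajectory was closer to the aligned agent."""
    k, n = sum(choices), len(choices)
    return binomtest(k, n, p=0.5, alternative="greater").pvalue
```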
## Results
### Exploratory analysis
We performed exploratory analysis on participants' free text responses explaining their answers. For this, text responses were stripped of respondent or condition tags and manually coded for mention of various strategies. We observed that participants often mentioned following the agent with the preference for high scores, and only rarely explicitly mentioned transferring their allegiance to the agent who was like the first agent, which was the key experimental condition for Experiment 2, mentioned by 16 respondents out of 78. Those participants who did mention this strategy performed better at the task (average score 155) compared to those who did not pick up on this strategy (average score 135).
Unfortunately, despite the instructions and training phase, many participants did not understand the task or were not motivated to complete it successfully, as evidenced by mentioning they followed a random strategy (53 mentioned random). The participants who mentioned this performed worse at the task (average score 131) compared to those who did not (\(N=98\), average score 156).
### Trajectory similarity
For each level included in the analysis, we used Equation 2 as the basis of both a single-tailed binomial test and a logistic regression. Table 1 gives the results from both. During the _path uncertainty phase_, participants were able to infer that when gems' locations were hidden, they could achieve the best outcome by following the agent which had previously been seen to favour their reward-maximising gem (\(p<.001\), accuracy \(=87.5\%\)). Furthermore, in the _goal generalization phase_, participants were able to generalize this inference to novel environments containing only gems of unknown value (\(p<.001\), accuracy \(=69.4\%\) in level 5; \(p<.001\), accuracy \(=76.4\%\) in level 6). Finally, when presented with unfamiliar agents, and given evidence _only_ of the relation between new and old agents (and not directly between the new agents and themselves), participants in the _agent generalization phase_ were able to identify which of the new agents was more likely to be aligned with their own preferences and imitate them. This was seen in level 9 (\(p=.00328\), accuracy \(=65.8\%\))--but in level 10 (the final level) participants were much more prone to explore, with imitation choices not significantly different from chance (\(p=.411\), accuracy \(=53.2\%\)).
Figure 3 (top) shows, for each level across the three test phases, the mean trajectory similarity of participants to both aligned and misaligned agents. In the _path uncertainty phase_ (levels 3-4), the participants' behavior showed overwhelming similarity to that of the aligned agent. In the _goal generalization phase_ (levels 5-6), the trend is slightly weaker, but still we see significantly more similarity to the aligned agent. Finally, considering the _agent generalization phase_, while for level 9 we see significant favouring of the aligned agent, for level 10 the difference is within standard error, and so is not significant. This can likely be explained by participants having additional motivations not captured by their assigned reward functions, such as curiosity. Since the values of the gems in this phase were unknown, some participants in level 10 (the final level) will have explored in order to learn the value of whichever gem they didn't select in level 9. The visualisations of actual participant and agent trajectories in Figure 3 (bottom) provide an additional view into the same results.
## Discussion
In this paper, we have explored the question of whether people's selection of imitation targets depends on inferences about their reward functions, and to what extent these selections are generalized. By conducting a behavioral experiment within a virtual gridworld environment, we found that when faced with a choice of imitation targets, people preferentially copy the behavior of an agent they judge to have a similar reward function to their own. Furthermore, we demonstrated evidence that people generalize these selections beyond the original context in which the inference was made, continuing to imitate the same agent's choices over a new set of options with unknown value. More interestingly, our results support the idea that people can extend their inferences not only to unknown items or goals, but also to _unfamiliar demonstrators_. By using observed correlations between the reward functions of different agents, people are able to select appropriate imitation targets even without direct evidence of similarity to themselves.
### Imitating agents who share your reward function
In any environment populated by heterogeneous agents pursuing different tasks or goals, imitating at random is unlikely to produce favourable results. In some cases, simple SLSs based on superficial factors like age or appearance will be sufficient to discriminate 'good' targets from bad. But in other cases, these approaches will fall short. For example, we might have a setting where agents appear similar or identical in terms of explicit observable characteristics, but still vary significantly along dimensions that are important to determining their behavior. Alternatively, a highly dynamic and fast-changing environment could render more stable agent characteristics increasingly less informative. We argue that under conditions such as these, a metacognitive SLS based on reward function inference can provide a valuable alternative. By identifying and imitating agents whose behavior is directed towards the same task or goal that they are trying to accomplish, a learner can acquire behavior that is more likely to lead to outcomes satisfying their own reward function. Indeed, our results offer evidence for the existence of such a strategy: in agreement with Prediction 1 (see Experiment), participants in the _path uncertainty phase_ showed an overwhelming preference for imitating the agent that they had previously been able to infer (during the _learning phase_) shared their reward function over the available gems. However, it is also important to highlight
| level | 3 | 4 | 5 | 6 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- |
| \(k/n\) | 63 / 72 | 63 / 72 | 50 / 72 | 55 / 72 | 52 / 78 | 41 / 78 |
| \(p\) | \(2.09\times 10^{-11}\) | \(2.09\times 10^{-11}\) | \(6.47\times 10^{-4}\) | \(4.07\times 10^{-6}\) | .00328 | .411 |
| accuracy (%) | 87.5 | 87.5 | 69.4 | 76.4 | 65.8 | 53.2 |

Table 1: The results of a single-tailed binomial test and a logistic regression. At each level, \(n\) gives the number of participants that completed the level, and \(k\) gives the number of participants that followed the aligned agent. \(p\)-values are computed based on the null hypothesis that people imitate the aligned and misaligned agents with equal probability (i.e. \(k/n=0.5\)). The bottom row reports the per-level accuracy of a logistic regression model predicting imitation decisions (which agent a participant's trajectory was most similar to) from assigned reward functions (\(\mathbf{r}_{1}\) or \(\mathbf{r}_{2}\)).
Figure 3: **Top**: mean similarity of participant trajectories to those of the aligned and misaligned agents for each level, computed following Equation 1. Error bars represent standard error. **Bottom**: visualisation of recorded trajectories from participants and agents with each reward function. Tiles with a lighter color were visited by a greater proportion of participants, and the starting location for each level is highlighted in green.
certain limitations of the current experiment. For instance, as outlined in the _Exploratory analysis_ section, the free text responses provided by a number of participants indicated that they failed to understand the task. This suggests that the way the task is presented to participants should be improved in any future versions. Furthermore, a possible alternative explanation for participants' choices is that they judged the agents as having the same reward function but varying in how competent they were at satisfying it. In an attempt to preclude this, participants were instructed directly that agents were equally capable but could vary in their preferences for gems. However, future versions of this experiment should be designed in such a way as to explicitly distinguish between these two explanations.
### Generalization to unfamiliar domains
In Prediction 2, we suggested that people can generalize these inferences to new contexts, where the relative value of all options is unknown. Our results from the _goal generalization phase_ support this prediction. Participants facing a novel choice between gems of unknown value were able to generalize their previous selection of imitation target. This has implications for how people handle situations involving uncertainty around _their own reward function_. As an example, imagine ordering a meal in a foreign country whose food is completely unfamiliar to you. You may have no idea which of the many dishes you would like best, but if your friend has some experience with the cuisine, and you know from prior experience that you and your friend often have similar tastes, then you might assume that this similarity transfers to the new context and copy your friend's order. By doing this, you can reduce the uncertainty in your own reward function through selective imitation, effectively learning something about yourself by copying the behavior of someone else. Of course, even people with highly similar preferences don't agree on _every_ conceivable choice, and so this type of generalization will not always make sense. An interesting direction for future work would be to investigate the factors that determine people's cross-domain generalization of reward-function-based imitation. For instance, we might expect that people are more willing to generalize when the two domains have more in common, or when they are more familiar with the agent they're imitating.
### Generalization to unfamiliar agents
While we have argued that inferring the reward functions of other agents in a heterogeneous environment can provide a useful basis for selecting imitation targets, there are reasons to believe that this picture is incomplete. Firstly, recovering a complete representation of an agent's reward function just from observations of their behavior is likely a sample-intensive process. In an environment populated by a large number of agents, it could therefore prove prohibitive to carry out this inference for every agent individually; especially when reward functions are high-dimensional. Furthermore, even in a complex environment that supports a large space of possible reward functions, the probability distribution over that space will likely be strongly peaked around a small number of points corresponding to common reward functions, with some variation. Encountering a new and unfamiliar agent, it is therefore likely that we can capture a reasonable approximation of their reward function just by assigning them to one of these points; in most cases this should be substantially more sample-efficient than trying to recover their reward function 'from scratch'. As a first step towards investigating this idea, our results from the _agent generalization phase_ show that people can in fact judge the suitability of an unknown demonstrator by using evidence of correlations between different agents' reward functions. We suggest that this is enabled by an inductive bias that pushes people towards modelling the existence of distinct agent _types_ or _groups_, which they leverage to achieve a more sample-efficient understanding of the agents in their environment. This idea is related to recent work in the area of _social structure learning_ concerned with computational accounts of how people learn latent groups from observations of individuals' choices Gershman et al. (2017); Gershman and Cikara (2020). Given an assumption that members of the same latent group or type share consistently similar reward functions, then the group membership of any particular agent can serve in essence as a compressed representation of their reward function; and thus of their suitability as an imitation target. Future work will explore these ideas in greater depth, considering more specifically the question of imitation on the basis of inferred social groups, through both further behavioral experiments and computational modelling.
### Selective imitation in machines
The results of the current work have implications not only for our understanding of human social learning strategies, but also for how we might design artificial agents that use selective imitation to learn more efficiently in unfamiliar environments. Imitation learning, which has long been a common paradigm for behavior learning in robotics Osa et al. (2018); Argall et al. (2009), typically makes the assumption that there is only ever a single possible demonstrator--an assumption that quickly breaks down outside of only the most controlled environments. Giving machines the ability to actively select suitable imitation targets within rich multi-agent environments could pave the way to robots that are able to better navigate the complexity and uncertainty of the real world. By pointing towards certain priors or inductive biases involved in how people generalize reward function inferences across domains and agents, our initial findings may also have value for the development of more humanlike inverse reinforcement learning algorithms.
## Conclusion
In sum, our results, while preliminary, represent an important step towards understanding how theory of mind abilities such as reward function inference can support sophisticated metacognitive social learning strategies that enable people to acquire adaptive behaviors under various sources of uncertainty. |
2306.11075 | Diffusion model based data generation for partial differential equations | In a preliminary attempt to address the problem of data scarcity in
physics-based machine learning, we introduce a novel methodology for data
generation in physics-based simulations. Our motivation is to overcome the
limitations posed by the limited availability of numerical data. To achieve
this, we leverage a diffusion model that allows us to generate synthetic data
samples and test them for two canonical cases: (a) the steady 2-D Poisson
equation, and (b) the forced unsteady 2-D Navier-Stokes (NS)
vorticity-transport equation in a confined box. By comparing the generated
data samples against outputs from classical solvers, we assess their accuracy
and examine their adherence to the underlying physics laws. In this way, we
emphasize the importance of not only satisfying visual and statistical
comparisons with solver data but also ensuring the generated data's conformity
to physics laws, thus enabling their effective utilization in downstream tasks. | Rucha Apte, Sheel Nidhan, Rishikesh Ranade, Jay Pathak | 2023-06-19T17:19:47Z | http://arxiv.org/abs/2306.11075v1 | # Diffusion model based data generation for partial differential equations
###### Abstract
In a preliminary attempt to address the problem of data scarcity in physics-based machine learning, we introduce a novel methodology for data generation in physics-based simulations. Our motivation is to overcome the limitations posed by the limited availability of numerical data. To achieve this, we leverage a diffusion model that allows us to generate synthetic data samples and test them for two canonical cases: (a) the steady 2-D Poisson equation, and (b) the forced unsteady 2-D Navier-Stokes (NS) vorticity-transport equation in a confined box. By comparing the generated data samples against outputs from classical solvers, we assess their accuracy and examine their adherence to the underlying physics laws. In this way, we emphasize the importance of not only satisfying visual and statistical comparisons with solver data but also ensuring the generated data's conformity to physics laws, thus enabling their effective utilization in downstream tasks.
Machine Learning, Diffusion model
## 1 Introduction
The application of machine learning (ML) and deep learning (DL) techniques in modeling partial differential equations (PDEs) has gained significant momentum over the past decade. These techniques have been employed to address various challenges in physics-based modeling, such as developing closure terms for large-eddy simulations and Reynolds-averaged Navier-Stokes equations in computational fluid dynamics (CFD) (Duraisamy, 2021; Maulik et al., 2019; Ling et al., 2016), enhancing computational efficiency for classical solvers (Bar-Sinai et al., 2019; Weymouth, 2022), and facilitating reduced-order modeling (Murata et al., 2020; Eivazi et al., 2022), among others. Additionally, the field of physics-based DL research has witnessed the emergence of numerous frameworks tailored for fast inference and generalizability across different classes of PDEs. Examples include physics-informed neural networks (PINNs) (Raissi et al., 2019), Fourier neural operators (FNOs) (Li et al., 2020), DeepONet (Lu et al., 2019), latent-space based local learning (Ranade et al., 2020; Ranade et al., 2022) and more. Although these methods have demonstrated remarkable success in solving PDEs for a variety of applications, one of the main factors impacting their accuracy and generalizability is the scarcity of high-quality training data (Sun et al., 2017).
In recent years, denoising diffusion models have emerged as the leading technique for generative modeling (Sohl-Dickstein et al., 2015; Song and Ermon, 2020; Ho et al., 2020). These models follow a two-step process: A forward step, where noise is added in a Markovian manner, followed by a reverse denoising step learnt using a deep neural network. Once trained, the model can generate new samples by starting from various realizations drawn from a Gaussian noise distribution. Diffusion models have demonstrated tremendous success in various domains, including conditional and unconditional image generation (Dhariwal and Nichol, 2021; Rombach et al., 2022), speech generation (Chen et al., 2020), image super-resolution (Saharia et al., 2022), video generation (Ho et al., 2022), to name a few. However, in the physics domain, the utilization of diffusion models has been relatively limited. Shu et al. (2023) proposed a physics-inspired diffusion model for generating high-fidelity CFD data from low-fidelity/undersampled snapshots. Vlassis and Sun (2023) used denoising diffusion models for conditional generation of microstructures, testing it on the mechanical MNIST dataset. Yang and Sommer (2023) used a diffusion-based model for temporal prediction of a chaotic fluid flow. Without encoding prior physics constraint, they found that the diffusion-based network had comparable performance to that of existing models. Lim et al. (2023) extended the use of diffusion models to map functional spaces. Trained on a single resolution, the authors demonstrated the generation of PDE solutions for a variety of resolutions across different use cases.
In this study, we use denoising diffusion implicit models (DDIMs) for the unconditional generation of data for two distinct physical systems: 2-D Poisson and 2-D Navier-Stokes flow. While our trained model lacks any prior physics encoding, we utilize physics-based constraints to select
snapshots that adhere to fundamental laws. We propose two approaches to apply the physics-based constraint: PDE residual calculation (for the 2-D Poisson equation) and comparison with solver output (for the 2-D NS equation). Our aim is that, in the future, the data generation paradigm based on diffusion models can partially alleviate the challenge of data scarcity for physics-based machine learning models.
## 2 Methodology
### Diffusion Model and Architecture
We utilize a diffusion model with a cosine scheduler that progressively degrades the data over 1000 steps (Sehwag, 2022). The reverse diffusion process is parametrized by a deep neural network based on the widely used U-Net architecture in the diffusion literature. To facilitate efficient reverse sampling from Gaussian noise, we employ the DDIM strategy described in Song et al. (2020) and use 500 sampling steps (instead of 1000) to accelerate the sampling speed. The input configuration of the U-Net architecture depends on the specific data to be modeled. For example, in the case of the 2-D Poisson equation (\(\nabla^{2}u=f\)), both variables \(u\) and \(f\) are provided as input to the U-Net through two separate channels. For problems involving temporal variation, different timesteps can be provided as inputs to the U-Net architecture through separate channels. At the time of sampling, all the channels are initialized with Gaussian noise and DDIM is used to arrive at \(T=0\) from \(T=1000\), thus obtaining denoised channels.
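A generic sketch of the deterministic (\(\eta=0\)) DDIM update described above is given below; it is not the authors' implementation, and `model`, `alphas_cumprod`, and the channel layout are placeholders:

```python
import torch

@torch.no_grad()
def ddim_sample(model, alphas_cumprod, shape, n_steps=500, device="cpu"):
    """Deterministic DDIM sampling: start from Gaussian noise in every
    channel and step from t = T-1 down to t = 0 on a coarse time grid."""
    x = torch.randn(shape, device=device)             # e.g. (B, 2, 64, 64) for [u, f]
    T = len(alphas_cumprod)
    times = torch.linspace(T - 1, 0, n_steps).long()  # 500 of the 1000 training steps
    for t, t_prev in zip(times[:-1], times[1:]):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
        t_batch = torch.full((shape[0],), int(t), device=device, dtype=torch.long)
        eps = model(x, t_batch)                            # predicted noise
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()     # predicted clean sample
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps # move to the previous time
    return x
```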
### Physics-Based Constraints for Selection of Generated Data
For physics-based ML data, it is crucial to ensure that the generated data samples, whether obtained from a traditional solver or a machine learning approach, adhere to the underlying governing equations in addition to being visually and statistically accurate. To verify this, we propose two distinct approaches. In the first approach, we compute the MSE of the PDE residual over the grid, for example \(\mathrm{MSE}(|\nabla^{2}u-f|)\) in the case of the 2-D Poisson equation. We selectively retain only those samples where the \(\mathrm{MSE}\) is less than a particular threshold. This criterion ensures that the generated solutions satisfy the governing equation. In the second approach, one can use a traditional solver to verify the quality of the generated data. In the case of steady state PDEs, we compute the MSE between the generated solution (\(u\) in Poisson's equation) and the \(u\) evaluated from a classical solver for the same generated \(f\). Alternatively, for transient PDEs, the first channel of the generated data is provided as an input to a traditional solver and the \(\mathrm{MSE}\) (averaged across the entire grid and remaining channels) of the diffusion-generated data and solver-generated data is evaluated. We retain only those sets of generated snapshots that exhibit an MSE lower than a certain threshold. This approach guarantees that the selected samples align with the underlying physics.
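A sketch of the first, residual-based filter for the Poisson case might look as follows; the five-point Laplacian, the grid spacing `h`, and the threshold value (taken from the results reported later) are assumptions for illustration:

```python
import numpy as np

def poisson_residual_mse(u, f, h=1.0 / 63):
    """MSE of the discrete residual |lap(u) - f| on interior grid points,
    using a standard five-point finite-difference Laplacian."""
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / h**2
    return np.mean((lap - f[1:-1, 1:-1]) ** 2)

def keep_physical(pairs, threshold=6e-6):
    """Retain only generated (f, u) pairs whose residual MSE is below the threshold."""
    return [(f, u) for f, u in pairs if poisson_residual_mse(u, f) <= threshold]
```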
## 3 Experiments
We demonstrate our diffusion model based data generation technique for two distinct use cases outlined below.
### 2-D Poisson Equation
In the case of the 2-D Poisson equation, \(\nabla^{2}u=f\), both \(u\) and \(f\) are passed as two separate channels to the U-Net architecture. The weights of the U-Net are optimized based on the loss function \(L_{\mathrm{poisson}}=\|\epsilon_{u}-\epsilon_{u}^{\mathrm{pred}}\|^{2}+\lambda\|\epsilon_{f}-\epsilon_{f}^{\mathrm{pred}}\|^{2}\), where \(\epsilon\) corresponds to the noise added in the forward diffusion. We empirically found that \(\lambda=2\) worked the best for the 2-D Poisson equation. The network is trained on 10,000 pairs of \([f,u]\) generated on a \(64\times 64\) grid using a multigrid solver.
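A rough sketch of this training objective in PyTorch is shown below; `unet`, `alphas_cumprod`, and the channel ordering \([u,f]\) are placeholders rather than the authors' actual code:

```python
import torch
import torch.nn.functional as F

def poisson_training_loss(unet, x0, alphas_cumprod, lam=2.0):
    """One training step's loss for clean samples x0 with channels [u, f],
    shape (B, 2, 64, 64): noise both channels, predict the noise, and
    weight the f-channel term by lambda."""
    B = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x0.device)
    a = alphas_cumprod[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps        # forward (noising) process
    eps_pred = unet(x_t, t)
    loss_u = F.mse_loss(eps_pred[:, 0], eps[:, 0])    # u channel
    loss_f = F.mse_loss(eps_pred[:, 1], eps[:, 1])    # f channel
    return loss_u + lam * loss_f
```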
In this study, we aimed to address the challenge of generating \(f\) and \(u\) simultaneously using a diffusion model. Traditionally, data generation tasks often focus on generating one variable at a time, such as generating \(f\) or \(u\) independently. However, in our context, it was crucial to generate \(f\) and \(u\) together due to their inherent dependencies and interactions. This posed a more complex and challenging problem, as the diffusion model had to capture the joint distribution of \(f\) and \(u\) accurately.
### 2-D Forced Navier Stokes Vorticity-Transport Equation
For the forced unsteady 2-D Navier-Stokes (NS) equation, we train the diffusion model on blocks of five consecutive vorticity (\(\omega\)) fields, where the separation in time between two consecutive fields is \(\Delta t=1.6\,\mathrm{s}\). The block size (five in this case) is an arbitrary choice. These five consecutive vorticity fields are sent as a five-channel input to the U-Net network, and the loss in the noise prediction is calculated by summing over all five channels, \(L_{\mathrm{NS}}=\sum_{c=1}^{5}\|\epsilon_{c}-\epsilon_{c}^{\mathrm{pred}}\|^{2}\), where \(c\) is the channel index. The network for the 2-D NS equation is trained with \(700\) solutions, each starting with a different initial condition. The vorticity is evolved until \(t=320\,\mathrm{s}\) on a \(64\times 64\) grid using an NS solver (Li et al., 2020). The viscosity is set at \(\nu=10^{-4}\) and the forcing function takes the form \(f=0.1\,\mathrm{sin}(4\pi(x+y))+0.1\,\mathrm{cos}(4\pi(x+y))\). To increase the amount of data for training, a sliding window approach with a stride of three was used. Hence, there is an overlap between two consecutive blocks of five snapshots.
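The sliding-window construction of the five-channel training blocks could be implemented roughly as follows (a sketch; the array layout is an assumption):

```python
import numpy as np

def make_blocks(vorticity, block_len=5, stride=3):
    """Cut each simulation into overlapping blocks of consecutive vorticity
    fields; each block becomes one multi-channel training sample.
    `vorticity` is assumed to have shape (n_runs, n_timesteps, 64, 64)."""
    blocks = []
    for run in vorticity:                                # one initial condition per run
        for start in range(0, run.shape[0] - block_len + 1, stride):
            blocks.append(run[start:start + block_len])  # shape (5, 64, 64)
    return np.stack(blocks)
```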
The U-Net architecture used for 2-D Poisson and NS equation data consists of \(6M\) parameters with adaptive group normalization. The codebase for this work is built on (Sehwag, 2022). The U-Net network is conditioned on the diffusion time \(t\) through feature vector embedding. Both networks are trained for 250 epochs.
## 4 Results
### 2-D Poisson Equation
In the context of the 2-D Poisson equation, we conducted a comprehensive analysis of the generated \([f,u]\) pairs, focusing on their visual quality and adherence to the underlying governing equations. Figure 1 showcases four such pairs (left two columns). The contours of the synthetically generated \([f,u]\) exhibit smooth boundaries, and their values fall within the expected range. However, it is important to note that visual appearance alone does not guarantee adherence to the underlying physics equations.
To ensure the physical validity of the generated images, we examine the PDE residual, \(\nabla^{2}u_{\rm generated}-f_{\rm generated}\), associated with each generated pair (last column in Figure 1). All the samples in the figure correspond to \(\rm MSE\) of PDE residual \(<6\times 10^{-6}\). When comparing them to the solver-generated \(u_{\rm solver}\) (third column), obtained by passing \(f_{\rm generated}\) through a finite-difference solver, we observe that all the generated samples exhibit a very close visual resemblance between \(u_{\rm generated}\) and \(u_{\rm solver}\).
Table 1 shows the relative \(L_{2}\) error between the statistics of the synthetic and solver-generated \(f\) and \(u\). For both variables, the percentage error is less than 1%, indicating the efficacy of a diffusion-based model in recovering the statistics of the data distribution. For the threshold of \(\rm MSE\) of PDE residual \(<6\times 10^{-6}\), 99,962 pairs out of the 100,000 generated pairs were admissible and can be used for other downstream tasks.
Figure 1 and Table 1 clearly demonstrate that generated data with low PDE residuals show very close agreement with the underlying physics equations and exhibit a greater visual and statistical resemblance to solver solutions, making them more suitable for subsequent analysis and utilization. Conversely, generated data with high PDE residuals should be approached with caution, as they may deviate significantly from the desired physical behavior.
The solver-generated mean field (figure 3(b)) shows inclined patches of alternate-sign vorticity, reminiscent of the forcing function that is being applied. We find that the diffusion model is able to reproduce the inclined patches corresponding to the forcing pattern qualitatively (figure 3(a)). However, it is important to note that the distribution of positive vorticity patches appears to be more dominant in the generated data compared to the solver-generated distribution.
Finally, in addition to quantitative evaluation metrics, we conducted a visual inspection of the generated samples to assess their diversity for both datasets (attached in the appendix). It was evident that the generated data samples exhibited a wide range of variations and distinct features for both the 2-D Poisson and 2-D NS equations. The visually diverse nature of the generated samples indicates the effectiveness of the diffusion model in producing novel and unique outputs.
## 5 Conclusions
In this work, we introduced a data generation methodology based on diffusion models and validated it for two canonical physical systems: 2-D Poisson and 2-D forced Navier-Stokes vorticity-transport equation. Our findings demonstrate that the diffusion model can effectively generate visually and statistically consistent samples. To leverage these samples for subsequent downstream tasks such as training physics-based machine learning algorithms, one can employ PDE-based residuals or solver-based filtering methods to select physically consistent samples. This approach ensures that the generated data adheres to the underlying physics and can be reliably used in further analyses.
It is important to note that this work is ongoing, and there are several future directions to explore. One future direction involves incorporating physics-based losses in the training and sampling algorithm of the diffusion model itself. In our current model, data generation is limited to a specific resolution. However, we are concurrently working on implementing super-resolution techniques for diffusion models. This will enable interpolation between resolutions and the generation of high-fidelity images. We are also exploring the extension of this method to solutions represented on unstructured meshes. Additionally, we plan to utilize the governing parameters of the data to condition the model in the future, enhancing its general-purpose capabilities.
## Broader Impact
This research direction can have a significant impact on various fields important to society that also face the issue of data scarcity, e.g., climate science, material science, etc. Diffusion models can contribute to the advancement of these fields by providing a means to generate realistic and physically accurate data for training and validating machine learning algorithms. However, the reliance on machine learning techniques for physics-based simulations may raise concerns about the interpretability and explainability of the models, given that physics-based simulations are crucial to ensure safety in various domains, e.g., aeronautics, nuclear power, electronic appliances, etc.
Figure 3: Two-dimensional contours of the mean vorticity field from the (a) diffusion-generated data distribution and (b) solver-generated distribution. For statistics on the diffusion-generated vorticity field, we condition on \(\mathrm{MSE}\) between diffusion-generated and solver-generated snapshots \(<2\times 10^{-2}\).
Figure 2: Diffusion- and solver-generated vorticity snapshots for the 2-D forced NS system. Top two rows correspond to \(\mathrm{MSE}\approx 8\times 10^{-3}\) between solver-generated and diffusion-generated snapshots. Bottom two rows correspond to \(\mathrm{MSE}\approx 0.045\).
|
2303.05806 | Toward an unbiased flow measurements in LHC $pp$ collisions | Long-range correlations for pairs of charged particles with two-particle
angular correlations are studied in $pp$ at ${\sqrt{{\textit s}}}=13$~TeV with
various Monte Carlo generators. The correlation functions are constructed as
functions of relative azimuthal angle $\Delta\varphi$ and pseudorapidity
separation $\Delta\eta$ for pairs of different particle species with the
identified hadrons such as $\pi$, $K$, $p$, and $\Lambda$ in wide $\Delta\eta$
ranges. Fourier coefficients are extracted for the long-range correlations in
several -multiplicity classes using a low-multiplicity template fit method. The
method allows to subtract the enhanced away-side jet fragments in
high-multiplicity with respect to low-multiplicity events. However, we found
that due to a kinematic bias on jets and differing model implementation of flow
and jet components, subtracting the non-flow contamination in small systems can
bias the results. It is found that PYTHIA8 Default model where the presence of
the collective flow is not expected but the bias results in very large flow.
Also extracting flow signal from the EPOS4 and PYTHIA8 String Shoving models is
not possible because of flow signal introduced in the low-multiplicity events.
Only AMPT String Melting model among studied model calculations is free from
this bias, and shows a mass ordering at low $p_{\mathrm{T}}$ and particle type
grouping in the intermediate $p_{\mathrm{T}}$ range. This feature has first
found in large systems but the mass ordering in small systems is different from
what is observed in the large collision systems. | SuJeong Ji, Maxim Virta, Teemu Kallio, SangHoon Lim, Dong Jo Kim | 2023-03-10T09:27:36Z | http://arxiv.org/abs/2303.05806v1 | # Toward an unbiased flow measurements in LHC \(pp\) collisions
###### Abstract
Long-range correlations for pairs of charged particles with two-particle angular correlations are studied in \(pp\) collisions at \(\sqrt{s}=13\) TeV with various Monte Carlo generators. The correlation functions are constructed as functions of relative azimuthal angle \(\Delta\varphi\) and pseudorapidity separation \(\Delta\eta\) for pairs of different particle species, with identified hadrons such as \(\pi\), \(K\), \(p\), and \(\Lambda\), in wide \(\Delta\eta\) ranges. Fourier coefficients are extracted for the long-range correlations in several high-multiplicity classes using a low-multiplicity template fit method. The method allows subtracting the away-side jet fragments that are enhanced in high-multiplicity with respect to low-multiplicity events. However, we found that, due to a kinematic bias on jets and differing model implementations of the flow and jet components, subtracting the non-flow contamination in small systems can bias the results. It is found that in the PYTHIA8 Default model, where no collective flow is expected, this bias results in a very large flow signal. Extracting the flow signal from the EPOS4 and PYTHIA8 String Shoving models is also not possible because of the flow signal introduced in the low-multiplicity events. Only the AMPT String Melting model among the studied model calculations is free from this bias, and it shows a mass ordering at low \(p_{\rm T}\) and particle-type grouping in the intermediate \(p_{\rm T}\) range. This feature was first found in large systems, but the mass ordering in small systems is different from what is observed in the large collision systems.
## I Introduction
Collisions between heavy-ions (HIC) exhibit strong collectivity, as demonstrated by the anisotropy in the momentum distribution of final particles emitted at the Relativistic Heavy Ion Collider (RHIC) [1; 2; 3; 4] and the Large Hadron Collider (LHC) [5; 6; 7]. The spatial anisotropies are converted to anisotropies in the final momentum distribution due to a pressure-driven expansion of the strongly interacting quark-gluon plasma (QGP) formed during the collision event. The produced QGP in HIC is in the strongly coupled regime and the state-of-the-art Bayesian analyses utilizing the experimental data favor small values of the shear viscosity to entropy density ratio (\(\eta/s\)), which implies that the produced QGP is considered the fluid with the lowest shear viscosity to entropy density ratio observed in nature [8; 9]. In Recent years, the primary focus has been to constrain model parameters by measuring sensitive observables, using Bayesian analyses [10; 11; 12; 13; 14; 15].
To probe the collective behavior in the momentum anisotropy, long-range particle correlations are used over a wide range of pseudorapidity. Over the past few years, long-range correlations have also been observed in smaller collision systems, such as high-multiplicity (HM) proton-proton (\(pp\)) collisions [16; 17; 18; 19; 20], proton-nucleus (\(p\)A) collisions [21; 22; 23; 24], and collisions of light ions with heavy ions, such as p+Au, d+Au, \({}^{3}\)He+Au [25; 26]. These observations raise the question of whether small system collisions have a similar underlying mechanism for developing correlations as heavy AA collisions.
On the experimental side, extracting flow in small systems remains challenging due to a strong jet fragmentation bias to the long-range correlations. One commonly used approach for suppressing the non-flow contribution in two-particle correlations is to require a large \(\Delta\eta\) gap between the two particles, which is also applied in cumulant methods [19; 27]. However, this approach only eliminates non-flow contributions on the near side, not on the away side (\(\Delta\varphi\sim\pi\)). To address this limitation, a low-multiplicity template fit (LMTF) method has been proposed to remove away-side contributions as well [28; 29; 30], taking into account the autocorrelation between event multiplicity and jet yields [31]. This method enables the subtraction of enhanced away-side jet yields in HM events compared to low-multiplicity (LM) events, and may potentially provide a lower limit on the event multiplicity needed to observe the flow signal.
The observed number of constituent quark (NCQ) scaling pattern of the elliptic flow at RHIC [32; 33; 34; 35] and LHC [36; 37; 38; 39] in large collision systems often refers to evidence of the creation of a thermalized bulk system of quarks that coalesce into hadrons. Whether these patterns can still be observed in collisions of small systems is a question of great current interest. The observation of NCQ scaling in smaller systems would provide important insights into the underlying physics of the system. An approximate NCQ scaling of charged hadrons' \(v_{2}\) in \(p\)-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV is observed at intermediate \(p_{\rm T}\) with ALICE [40] and also for \(v_{2}\) of \(\pi\) and \(p\) in \({}^{3}\)He+Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV with PHENIX [41]. However, this observation was based on a limited range of \(p_{\rm T}\) with the cumulant methods and further experimental checks are needed to confirm the presence of NCQ scaling over a wider range of \(p_{\rm T}\) with the experimental LMTF method. Additionally, it is important to note that
other effects, such as initial-state fluctuations and final-state correlations, can also contribute to the observed elliptic flow in small systems. Therefore, more detailed studies are needed to understand the interplay of these effects and the possible mechanisms underlying the observed NCQ scaling patterns.
On the theoretical side, systematic mapping of the multiparticle correlations across collision systems by varying sizes is presently underway (see e.g. [42]). The quantitative description of the full set of experimental data has not been achieved yet. A summary of various explanations for the observed correlations in small systems is given in [43; 44; 45].
Another important piece of evidence for a strongly interacting medium in small collision systems would be the presence of jet quenching [46; 47]. However, no evidence of jet quenching has been observed in either HM \(pp\) or \(p\)-Pb collisions [48; 49; 50; 51; 52]. A study with two-particle angular correlations in short-range correlations around (\(\Delta\eta\), \(\Delta\varphi\)) = \((0,0)\) is a good tool for studying jet fragmentations [53].
This report investigates the relationship between jet production and collective phenomena in small systems using various Monte Carlo event generators, such as AMPT [54], PYTHIA8 String Shoving [55; 56], and EPOS4 [57]. Although all three models incorporate both jets and collective flow effects, they differ in their approach to describing collective flow. To determine the suitability of each model for a specific experimental method, we assess the latest flow extraction technique, LMTF, against these models. This paper is organized as follows. First, the model descriptions are given in Sec. II and analysis methods are described in Sec. III. The results from model calculations are presented in Sec. IV. Finally, the results are summarized in Sec. V.
## II Model descriptions
In this study, several Monte Carlo (MC) event generators, such as PYTHIA8, AMPT, and EPOS4, of different characteristics are used to compare the non-flow subtraction results. We generate a few million \(pp\) collision events with each event generator and collect final-state charged particles for further analysis. Here we have a brief description of the event generators.
_PYTHIA8:_ PYTHIA8 is a widely used event generator for high-energy \(pp\) collisions, and it has recently incorporated the capability to simulate heavy-ion collisions. It includes both hard and soft interactions for jets and underlying events, and the default parameter set, called the Monash tune, can reasonably describe the production of soft particles [58]. In the default version, there is no partonic or hadronic interaction, so we do not expect a long-range correlation among produced particles due to a flow contribution. Hence, it has been used to verify methods to estimate the non-flow contribution [59].
_PYTHIA8 String Shoving:_ In PYTHIA8, a model to describe the long-range correlation in HM \(pp\) collisions, called "string shoving", has been implemented as an option [55; 56]. This model introduces a repulsive force between strings, and the interaction can cause a microscopic transverse pressure, giving rise to the long-range correlations. The string shoving approach in PYTHIA8 successfully reproduces the experimental measurements of the long-range near-side (\(\Delta\varphi\sim 0\)) ridge yield in HM \(pp\) events by ALICE [60] and CMS [18]. However, strings produced from hard scatterings are also affected by the repulsive force, which then leads to an observed long-range correlation even in low-multiplicity events [61].
_AMPT:_ Besides several models based on the causal hydrodynamic framework in describing the collective evolution in small collision systems, the AMPT model with string melting [54] can reproduce the flow-like signals by modeling the evolution of medium as a collection of interacting quarks and hadrons [62]. The applicability of fluid-dynamical simulations and partonic cascade models in small systems has been explored in Ref. [63]. In the context of kinetic theory with isotropization-time approximation, the model can smoothly explain the long-range correlations by fluid-like (hydrodynamic) excitations for Pb-Pb collisions and particle-like (or non-hydrodynamic) excitations for \(pp\) or \(p\)-Pb collisions [64; 65; 66].
_EPOS4:_ The EPOS model describes the full evolution of the medium produced in heavy-ion collisions with two parts, called a core and a corona [67]. The core part follows the hydrodynamic expansion, and the corona part is composed of hadrons from string decays. After the hadronization of the core part, the UrQMD model is used to describe hadronic interactions among all hadrons from the two parts. The version called EPOS LHC, which includes a different type of radial flow in the case of a small but very dense system, can successfully describe the long-range correlation in HM \(pp\) events [60]. Recently, a new version of EPOS (EPOS4) has been released to the public. We utilize this framework for the present study.
The summary of the model characteristics is listed in Tab. 1. The PYTHIA8 Default model is used to understand the non-flow contributions. The PYTHIA8 Shoving, AMPT, and EPOS4 models all include both jets and collective flow effects. However, they differ in their mechanisms for describing the collective flow. It is important to note that the applicability of each model to a specific experimental method may depend on various factors, such as the collision system being studied, as well as the specific observables being measured.
\begin{table}
\begin{tabular}{|c|l|c|} \hline Models & Characteristics & Mechanism \\ \hline PYTHIA8 Default & jets and no flow & Ref. [58] \\ \hline PYTHIA8 Shoving & jets and flow & String repulsion [55; 56] \\ \hline AMPT & jets and flow & String melting[54] \\ \hline EPOS4 & jets and Hydro & Core (hydrodynamical) [57] \\ \hline \end{tabular}
\end{table}
Table 1: A list of the models used in this paper.
Therefore, it is important to carefully consider the limitations of each model when interpreting experimental results. For instance, in the study by ALICE [60], both PYTHIA8 Shoving and EPOS4 fail to reproduce the near-side jet yields, with PYTHIA8 Shoving predicting an increasing near-side jet yield with increasing multiplicity, while EPOS4 shows the opposite trend. Regarding the ridge yields, EPOS4 overestimates them, while PYTHIA8 Shoving underestimates them. The ridge yields in low-multiplicity events are similar to those in HM events for EPOS4 and PYTHIA8 Shoving, while they decrease towards low-multiplicity events in the experimental data [68].
## III Analysis Procedure
### Event and particle selections
This analysis uses the same event selection criteria as the ALICE experiments, which require a charged particle in both the V0A and V0C [70; 71] acceptances. The V0A and V0C cover the pseudorapidity ranges \(2.8<\eta<5.1\) and \(-3.7<\eta<-1.7\), respectively. The contribution from diffractive interactions is minimized in these events [69]. Fig. 1 shows the charged-particle density in various \(p_{\rm T}\) intervals. Every model describes the trend of the data well, while the PYTHIA8 String Shoving and AMPT models overestimate the data from the ALICE collaboration [69]. Although the PYTHIA8 String Shoving model largely overestimates the data, its \(p_{\rm T}\) dependence is similar to that of PYTHIA8 Default and EPOS4. The AMPT model, in contrast, shows a different \(p_{\rm T}\) dependence.
The multiplicity percentiles are estimated with V0M, which is the sum of the charged particles in the V0A and V0C acceptances. The event multiplicity of V0M from the different generators is shown in Fig. 2. The PYTHIA8 String Shoving model generates more HM events than the other models. The vertical lines indicate the 0-5%, 5-20%, and 60-100% event multiplicities of the AMPT String Melting events. For the identified flow measurement, \(\pi\), \(K\), and \(p\) for all models, and additionally \(\Lambda\) for the AMPT model, are studied by selecting the particle identification code from the models in the range of \(0.2<p_{\rm T}<6\) GeV/\(c\).
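A minimal sketch of how the V0M estimator and the multiplicity classes could be computed at generator level, assuming per-event arrays of charged-particle pseudorapidities; the variable names and the input format are illustrative assumptions, while the \(\eta\) windows follow the V0A/V0C acceptances quoted above.

```python
import numpy as np

def v0m_multiplicity(etas):
    """Charged particles inside the V0A (2.8 < eta < 5.1) or V0C (-3.7 < eta < -1.7) acceptance."""
    etas = np.asarray(etas)
    in_v0a = (etas > 2.8) & (etas < 5.1)
    in_v0c = (etas > -3.7) & (etas < -1.7)
    return int(np.count_nonzero(in_v0a | in_v0c))

def multiplicity_class_edges(v0m_per_event, percentiles=(0, 5, 20, 40, 60, 100)):
    """V0M thresholds for each multiplicity class; the highest-V0M events form the 0-5% class."""
    return {f"{lo}-{hi}%": (np.percentile(v0m_per_event, 100 - hi),
                            np.percentile(v0m_per_event, 100 - lo))
            for lo, hi in zip(percentiles[:-1], percentiles[1:])}

# v0m = np.array([v0m_multiplicity(ev_etas) for ev_etas in events])  # events: per-event eta arrays
# edges = multiplicity_class_edges(v0m)
```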
### Two-particle angular correlations
Two-particle angular correlations are measured as functions of the relative azimuthal angle (\(\Delta\varphi\)) and the relative pseudorapidity (\(\Delta\eta\)) between a trigger and associated particles
\[\frac{1}{N_{\rm trig}}\frac{{\rm d}^{2}\,N_{\rm pair}}{{\rm d}\Delta\eta{\rm d }\Delta\varphi}=B_{max}\frac{S(\Delta\eta,\Delta\varphi)}{B(\Delta\eta, \Delta\varphi)}\Big{|}_{p_{\rm T,trig,PT,\,assoc}}\,, \tag{1}\]
where the trigger and associated particles are defined for different transverse momentum ranges and different \(\eta\) acceptances of the detectors. The \(N_{\rm trig}\) and \(N_{\rm pair}\) are the numbers of trigger particles and trigger-associated particle pairs, respectively. \(S(\Delta\eta,\Delta\varphi)\) corresponds to the average number of pairs in the same event and \(B(\Delta\eta,\Delta\varphi)\) to the number of pairs in mixed events. \(B_{max}\) represents the normalization of \(B(\Delta\eta,\Delta\varphi)\); dividing \(S(\Delta\eta,\Delta\varphi)\) by \(B(\Delta\eta,\Delta\varphi)/B_{max}\) corrects for the acceptance effects.
Figure 1: Charged-particle pseudorapidity density for four different \(p_{\rm T}\) intervals over a broad \(\eta\) range in several model calculations, compared to the ALICE data [69].
Figure 2: The distribution of the V0M charged particles in the region \(-3.7<\eta<-1.7\) and \(2.8<\eta<5.1\). This is used to determine the event multiplicity classes in \(pp\) collisions at \(\sqrt{s}\) = 13 TeV.
This analysis is performed separately for each of several multiplicity percentiles (0-5%, 0-20%, 20-40%, and 60-100%).
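A schematic numpy implementation of the correlation function of Eq. 1, assuming each event provides arrays of (\(\eta\), \(\varphi\)) for trigger and associated particles; the binning, the simple event-mixing pool, and all variable names are illustrative choices rather than the analysis code used for the results shown here.

```python
import numpy as np

def pair_hist(trig, assoc, deta_bins, dphi_bins):
    """2-D (delta-eta, delta-phi) histogram of all trigger-associated pairs; inputs are (N, 2) arrays of (eta, phi)."""
    deta = trig[:, None, 0] - assoc[None, :, 0]
    dphi = trig[:, None, 1] - assoc[None, :, 1]
    dphi = (dphi + np.pi / 2) % (2 * np.pi) - np.pi / 2        # fold into [-pi/2, 3pi/2)
    h, _, _ = np.histogram2d(deta.ravel(), dphi.ravel(), bins=[deta_bins, dphi_bins])
    return h

def correlation_function(events, deta_bins, dphi_bins, n_mix=5):
    """Per-trigger associated yield of Eq. 1, mixing each event with the next n_mix events."""
    S = np.zeros((len(deta_bins) - 1, len(dphi_bins) - 1))
    B = np.zeros_like(S)
    n_trig = 0
    for i, (trig, assoc) in enumerate(events):
        S += pair_hist(trig, assoc, deta_bins, dphi_bins)
        n_trig += len(trig)
        for j in range(1, n_mix + 1):                          # mixed events for the acceptance correction
            B += pair_hist(trig, events[(i + j) % len(events)][1], deta_bins, dphi_bins)
    ratio = np.divide(S, B, out=np.zeros_like(S), where=B > 0)
    return ratio * B.max() / n_trig
```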
The flow studies using the ALICE detector were carried out using only the particles detected in the TPC detector [69]. However, due to the limited \(\eta\) acceptance of the TPC detector, the study was restricted to the edge of the detector with \(1.6<|\Delta\eta|<1.8\), as well as \(p_{\rm T}>1.0\) GeV/\(c\) to avoid non-flow contributions [69]. To further suppress non-flow contributions, preliminary studies by the ALICE experiment have used the very forward FMD detectors to achieve a large \(\eta\) separation of the correlated particles, up to \(|\Delta\eta|\approx 6\). In this analysis, we use the same combinations of correlations between particles in the TPC and FMD detectors.
Tab. 2 lists the \(\eta\) acceptance and measurable \(p_{\rm T}\) ranges for each detector used in the analysis.
As for the TPC-FMD correlations, the trigger particles are from the TPC detector in various \(p_{\rm T}\) intervals and the associated particles are from FMDA or FMDC in different \(\eta\) ranges with \(p_{\rm T}>0.0\) GeV/\(c\). As for the FMDA-FMDC correlations, both trigger and associated particles come from the FMD detectors with \(p_{\rm T}>0.0\) GeV/\(c\). The \(\Delta\eta\) ranges used for the default analysis with the full \(\eta\) acceptance of all detectors, and four additional wider \(\Delta\eta\) gaps used to further reduce the non-flow contributions, are summarized in Tab. 3.
Fig. 3 and Fig. 4 show the 2-dimensional correlation function for each detector combination with the events from the PYTHIA8 Default and AMPT String Melting models, respectively. Unlike the events from AMPT, which have both flow and jet components, the PYTHIA8 Default events contain particles purely from jets. The peak seen in the short range represents the jet contribution. Even though long-range correlations are already obtained by using the particles in the TPC and FMD, a large jet contamination is still seen. To find a safe long-range region for the analysis, five different long-range windows are selected to study the degree of the jet contamination. The shape and amplitude of the jet peak differ between the models.
In the next section, the details about the LMTF method, which is used for the non-flow subtraction, will be discussed as well as the assumptions of the method.
### Extraction of flow coefficients from the Low-Multiplicity Template Fit Method
Due to the strong jet fragmentation bias in small collision systems it is difficult to extract the flow in these collisions because of the remaining non-flow in the away-side region (\(\Delta\varphi\sim\pi\)) in Eq. 1. As discussed in Refs. [28; 29], the HM correlation function in a HM percentile can be expressed as
\[\begin{split} Y_{\rm HM}(\Delta\varphi)=G\ (1+2v_{2,2}\cos(2\Delta \varphi)\\ +2v_{3,3}\cos(3\Delta\varphi)\\ +2v_{4,4}\cos(4\Delta\varphi))\\ +F\ Y_{\rm LM}(\Delta\varphi)\quad,\end{split} \tag{2}\]
where \(Y_{\rm LM}(\Delta\varphi)\) is the LM correlation function, \(G\) is the normalization factor for the Fourier component up to the fourth harmonic, and the scale factor \(F\) corresponds to the relative away-side jet-like contribution with respect to the low-multiplicity (LM) class (60-100%). This method assumes that \(Y_{\rm LM}\) does not contain a near-side peak originating from jet fragmentation and that the jet shape remains unchanged in HM events compared to LM events. The first assumption is well verified for the experimental data using the selected LM template [30], while the second assumption regarding the modification of jet shapes is tested using the near-side \(\Delta\eta\) distributions. Additionally, the ATLAS Collaboration's study of HM \(pp\) and \(p\)-Pb collisions in Ref. [30] provides further support for this assumption, as there is no evidence of jet quenching in these collisions [48; 49; 50; 51; 52]. The fit determines the scale factor \(F\) and the pedestal \(G\), and \(v_{n,n}\) are calculated from a Fourier transform. It is worth noting that this method does not rely on the zero-yield-at-minimum (ZYAM) hypothesis to subtract an assumed flat combinatorial component from the LM template, as done previously in Refs. [72; 73]. Whether the models agree with the assumption about jet shape modification with event multiplicity will be discussed in Sec. IV.
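A sketch of the template fit of Eq. 2 using scipy, assuming the HM and LM per-trigger yields have already been projected onto \(\Delta\varphi\) on a common binning; the truncation at the fourth harmonic follows the text, while the initial guesses and the absence of error weighting are simplifications.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_lm_template(dphi, y_hm, y_lm):
    """Low-multiplicity template fit of Eq. 2.

    dphi : Delta-phi bin centres
    y_hm : per-trigger yield in the high-multiplicity class
    y_lm : per-trigger yield in the 60-100% (low-multiplicity) class
    Returns the fitted (G, F, v22, v33, v44).
    """
    def model(x, G, F, v22, v33, v44):
        flow = 1 + 2 * (v22 * np.cos(2 * x) + v33 * np.cos(3 * x) + v44 * np.cos(4 * x))
        return G * flow + F * y_lm          # y_lm is evaluated on the same binning as x
    p0 = [0.5 * float(np.mean(y_hm)), 1.0, 0.01, 0.001, 0.0001]
    popt, _ = curve_fit(model, dphi, y_hm, p0=p0)
    return popt

# G, F, v22, v33, v44 = fit_lm_template(dphi_centres, y_hm_0_20, y_lm_60_100)  # hypothetical inputs
```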
Fig. 5 shows the LMTF results of TPC-FMDA correlation for 0-20% multiplicity percentile from the AMPT String Melting configuration. Even with the Default \(\Delta\eta\) gap, no ridge structure on the near side is seen in LM correlation function, which indicates that there is almost no jet contamination. The figure also shows the \(v_{2,2}\) and \(v_{3,3}\) components, yet the \(v_{2,2}\) component is dominant.
The LM templates for each \(\Delta\eta\) gap are shown in Fig. 6. As the jet shape is well described in PYTHIA8 Default, the comparison is done using the PYTHIA8 model. Each template is normalised by its \(\Delta\eta\).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Detector & \(\eta\) acceptance & \(p_{\rm T}\) range \\ \hline TPC & \(|\eta|<0.8\) & \(0.2<p_{\rm T}<6.0\) GeV/\(c\) \\ \hline FMDA & \(1.9<\eta<4.8\) & \(p_{\rm T}>0.0\) GeV/\(c\) \\ \hline FMDC & \(-3.1<\eta<-1.9\) & \(p_{\rm T}>0.0\) GeV/\(c\) \\ \hline \end{tabular}
\end{table}
Table 2: The acceptance of the detectors used for the trigger and/or associated particles.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Correlations & Default & Gap-A & Gap-B & Gap-C & Gap-D \\ \hline TPC-FMDA & [1.1, 5.6] & [1.5, 5.6] & [2.0, 5.6] & [2.5, 5.6] & [3.0, 5.6] \\ \hline TPC-FMDC & [1.1, 3.9] & [1.6, 3.9] & [2.0, 3.9] & [2.5, 3.9] & [3.0, 3.9] \\ \hline FMDA-FMDC & [3.8, 7.9] & [4.3, 7.9] & [4.8, 7.9] & [5.3, 7.9] & [5.8, 7.9] \\ \hline \end{tabular}
\end{table}
Table 3: The \(|\Delta\eta|\) ranges of each correlation function and four additional wider \(\Delta\eta\) gaps used further to reduce the non-flow contributions.
A decreasing near-side yield is seen with increasing \(\Delta\eta\) gap (from the Default gap to gap-D), and almost the same feature is seen in gap-C and gap-D. Under the first assumption of the template fit method, which requires no near-side yield in the low-multiplicity events, we selected gap-D for the precise analysis. To see if the other models meet the assumption, the LM templates of each model are compared in gap-D.
The comparison between the LM templates of each model in the Default \(\Delta\eta\) gap is shown in Fig. 7. As the near-side yield in the LM events comes from jets, there should be no near-side ridge yield for a precise non-flow subtraction. The presence of the LM jet bias indicates that there is a chance of jet shape modification on the away side. The ratios are calculated with respect to the AMPT String Melting model for the PYTHIA8 Default, PYTHIA8 String Shoving, and EPOS4 models. PYTHIA8 Default shows a small near-side yield and String Shoving shows a larger yield, whilst there is no ridge yield from the AMPT String Melting and EPOS4 models. In the case of the away-side yield, a fairly broad shape is seen in the AMPT String Melting version and a narrow shape in EPOS4 compared to both PYTHIA8 configurations.
However, we cannot test whether the models satisfy the second assumption, which requires no jet shape modification with event multiplicity. As every model apart from PYTHIA8 Default contains flow components on the away side, we cannot disentangle the flow and jet contributions.
Finally, \(v_{n}\) are extracted, based on the observed factorization of \(v_{n,n}\) to single harmonics [28; 29], using the following equation,
\[v_{n}(p_{\rm T,TPC})=\sqrt{\frac{v_{n,n}^{\rm TPC-FMDA}\cdot v_{n,n}^{\rm TPC -FMDC}}{v_{n,n}^{\rm FMDA-FMDC}}}, \tag{3}\]
where \(v_{n,n}(p_{\rm T,trig},p_{\rm T,assoc})\) are measured in \(0.2<p_{\rm T,trig}<6\) GeV/\(c\) and in integrated \(p_{\rm T}\) ranges.
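The factorization of Eq. 3 then reduces to a one-line helper; the argument names below are illustrative, and the inputs are the \(v_{n,n}\) values obtained from the three detector combinations.

```python
import numpy as np

def vn_from_factorization(vnn_tpc_fmda, vnn_tpc_fmdc, vnn_fmda_fmdc):
    """v_n(p_T) via Eq. 3; inputs may be scalars or arrays over p_T bins."""
    return np.sqrt(vnn_tpc_fmda * vnn_tpc_fmdc / vnn_fmda_fmdc)

# v2_pt = vn_from_factorization(v22_tpc_fmda, v22_tpc_fmdc, v22_fmda_fmdc)
```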
Figure 5: The low-multiplicity template fit results. The black markers show the signal for the 0-20% multiplicity percentile together with its fit shown as a blue band. The red squares correspond to the low-multiplicity template. The orange and green curves correspond to the extracted \(v_{2}\) and \(v_{3}\) signals, respectively. (See also Fig. 12 in the Appendix for the different models.)
Figure 6: The \(\Delta\eta\) gap dependent LM templates with PYTHIA8 Default.
Figure 7: The LM template for the different model calculations using the Default gap.
Results
### Unidentified charged hadron flow
The \(p_{\rm T}\)-differential \(v_{2}\) of the charged particles for different \(\Delta\eta\) gap intervals in \(pp\) collisions at \(\sqrt{s}\) = 13 TeV is shown in Fig. 8 for several model calculations. The top panels show the final \(v_{2}\), and the bottom two rows of panels show \(v_{2,2}\) measured from TPC-FMDA and TPC-FMDC, respectively. The results for PYTHIA8 Default are shown in the first column, PYTHIA8 String Shoving in the second, EPOS4 in the third, and AMPT String Melting in the last. Even though PYTHIA8 Default does not contain any flow component, a non-zero \(v_{2}\) is seen for every \(\Delta\eta\) gap. As the \(\Delta\eta\) gap becomes larger, less of the non-flow dominated region is included, as shown in Fig. 3; therefore, a smaller amplitude of \(v_{2}\) is seen with increasing \(\Delta\eta\) gap. Despite having both flow and non-flow components, PYTHIA8 String Shoving shows a behaviour similar to PYTHIA8 Default, with an overall smaller magnitude of the flow component. This can be due to the presence of the near-side yield in the low-multiplicity events, which can be seen in the template fit results (see Fig. 12 in Appendix). In the case of EPOS4, which also includes flow components, smaller magnitudes of \(v_{2}\) and \(v_{2,2}\) are seen compared to both PYTHIA8 configurations, with a \(p_{\rm T}\) and \(\Delta\eta\) gap dependence similar to PYTHIA8. Lastly, the AMPT String Melting model shows that in the low-\(p_{\rm T}\) region \(v_{2}\) does not vary much with the \(\Delta\eta\) gap selection. However, \(v_{2}\) increases with increasing \(\Delta\eta\) gap, unlike in the other models; this is mostly driven by the fact that TPC-FMDC is affected by the jet contamination for smaller \(\Delta\eta\) gap selections, as seen in the bottom panel for AMPT (see the correlation function). In the low-\(p_{\rm T}\) region \(v_{2}\) increases by 50%, and in the high-\(p_{\rm T}\) region by a factor of two (see Fig. 14 in Appendix). Since the largest \(\Delta\eta\) gap has the smallest contribution from non-flow, in the latter sections only results from the AMPT String Melting model with gap-D will be shown.
### Identified charged hadron flow
Fig. 9 shows the \(v_{2}\) of the identified charged particles in 0-20% and 20-40% events with the AMPT String Melting model. A grouping of \(v_{2}\) is seen depending on the particle species, in particular on whether the particle is a meson or a baryon, in the 0-20% events. In the case of the 20-40% events, the mass splitting is not clearly seen, mostly due to the lack of statistics. Also, as a smaller \(v_{2}\) is seen in 20-40% than in 0-20% events, we further studied the multiplicity dependence of \(v_{2}\).
Fig. 10 shows the dependence of \(v_{2}\) on transverse kinetic energy, normalized by the number of quark constituents (\(n_{q}\)), using the AMPT String Melting model. The transverse kinetic energy, \(\rm{KE_{T}}\), is defined as \(\rm{KE_{T}}=m_{\rm{T}}-m_{0}\), where \(m_{\rm{T}}=\sqrt{m_{0}^{2}+p_{\rm{T}}^{2}}\) is the transverse mass.
The observed variation of flow with particle species arises from the hydrodynamic pressure gradient, which is dependent on the particles' mass. As \(\rm{KE_{T}}\) is directly related to the pressure gradient, we measured the transverse kinetic energy-dependent flow. We normalized the \(v_{2}\) and \(\rm{KE_{T}}\) by the number of quark constituents, as the number of quarks in a particle varies by its type. While previous data from large collision systems at LHC show that the flow coefficients approximately lie on a line regardless of the particle species [36; 37; 38; 39], the AMPT results in \(pp\) collisions show some deviation from the scaling in both 0-20% (left) and 0-5% (right) events. Experimental results obtained with the LMTF method over a wider range of \(p_{\rm{T}}\) will provide further insight into the presence of NCQ scaling in small system collisions.
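A small sketch of the NCQ-scaled quantities discussed here, assuming the \(p_{\rm T}\)-differential \(v_{2}\) per species has already been extracted; the mass table (in GeV/\(c^{2}\)) and quark-content map are standard, while the array names are illustrative.

```python
import numpy as np

MASS = {"pi": 0.1396, "K": 0.4937, "p": 0.9383, "Lambda": 1.1157}   # GeV/c^2
N_QUARK = {"pi": 2, "K": 2, "p": 3, "Lambda": 3}

def ncq_scaled(species, pt, v2):
    """Return (KE_T / n_q, v2 / n_q) with KE_T = m_T - m_0 and m_T = sqrt(m_0^2 + p_T^2)."""
    m0, nq = MASS[species], N_QUARK[species]
    ket = np.sqrt(m0**2 + np.asarray(pt) ** 2) - m0
    return ket / nq, np.asarray(v2) / nq

# x_pi, y_pi = ncq_scaled("pi", pt_bins, v2_pi)   # one curve per species in Fig. 10
```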
Figure 10: The NCQ-scaled \(m_{\rm{T}}\)-dependent \(v_{2}\) for different particle species in 0-20% (left) and 0-5% (right) high-multiplicity percentiles in \(pp\) collisions at \(\sqrt{s}\) = 13 TeV from the AMPT String Melting model calculations.
Figure 9: The \(p_{\rm{T}}\)-differential \(v_{2}\) for different particle species in 0-20% and 20-40% multiplicity percentiles in \(pp\) collisions at \(\sqrt{s}\) = 13 TeV from the AMPT String Melting model calculations.
### Multiplicity dependent flow
In Fig. 11, we present the magnitude of \(v_{2}\) as a function of multiplicity for various particle species in two \(p_{\rm T}\) ranges. The \(|\Delta\eta|\) range considered is \(>3\), and \(v_{2}\) is shown for \(0.8<p_{\rm T}<1.3\) GeV/\(c\) and \(1.3<p_{\rm T}<1.8\) GeV/\(c\). Firstly, we observe that the magnitude of \(v_{2}\) increases with increasing multiplicity for both \(p_{\rm T}\) ranges, regardless of the particle type. Secondly, \(v_{2}\) decreases towards lower multiplicities and starts to saturate at a multiplicity of around 50. While the AMPT String Melting model shows a linear multiplicity dependence, the experimental results reported in Refs. [17; 28; 29] show a mild decrease towards low multiplicity events.
In the case of the higher \(p_{\rm T}\) range shown in the bottom panel of Fig. 11, we observe that the multiplicity dependence of charged hadrons differs from that of identified mesons in the first two multiplicity bins. Interestingly, baryons do not show this saturation yet in those multiplicity ranges, within the uncertainties. Furthermore, the ordering in the \(v_{2}\) magnitudes between different particle species is visible, as discussed in the previous section. For both \(p_{\rm T}\) ranges, the magnitudes of \(v_{2}\) are clearly separated between mesons and baryons in higher multiplicities.
## V Conclusions
We extracted flow coefficients for various identified particle species, including \(\pi\), \(K\), \(p\), and \(\Lambda\), using multiple MC generators and detector combinations in wide \(\Delta\eta\) ranges for \(pp\) collisions at \(\sqrt{s}\) = 13 TeV. The flow measurements were obtained through long-range correlations in different high-multiplicity classes by employing the LMTF method. This approach enabled us to eliminate the enhanced away-side jet fragments in high-multiplicity events relative to low-multiplicity events. However, we found that subtracting non-flow contamination in small systems could lead to biased results, due to the kinematic bias on jets and different model implementations of flow and jet components. Specifically, we observed that the PYTHIA8 Default model, which does not account for collective flow, produces results biased towards a large flow. Moreover, it was not possible to extract flow signals from the EPOS4 and PYTHIA8 Shoving models, which contain flow components, as they violate the assumptions of the template fit method by containing a near-side yield in low-multiplicity events. We conducted studies of the LM-template method in multiple \(\Delta\eta\) gaps and found that the current ALICE \(\eta\) acceptance might still be influenced by non-flow contamination, suggesting the need for larger \(\Delta\eta\) gaps in future analyses. Only the AMPT String Melting model among the studied models was free from this bias; it showed a mass ordering at low \(p_{\rm T}\) and particle-type grouping in the intermediate \(p_{\rm T}\) range, reminiscent of what is observed in large systems. However, this ordering was quite distinct from that seen in large systems.
###### Acknowledgements.
We thank Klaus Werner, Christian Bierlich and Zi-Wei Lin for fruitful discussions with their model calculations. We acknowledge CSC - IT Center for Science in Espoo, Finland, for the allocation of the computational resources. MV, TK, and DJK are supported by the Academy of Finland, the Centre of Excellence in Quark Matter (project 346328). SJ and SHL are supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) under Contract No. 2020R1C1C1004985. We also acknowledge technical support from KIAF administrators at KISTI.
|
2304.10512 | "Can We Detect Substance Use Disorder?": Knowledge and Time Aware
Classification on Social Media from Darkweb | Opioid and substance misuse is rampant in the United States today, with the
phenomenon known as the "opioid crisis". The relationship between substance use
and mental health has been extensively studied, with one possible relationship
being: substance misuse causes poor mental health. However, the lack of
evidence on the relationship has resulted in opioids being largely inaccessible
through legal means. This study analyzes the substance use posts on social
media with opioids being sold through crypto market listings. We use the Drug
Abuse Ontology, state-of-the-art deep learning, and knowledge-aware BERT-based
models to generate sentiment and emotion for the social media posts to
understand users' perceptions on social media by investigating questions such
as: which synthetic opioids people are optimistic, neutral, or negative about?
or what kind of drugs induced fear and sorrow? or what kind of drugs people
love or are thankful about? or which drugs people think negatively about? or
which opioids cause little to no sentimental reaction. We discuss how we
crawled crypto market data and its use in extracting posts for fentanyl,
fentanyl analogs, and other novel synthetic opioids. We also perform topic
analysis associated with the generated sentiments and emotions to understand
which topics correlate with people's responses to various drugs. Additionally,
we analyze time-aware neural models built on these features while considering
historical sentiment and emotional activity of posts related to a drug. The
most effective model performs well (statistically significant) with
(macroF1=82.12, recall =83.58) to identify substance use disorder. | Usha Lokala, Orchid Chetia Phukan, Triyasha Ghosh Dastidar, Francois Lamy, Raminta Daniulaityte, Amit Sheth | 2023-04-20T17:47:13Z | http://arxiv.org/abs/2304.10512v1 | _Can We Detect Substance Use Disorder?"_: Knowledge and Time Aware Classification on Social Media from Darkweb
###### Abstract
Opioid and substance misuse is rampant in the United States today, with the phenomenon known as the "opioid crisis". The relationship between substance use and mental health has been extensively studied, with one possible relationship being: substance misuse causes poor mental health. However, the lack of evidence on the relationship has resulted in opioids being largely inaccessible through legal means. This study analyzes the substance use posts on social media with opioids being sold through crypto market listings. We use the Drug Abuse Ontology, state-of-the-art deep learning, and knowledge-aware BERT-based models to generate sentiment and emotion for the social media posts to understand users' perceptions on social media by investigating questions such as: which synthetic opioids people are optimistic, neutral, or negative about? or what kind of drugs induced fear and sorrow? or what kind of drugs people love or are thankful about? or which drugs people think negatively about? or which opioids cause little to no sentimental reaction. We discuss how we crawled crypto market data and its use in extracting posts for fentanyl, fentanyl analogs, and other novel synthetic opioids. We also perform topic analysis associated with the generated sentiments and emotions to understand which topics correlate with people's responses to various drugs. Additionally, we analyze time-aware neural models built on these features while considering historical sentiment and emotional activity of posts related to a drug. The most effective model performs well (statistically significant) with \((macroF1=82.12\), \(recall=83.58)\) to identify substance use disorder.
Dark Web, Crypto market, Substance Use, Ontology, Social Computing, Social Media.
## I Introduction
North America is facing the worst opioid epidemic in its history. This epidemic started with the mass diversion of pharmaceutical opioids (e.g., Oxycodone, Hydromorphone), resulting from the strong marketing advocacy of the potential benefits of opioids [1]. The increase in opioid use disorder prevalence and pharmaceutical opioid-related overdose deaths resulted in a stricter distribution of pharmaceutical opioids, unintentionally leading to a dramatic increase in heroin usage among pharmaceutical opioid users [2]. The epidemic entered its third wave when novel synthetic opioids (e.g., fentanyl, U-47,700, carfentanil) emerged on the drug market. Several recent research studies and reports point at the role of crypto markets in the distribution of emerging Novel Psychoactive Substances (NPS) [3, 4]. The importance of crypto markets has been further exacerbated by the spillover effects on mental health and anxiety resulting from the ongoing Covid-19 pandemic: recent results from the Global Drug Survey suggest that the percentage of participants who have been purchasing drugs through crypto markets has tripled since 2014, reaching 15 percent of the 2020 respondents [5].
In this study, we assess social media data from active opioid users to understand the behaviors associated with opioid usage and to identify what types of feelings are expressed. Substance use disorder (SUD) in social media posts is defined as a post that shows the risk of substance use, attitudes, and behavior related to substance use, as well as the corresponding social and environmental factors [6]. We employ deep learning models to perform sentiment and emotion analysis of social media data with the drug entities derived from crypto markets. We implemented state-of-art sentiment and emotion models for social media data. Also, we performed topic analysis to extract frequently discussed opioid-related topics in social media. For preliminary analysis, we examined temporal variations in topics that differentiate between posts at each drug level and topics over time across all the years, followed by considering data per quarter for each year. We also analyzed how users' language in their posts varies temporally by topic. We also observed variations in emotions and sentiment that differentiate between posts containing expressions of SUD. For this task, we finetuned a pre-trained transformer language model for emotions and sentiments and used it to automatically extract the emotions and sentiments for all the historical posts related to a drug and analyzed variations in sentiment and emotion over time.
We further aim to achieve the identification of SUD on social media by examining the core research question of this study: **Can we differentiate between posts containing expressions of substance misuse or not with temporal activity, emotion, sentiment, and language features related to that drug?** We build a knowledge aware bi-directional sequential neural model that differentiates between posts where expressions of SUD are present versus those posts where it is absent.
**Findings and Contributions** The major contributions and findings of this work are as follows:
1. We compile a high-quality, rare, challenging, and valuable dark web dataset (eDark) by crawling four crypto markets namely Dream, Tochka, Agora, and Wall Street. The dataset is available for release upon acceptance.
2. We propose an end-to-end architecture \(D2S\) (Dark web to Social Media) for harnessing social media trends for opioid listings found on the crypto market. It involves Crawling Techniques, Drug Identification, Data Collection and Processing from social media, and Computational Models to predict SUD considering the temporal variations in sentiment and emotional language among posts indicative of SUD. We also contribute a knowledge- and historic-posts-aware sequential neural model that can differentiate whether SUD is present or absent for a drug based on these variations by factoring in the relative time difference between historical posts. We show that knowledge-, sentiment-, and emotion-aware models outperform language-feature-based approaches, as demonstrated through performance measures, an ablation study, and error analysis.
3. To the best of our knowledge, our work is the first one to detect SUD in social media posts considering the above factors and as a reflection of the opioid listings extracted from the dark web. Resources created as a part of the study will be made available upon request to the corresponding author upon acceptance. The resources include emotion, sentiment, and SUD-labeled dataset with timestamps for each drug type, and the \(eDark\) dataset.
## II Related Work
**Darkweb Marketplaces:** The darkweb serves as a favorable and promising market for illegitimate goods ranging from drugs to weapons [7, 8, 9]. Elbahrawy et al. [10] investigated the market dynamics of dark web markets based on a unique dataset of Bitcoin transactions. They have also analyzed how the market ecology restructures itself once a market closes. As traditional web scraping tools have failed to remove the veil of the vendors of dark marketplaces, Hayes et al. [11] proposed an automated framework to overcome this barrier. The suggested framework was further evaluated by gathering information from 3000 sellers on a dark marketplace. Harviainen et al. [12] presented an analysis of the patterns in which buyers and sellers expose themselves on Sipulitori (a Finnish darkweb drug trading market). Hassio et al. [13] extended research on Sipulitori by exploring it from the viewpoint of understanding the needs behind the messages posted by users and the physiological and cognitive factors that come into play. Researchers examined the underground marketplaces Agora and Dream Market to examine fluctuations in the availability of fentanyl, fentanyl analogs, and other illegal opioids in connection to overdose fatalities [14]. Orsolini et al. [15] provided intuition behind darkweb drug marketplaces through the perspective of psychiatrists so that they can be equipped with adequate information for providing countermeasures to the addiction arising from drugs available through it. The prior works on analyzing darkweb marketplaces suggest the potential of such data for detecting real-world trends. In the next part of the related work, we discuss how time series analysis on social media helps to quantify such trends.
**Time Series Analysis on Social Media:** Earlier research has demonstrated the use of time series analysis on social media data, such as for comprehending changes in the sentiment of public perceptions, which can be beneficial to government and commercial organizations [16], and for understanding the sentiments of users of popular smartphone applications such as PUBG and TikTok [17]. Time series analysis has also been used in research pertaining to mental health [18], such as variations in the mental health of individuals throughout the COVID-19 lockdown phase [19].
Fig. 1: Proposed Architecture \(D2S\) for Harnessing Social Media trends for listings found on Crypto market
Over time, topic analysis and sentiment analysis have been used to deepen the understanding of online retail customer behavior from tweets [20]. Researchers have employed time series analysis to analyze bursts of activity in social networks, and for its prediction, they used LSTM (Long-Short-Term-Memory) network-based model [21]. SAGE (Sparse Additive Generative Model), a topic analysis tool, was used to assess the temporal linguistic changes in tweets with and without evidence of self-harm. Furthermore, they explored temporal linguistic features of tweets with and without suicidal intent signs [22]. A transformer-based model was also proposed for suicidal ideation detection in social media that takes into consideration the temporal context [23].
**Substance Use Analysis on Social Media:** Several researchers have explored social media analysis for different investigations of drug use. These works have analyzed the content, sentiment, and emotion of drug-related data collected from social media platforms like Twitter and Instagram. Lossio et al. [24] worked on a large amount of opioid-related data collected from Twitter to gain an overall understanding of drug-related discussions on Twitter, behavior related to drug consumption, drugs co-used, and also street terms for various drugs. This study reaffirmed that Twitter has a huge corpus of data and could provide insights into its correlation with pain management and alcohol consumption. A similar study by Cherian et al. [25] was conducted on Instagram data on the misuse of codeine. The temporal data collected related to codeine misuse showed its interconnection with alcohol and soda consumption. The influence of social media in propagating this imagery increases the risk of normalizing drug use to extremes. Kim et al. [26] further explored how big data can be utilized to understand drug use and addiction better. Social media is a huge platform for monitoring prescription drug use and addiction using linguistic and behavioral cues. The work done by Lokala et al. [14] investigates the relation between the availability of fentanyl-related drugs on crypto markets on the dark web and overdoses of fentanyl. Time-lagged correlation analysis was done between fentanyl-related drugs from the crypto market and overdoses of fentanyl in this first-of-its-kind study for epidemiological surveillance. Sarker et al. [27] investigated various opioid-related sub-reddits to better understand the differences in conversations concerning prescription/illegal opioids and access to SUD treatment during the Pre-Covid-19 and Covid-19 periods. They also noticed a rise in opioid withdrawal discussions during Covid-19. Posts from various subreddits related to opioids (both medication and illicit) were collected for identifying the increase in the use of stimulants among opioid users and individuals suffering from opioid use disorder [28]. This further corresponds to the increasing number of casualties because of opioid and stimulant overdoses. Desrosiers et al. [29] reported the persistence of negative sentiments in the conversations of individuals with increased drug use severity. Liu et al. [30] outlined the presence of positive emotion in Facebook posts of individuals who underwent SUD treatment for a longer period of time than those who stopped their therapy. A study was also conducted by Singh et al. [31] to probe the sentiment patterns of tweets related to SUD before and during the Covid-19 pandemic. Cameron et al. [32] described the development of a semantic web platform called PREDOSE for harvesting data related to prescription drug use from social media platforms. Supporting several types of content analysis, PREDOSE provides easy access to data for drug use research. Fan et al. [33] introduce a new framework called AutoDOA to detect drug addiction behavior from Twitter. This will aid in understanding patterns of drug use and addiction. Eshleman et al. [34] discussed how social media can be leveraged for drug recovery. Using linguistic patterns and machine learning algorithms to identify groups of people who are more likely to participate in the drug recovery process would be an important step in managing the drug addiction epidemic.
Our work aims to build an end-to-end system where we see the reflection of the dark web on social media in terms of trends, sentiment, emotion, and substance use context, which is necessary for timely public health intervention.
## III Data Collection
This section presents the Crawling Techniques, Drug Identification, and Data Collection modules proposed in the \(D2S\) architecture, as shown in Figure 1.
### _eDark Data collection_
Concerning the dark web data, four crypto markets, Agora, Dream Market, Tochka, and Wall Street, were periodically crawled between June 2014 and January 2020. Over 82,000 opioid-related listings were collected to extract posts about fentanyl, fentanyl analogs, and other non-pharmaceutical synthetic opioids in the crypto market. Data sources include four different crypto markets. Further, we discuss the \(eDark\) dataset summary and a description of the crypto markets in this section.
### _Dark Web Data (eDark) Collection and Summary_
1. **Dream Market**: The market was established in late 2013. Prior to 2017, Dream Market was the largest darknet market in the world after AlphaBay. However, once AlphaBay went down in 2017, Dream Market quickly became the largest darknet market in the world [35]. Between November 2014 and April 2019, there were 261 withdrawals from the market in total. During this time, the market saw transactions worth over 197,000 dollars [36].
2. **Tochka**: The market started operating in 2015. It is a fairly modest market that mostly operates in North America and Europe. More than 3,621 items, including pharmaceuticals, malware, and other products, are sold on the website. The market changed its name to the Point market and is currently open [37]. Between November 2014 and April 2019, there were a total of 2,990 withdrawals from the market.
3. **Wall Street**: The market featured a site for the sale of illegal substances, weapons, hacking tools, and stolen login information. However, the market was hit by an exit scam in April 2019 [38]. The administrators allegedly stole approximately 30 million dollars' worth of XMR and bitcoins from vendor accounts by switching the site into maintenance mode and transferring the clients' funds [39]. The market was later shut down in May 2019. Before being taken over in May 2019 by the German Federal Criminal Police, Wall Street was the second-largest darknet market in the world. In total, 7,755 withdrawals were made from the market between November 2014 and April 2019. During this time, there were almost 18,000 dollars worth of transactions on the market [36].
4. **Agora**: Agora Market was a darknet market in operation from September 2013 to August 2015 that sold illegal narcotics and controlled substances, drugs, counterfeit and fraud-related goods, services, and other illegal contraband. The data for Agora for the period June 2014 to September 2015 is obtained from the Grams dataset [9]. Agora was chosen because it was one of the largest crypto markets that emerged after the FBI shut down Silk Road [40]. The summary of the dark web dataset is shown in Table I.
The sample product page of Dream Market is shown in Figure 2. The Scrapy framework was used to create a custom web crawler for each market, circumventing the security protections built into these markets. To get past these security safeguards, it uses specialized Scrapy downloader middleware. By creating a Linux virtual machine on AWS running the Tor daemon and Privacy, the custom crawler was able to reach the Deep Web. The outputs of the crawler are unaltered HyperText Markup Language (HTML) files used for drug advertising. The University's Information Security Office evaluated and approved the data extraction, storage, and access processes, which all adhered to stringent security standards. The information extracted from the data included the following: the product name provided by the vendor, the vendor screen pseudonym, the number of sales made by the vendor and their level of trust, the drug name(s), drug category, the information the vendor provided about the product, the unit, the quantity in stock, the price (in Bitcoin and US dollars), the price per volume, the country/region of origin, the destination country/region, and the security precautions for transactions. We further used custom-built Named Entity Recognition (NER) to extract substance names, product weight, price of the product, shipment information, availability, and administration route, as shown in Table II. The NER algorithm consists of three key components: (1) the Natural Language ToolKit (NLTK), which is used to curate and process text portions from the crawled data; (2) the Drug Abuse Ontology (DAO), which serves as a conceptual framework for interconnecting groups of drug-focused lexicons, to produce a list of items to be identified; and (3) Regular Expressions, i.e., sequences of symbols and characters creating patterns that can be searched in text, constructed using the DAO-selected entities to extract the items of interest.
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline \hline
**Data** & **Dream** & **Tochka** & **Wall** & **Agora** \\ & & & **Street** & \\ \hline \# Vendors & 3456 & 765 & 876 & 910 \\ \hline \# Substances & 2862 & 679 & 765 & 821 \\ \hline \# Locations & 436 & 62 & 37 & 214 \\ \hline USD Worth & \$ 197k & \$ 5,072 & \$ 18k & \$ 220k \\ \hline \# Withdrawals & 262 & 2990 & 7755 & 844 \\ \hline \hline \end{tabular}
\end{table} TABLE I: eDark Summary, USD Values and Withdrawals are approximated to nearest value.
Fig. 2: Data Source of \(eDark\): A Sample Product Listing page from Dream Crypto Market
### _Named Entity Recognition_
Extracted data included features like product name, vendor screen name (vendor name), drug category, product description, price (Bitcoin or US Dollar), country/region of origin and destination, how to administer the drug, shipping information, and others. We used a pre-trained NER deep learning (NER DL) approach [41] on the crypto market data to identify drug entities; it uses a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. The entities are then matched to a superclass using the Drug Abuse Ontology (DAO) [42], which acts as a domain-specific resource with all superclasses related to the entities. DAO is a domain-specific knowledge source containing drug- and health-related classes, properties, relationships, and instances. Apart from medical terms, it includes concepts of mental health disorders and symptoms aligned with the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition) scale. DAO is a domain-specific conceptual framework for interconnecting sets (named "classes") of drug-focused lexicons. One of the key benefits of using an ontology-enhanced semantic approach is the ability to identify all variants of a concept in data (e.g., generic names, slang terms, scientific names). The DAO contains names of psychoactive substances (e.g., heroin, fentanyl), including synthetic substances (e.g., U-47,700, MT-45), brand and generic names of pharmaceutical drugs (e.g., Duragesic, fentanyl transdermal system), and slang terms (e.g., roxy, fent). It also contains information regarding the route of administration (e.g., oral, IV), unit of dosage (e.g., gr, gram, pint, tablets), physiological effects (e.g., dysphoria, vomiting), and substance form (e.g., powder, liquid, hcl). Initially, it was used to determine user knowledge, attitudes, and behaviors related to the non-medical use of buprenorphine and other illicit opioids through analysis of web forum data. Later, this ontology evolved to understand trends of drug use in the context of changing legalization policies in the USA. It also proved effective in capturing trends in the availability of novel synthetic opioids through analysis of crypto market data. DAO is defined utilizing a common ontology methodology known as 101 ontology development. The 101 technique entails the following steps: 1. establishing the ontology's domain and scope; 2. reusing prior knowledge; 3. enumerating key terms in the ontology; 4. defining classes and their properties; and 5. producing instances of the classes. A collection of techniques and best practices accepted by the Semantic Web community and by the AI community working on natural language processing was used to assess the ontology's quality. Protege is the most widely used tool for creating ontologies [43]; hence, the metrics list the numbers for its structures and representation. The DAO ontology metrics are evaluated as shown in Table V.
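A simplified sketch of the lexicon- and regex-based part of this pipeline, assuming a small DAO-derived dictionary that maps surface forms (brand, generic, and slang names) to drug superclasses; the terms, classes, and dosage pattern below are illustrative fragments, not the full DAO or the exact NER DL model used in the study.

```python
import re

# Illustrative fragment of a DAO-derived lexicon: surface form -> drug superclass.
DAO_LEXICON = {
    "heroin": "Heroin", "fent": "Fentanyl", "duragesic": "Pharmaceutical Fentanyl",
    "u-47700": "Synthetic Heroin", "roxy": "Oxycodone", "kratom": "Kratom",
}
DRUG_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, DAO_LEXICON)) + r")\b", re.I)
# Longer units listed first so "gr"/"gram" are preferred over a bare "g".
DOSAGE_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(mg|grams?|gr|g|tablets?)", re.I)

def extract_listing_entities(text):
    """Map drug mentions in a listing title/description to DAO superclasses and pull dosage units."""
    drugs = {DAO_LEXICON[m.group(1).lower()] for m in DRUG_PATTERN.finditer(text)}
    dosages = [(float(qty), unit.lower()) for qty, unit in DOSAGE_PATTERN.findall(text)]
    return {"superclasses": sorted(drugs), "dosages": dosages}

# extract_listing_entities("50 Gr ***** Heroin AAA+ With Spots Free Shipping")
# -> {'superclasses': ['Heroin'], 'dosages': [(50.0, 'gr')]}
```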
The OWL (Web Ontology Language) representation of DAO is presented in Figure 3. In this study, we leverage DAO to identify 90 drug entities, which we then broadly classify into eight categories by mapping each entity to a super drug class in DAO. The eight broad categories considered are Heroin, Synthetic Heroin, Pharmaceutical Fentanyl, Non-Pharmaceutical Fentanyl, Fentanyl, Oxycodone, Kratom, and Opium (chosen as per data available in each category on social media). The categorization of the five types of opioid listings containing specific types and subclasses identified using DAO is shown in Table III.
### _Identifying Substance Use Discussions on Social Media_
We crawl the data using a carefully curated lexicon extracted from DAO consisting of around 120 terms (slang names, brand names, drug names, street names, marketing names, commonly used names, abbreviations) of those 8 drug categories. Utilizing the compiled list, we collect 290,458 opioid
\begin{table}
\begin{tabular}{|c|c|} \hline
**Property Name** & **Crypto market Listing Information** \\ \hline Has Product Name & 50 Gr ***** Heroin AAA+ With Spots Free Shipping \\ \hline Is Substance & Heroin \\ \hline Has Class & Opiate \\ \hline Has Dosage & 1.5 gram \\ \hline Has Quantity & 50 gram \\ \hline Has Vendor & BulkBignade \\ \hline Has Price & BTC 0.0444 \\ \hline Ships To & Worldwide \\ \hline Ships From & Germany \\ \hline \end{tabular}
\end{table} TABLE II: Sample of property types in \(eDark\) identified from crypto market product listing
Fig. 3: The OWL (Web Ontology Language) representation of Drug Abuse Ontology (\(DAO\))
related posts from six SubReddits using custom-built crawlers, which we call the Substance Use Disorder corpus (**SUDS**). The six SubReddits chosen for data collection are r/DrugNerds, r/researchchemicals, r/opiates, r/heroin, r/suboxone, and r/OpiatesRecovery. The SubReddit sources are mentioned in Appendix A. The SubReddit corpus is spread over different drug categories such as Heroin (136,745), Kratom (77,443), Fentanyl (36,166), Oxycodone (25,890), Opium (9,675), Non-Pharmaceutical Fentanyl (2,798), Pharmaceutical Fentanyl (876), and Synthetic Heroin (865). Additionally, to build the social media emotion analysis model, we collected 151,563 posts from Twitter using the Twitter API with the same lexicon we used for the SubReddit crawl. We applied TF-IDF over unigrams, bigrams, and trigrams to identify topics in each SubReddit, as shown in Table IV. We also conducted a topic analysis using the BERTopic [44] model for all drugs over time from 2015 to 2020, as shown in Figure 5.
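As an illustration of the TF-IDF step, the sketch below scores unigram-to-trigram terms per SubReddit with scikit-learn; the toy `posts_by_subreddit` dictionary is a placeholder for the SUDS corpus rather than the pipeline used in the study.

```python
# Sketch of TF-IDF over unigrams/bigrams/trigrams to surface characteristic terms
# per SubReddit; each SubReddit is treated as one document.
from sklearn.feature_extraction.text import TfidfVectorizer

posts_by_subreddit = {
    "opiates_recovery": ["quit cold turkey last week", "withdrawal anxiety is brutal"],
    "research_chemicals": ["anyone tried this new psychoactive", "pyrovalerone dosage question"],
}

vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
docs = [" ".join(posts) for posts in posts_by_subreddit.values()]
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

for name, row in zip(posts_by_subreddit, tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:5]
    print(name, [term for term, score in top if score > 0])
```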
## IV Methods
In this section, we build upon the previously collected data to create BERT-based Sentiment, Emotion, and SUD models. We leverage those models to predict whether a post indicates substance use disorder present (SUDP) or substance use disorder absent (SUDA) while considering the history of the post. We applied stratified random sampling [45] to identify a sample population that best represents all the features of interest and ensures that every data subgroup is represented, thus avoiding potential bias in the several datasets we collected for this study.
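A minimal sketch of the stratified sampling step is given below, assuming a pandas DataFrame with a `drug_category` column (a hypothetical column name); it simply preserves subgroup proportions when drawing the sample.

```python
# Sketch of stratified random sampling so each drug-category subgroup is represented.
import pandas as pd
from sklearn.model_selection import train_test_split

posts = pd.DataFrame({
    "text": ["post a", "post b", "post c", "post d", "post e", "post f"],
    "drug_category": ["Heroin", "Heroin", "Kratom", "Kratom", "Fentanyl", "Fentanyl"],
})

# Keep the class proportions of drug_category in both splits.
sample, _ = train_test_split(posts, train_size=0.5,
                             stratify=posts["drug_category"], random_state=42)
print(sample["drug_category"].value_counts())
```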
### _Sentiment Analysis and Sentiment BERT Model_
We classified SubReddit posts into Positive, Negative, and Neutral categories for sentiment analysis. We implemented the Valence Aware Dictionary for Sentiment Reasoning (VADER) [46] to generate a sentiment for each SubReddit post in \(SUDS\), considering both the polarity and intensity of each sentiment. VADER uses a lexicon of words with human-annotated sentiment polarity scores, similar to SentiWordNet, AFINN, and the NRC Word-Emotion Association Lexicon. We chose VADER because it is a rule-based sentiment analysis tool specifically attuned to sentiments expressed in social media, and it uses a combination of sentiment-related words, emoticons, and syntax to produce a sentiment score for a given text. Following the individual scoring of each word, the final sentiment is determined by a pooling procedure, such as averaging all the sentiments. This dataset is split into train, dev, and test sets (75:5:20). The generated training set is used to train state-of-the-art deep learning algorithms such as CNN, LSTM, and BERT. The highest F\({}_{1}\) achieved is 82.36 with the BERT model. We trained the Sentiment BERT model on this training data for later use. We report the statistics of sentiment labels for SubReddit posts, obtained by sampling 800 random data points from each drug category, in Table VI. The comparison of the drugs Pharmaceutical Opioids and Heroin by the three sentiments (positive, negative, and neutral) for the period between 2015 and 2020 is presented in Figure 4, which shows the temporal variation in sentiment for each drug.
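A minimal sketch of the VADER labeling step is shown below; the ±0.05 compound-score thresholds are the commonly used VADER convention and are an assumption here, not necessarily the exact cutoffs used in the study.

```python
# Sketch of VADER-based sentiment labeling for SubReddit posts.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def vader_label(text: str) -> str:
    # The compound score aggregates word-level valence, emoticons, and syntax cues.
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "Positive"
    if compound <= -0.05:
        return "Negative"
    return "Neutral"

print(vader_label("kratom really helped with my pain today"))   # likely Positive
print(vader_label("the withdrawal has been absolutely awful"))  # likely Negative
```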
\begin{table}
\begin{tabular}{p{85.4pt}|p{28.5pt}|p{28.5pt}} \hline
**Ontology Metric** & **count** & **Description** \\ \hline Axiom & 4876 & No of combined logical and non-logical axioms \\ \hline Logical Axiom Count & 3478 & No of logical axioms \\ \hline Declaration Axiom Count & 1185 & No of declaration axioms \\ \hline Classes & 316 & No of distinct classes \\ \hline Objects & 12 & No of object properties \\ \hline Data property & 13 & No of data properties \\ \hline Individual Count & 845 & No of individual entities \\ \hline \end{tabular}
\end{table} TABLE V: DAO ontology metrics
\begin{table}
\begin{tabular}{p{113.8pt}|p{113.8pt}} \hline
**SubReddit** & **Topics of Interest** \\ \hline Opiates Recovery & Cold turkey withdrawal, cravings, anxiety, rehab, depression, sobriety, Loperamide, Benzo, Subutex, quitting, Vivitrol, Imodium, Naltrexone \\ \hline Opiates & Codeine, Hydrocodone, Oxymorphone, Dilaudid, hydromorphone, Opana, Oxycontin, Acetaminophen, Gabapentin, benzos, Roxicodone \\ \hline Suboxone & Buprenorphine, Subutex, Agonist, Clonidine, Tramadol, Hydrocodone, Dilaudid, Vicodin, Sublocade, Percoet, Phenibut, Klonopin, Valium \\ \hline Heroin & Dope, Opium, Opiates, Crack, Diaectylmorphine, China White, codeine, acetaminophen \\ \hline Drug Nerds & Methdadone, Alkaloids, Miragymine, Benzos, Poppy, Buprenorphine, Antagonist, Gabapentin, Naloxone, Ampatamine, Hydrocodone \\ \hline Research Chemicals & Benzos, Psychoactive, Psychedelic, Kratom, Pyrovalerone, Quasalude, Oxycodone, Morphine, Xanax, Tramadol, Cocaine, Methadone, Ketamine, Gabapentin, Amphetamine, Hydromorphone \\ \hline \end{tabular}
\end{table} TABLE IV: Sample of topics identified from the \(SUDS\) dataset obtained from six different SubReddits
### _Emotion Analysis and Emotion BERT Model_
We did not choose to work on SubReddit data for emotion analysis, as we do not have self-tagged emotions in posts on SubReddits. Therefore, we chose to crawl Twitter for emotion analysis, where emotions are present as hashtags. We limited our crawl to 7 kinds of emotions, as stated in the work done by Wang et al. [47]. The tweets are assigned a class label corresponding to the emotion hashtag they are associated with. We further remove any URLs or usernames that could potentially contain sensitive information. For generating emotion labels for drug-related tweets, we implement an inductive transfer learning approach with BERT [48]. For this task, we extracted \(61k\) posts as labeled training data by crawling tweets with each emotion hashtag: Joy, Sadness, Anger, Love, Fear, Thankfulness, and Surprise. We split this dataset into train, dev, and test sets (75:5:20). We train Emotion BERT, a BERT-based model, for 10 epochs using a learning rate of 1e-5 and a batch size of 32 on this labeled data, and also on Emonet, a corpus of around \(790k\) tweets [49], to generate the emotion labels for SubReddit posts in the \(SUDS\) corpus. We report the statistics of emotion labels for SubReddit posts, obtained by sampling 800 random data points from each drug category, in Table VI. The comparison of all drugs by the seven emotions (Joy, Sadness, Anger, Love, Fear, Thankfulness, and Surprise) is shown in Figure 6.
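The fine-tuning step can be sketched with Hugging Face `transformers` as below; the two-example dataset and column names are placeholders, while the hyperparameters (learning rate 1e-5, batch size 32, 10 epochs) follow the text.

```python
# Sketch of fine-tuning a BERT classifier on hashtag-labeled emotion tweets.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

emotions = ["joy", "sadness", "anger", "love", "fear", "thankfulness", "surprise"]
data = Dataset.from_dict({
    "text": ["so grateful for my recovery", "i miss him every day"],
    "label": [emotions.index("thankfulness"), emotions.index("sadness")],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(emotions))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="emotion_bert", num_train_epochs=10,
                         learning_rate=1e-5, per_device_train_batch_size=32)
Trainer(model=model, args=args, train_dataset=data).train()
```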
### _Substance Use Disorder Dataset_
We focus on building and interpreting a predictive model based on these exploratory results to identify posts where SUD is present or absent. We formulate this problem as a binary classification task to predict a label for a post at a particular time. Each post is associated with a drug name, historical posts, time, emotion, and sentiment. We now prepare our training dataset for generating SUDP and SUDA labels for the \(SUDS\) corpus. We made use of high-quality addiction-labeled data from the work of Lokala et al. [50] on social media data exploring the association between drugs and mental health symptoms. Lokala et al. [50] created a labor-intensive, high-quality corpus of \(9888\) tweets manually annotated by domain experts and substance use epidemiologists with experience in interventions, treatment, and addictions research. We train a transfer learning BERT model for 10 epochs using a learning rate of 1e-5 and a batch size of 32 on this labeled data to generate the SUDP and SUDA labels for posts in the \(SUDS\) corpus. We also examine manual inter-annotator agreement among three domain experts for the SUDP and SUDA labels of 300 posts, which yields a kappa score of 0.74, to validate the annotations. The manual annotations are evaluated in the same way as the automated labels, and our macro F measure against ground truth is 0.71. The results for transfer learning using BERT are reported in Table VII.
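Label generation for the unlabeled SUDS posts then amounts to running the transfer-learned classifier in inference mode, roughly as sketched below; the checkpoint and the label-to-id mapping are assumptions for illustration (the base model is loaded here only as a stand-in).

```python
# Sketch of generating weak SUDP/SUDA labels for unlabeled posts with a classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# In practice, load the transfer-learned SUD checkpoint instead of the base model.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
id2label = {0: "SUDA", 1: "SUDP"}  # assumed mapping

posts = ["finally two months clean and feeling hopeful",
         "ran out early again, need to find more oxy tonight"]

inputs = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
labels = [id2label[i] for i in logits.argmax(dim=-1).tolist()]
print(list(zip(posts, labels)))
```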
### _Temporal Predictive Model of Posts to detect SUDP or SUDA_
We built our domain-specific Sentiment BERT model to serve as a sentiment feature extractor over historical tweets and a domain-specific Emotion BERT model as an emotion feature extractor for historical tweets. The reason why we built
\begin{table}
\begin{tabular}{|l|c|c|c|l|} \hline
**Drug** & **Positive** & **Negative** & **Neutral** & **Top 3 Emotions in the order found** \\ \hline Opium & 481 & 218 & 101 & Sadness, Love, Joy \\ \hline Oxycodone & 460 & 245 & 95 & Sadness, Fear, Thankfulness \\ \hline Kratom & 459 & 231 & 110 & Love, Sadness, Fear \\ \hline Fentanyl & 467 & 274 & 59 & Sadness, Love, Fear/Thankfulness \\ \hline Heroin & 455 & 255 & 90 & Sadness, Joy, Thankfulness \\ \hline Synthetic Heroin & 500 & 240 & 60 & Sadness, Fear, Thankfulness \\ \hline Pharmaceutical Fentanyl & 570 & 197 & 33 & Sadness, Love, Joy/Thankfulness \\ \hline Non-Pharmaceutical Fentanyl & 502 & 264 & 34 & Sadness, Love, Thankfulness \\ \hline \end{tabular}
\end{table} TABLE VI: \(D2S\) Sentiment and Emotion Analysis Module - Sentiment statistics (number of posts) after sampling 800 random points for each drug category identified from six SubReddits, and top emotions identified for each drug from Twitter
Fig. 4: \(D2S\) - Sentiment Analysis Module - Comparison of drugs Pharmaceutical Opioids and Heroin by sentiments
\begin{table}
\begin{tabular}{l l l l} \hline
**TL-BERT Model** & **Precision** & **Recall** & **F1** \\ \hline Emotion BERT & \(80.12\) & \(82.29\) & \(81.19\) \\ SUD BERT & \(81.28\) & \(83.65\) & \(82.44\) \\ \hline \end{tabular}
\end{table} TABLE VII: Validation results for Emotion BERT and Substance Use Disorder (SUD) BERT through the transfer learning approach. The trained model is then used to obtain the emotion labels, SUDP, SUDA labels for the posts in the \(SUDS\) Corpus.
fine-tuned BERT models is that they can capture a better sense of sentiment, emotions, social media jargon, and slang terms [51]. We contribute a Knowledge Aware Time Series Analysis Computation Model to predict SUDP and SUDA for a post, as shown in Figure 7. We present SUD detection as a binary classification model with SUDP and SUDA as labels. We focus on building and interpreting a predictive model based on these exploratory results to identify posts where SUD is present or absent. Each post in the \(SUDS\) corpus is associated with the drug, historical posts, time, emotion, and sentiment.
For a post \(P_{i}\) in SUDS, the concepts/slang terms/synonyms related to drug entities are masked using the DAO ontology, which forms the knowledge component of the model. We use BERT to encode the language representation, as BERT can produce more thorough representations of linguistic elements in social media data [51], and we average the vector outputs for all tokens in each post at the final layer. To extract emotional language features from posts, we used our Emotion BERT model, which takes the historical posts and obtains the 768-dimensional emotion vector of each historical post. To extract sentiment language features from posts, we used our Sentiment BERT model, which takes the historical posts and obtains the 768-dimensional sentiment vector of each historical post. We thus have encodings representing the sentiment and emotion spectrum. Sequential models such as RNNs and LSTMs are apt for encoding representations learned from a sequence of a user's historical tweets, given the sequential nature of a social media post history. We then pass the historical posts through a Bi-LSTM + attention layer and concatenate the output with the encoding of the post to be assessed. Then we feed the extracted features from the attention layer to a dense layer with a rectified linear unit (\(ReLU\)) to get the prediction vector. Finally, we use the softmax function to output the probability that the post is SUDP or SUDA.
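A compact PyTorch sketch of this history-aware fusion is given below; the hidden size, the additive-attention form, and the use of precomputed 768-dimensional sentiment/emotion vectors per historical post are assumptions made for illustration rather than the exact implementation.

```python
# Sketch of the history-aware classifier: a Bi-LSTM with attention over per-post
# feature vectors (concatenated 768-d sentiment and emotion encodings), fused with
# the 768-d encoding of the post being assessed.
import torch
import torch.nn as nn

class HistoryAwareSUDClassifier(nn.Module):
    def __init__(self, post_dim=768, hist_dim=1536, hidden=256, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(hist_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)            # additive attention scores
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden + post_dim, hidden), nn.ReLU(),
            nn.Dropout(0.2), nn.Linear(hidden, n_classes),
        )

    def forward(self, post_vec, history_vecs):
        # history_vecs: (batch, n_history, hist_dim); post_vec: (batch, post_dim)
        h, _ = self.lstm(history_vecs)                   # (batch, n_history, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)     # per-post attention weights
        context = (weights * h).sum(dim=1)               # attention-pooled history
        logits = self.classifier(torch.cat([context, post_vec], dim=-1))
        return torch.softmax(logits, dim=-1)             # P(SUDA), P(SUDP)

if __name__ == "__main__":
    model = HistoryAwareSUDClassifier()
    probs = model(torch.randn(4, 768), torch.randn(4, 10, 1536))
    print(probs.shape)  # torch.Size([4, 2])
```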
For the experiments, we split the dataset into 75:5:20 ratios for the train, development, and test sets, respectively. We fine-tune the hyperparameters using the development set. Each model is trained for 10 epochs with a learning rate of 1e-5 and a batch size of 64. We use cross-entropy loss and ADAM [52] for the optimization. For regularization, we use dropout [53] with a probability of 0.2. We obtained the best performance with the Bidirectional LSTM (Bi-LSTM) model with the attention layer, as it captures context over longer spans by considering the bidirectional context of each word. For all models, we report recall, precision, and F1-score. We interpret the higher performance gains of our model in the Results section through an ablation study.
**Performance Comparisons:** We compare the performance of these state-of-the-art methods through replications of the architectures and representations presented in prior works on similar tasks on social media.
1. Logistic Regression (LR) [54]: We implement a logistic regression classifier that utilizes part of speech (POS) and term frequency-inverse document frequency (TF-IDF) as language feature representations.
2. Random Forest (RF) [55]: We implement random forest model with features like Linguistic Inquiry and Word Count (LIWC), POS, and TF-IDF.
3. History Aware Recurrent Neural Network (H-RNN) [56]: We deploy H-RNN, which encodes the input using fine-tuned fastText embeddings. Historical posts are passed sequentially through the model and concatenated with the post to be assessed. The sigmoid activation was selected for the hidden LSTM layer, which is fully connected to both the input and output layers.
4. History Aware Long Short-Term Memory (H-LSTM) [57]: We replicate H-LSTM, which uses BERT (Bidirectional Encoder Representations from Transformers) embeddings to encode historical posts that are fed to an attention-based LSTM layer, allowing the model to choose whether to focus more or less on each post in order to reflect the user representation; the result is finally fed to a fully connected layer with a sigmoid activation function to get the prediction.
## V Results
Out of all Opioid listings in \(eDark\), \(4.2\)% are novel synthetic opioids, and heroin was identified in \(57.8\)% of all
Fig. 5: \(D2S\) Topic Modeling Module - Topics over time for all the eight drug categories for time period 2015 - 2020. x axis= Year, y axis= # of Topics
opioid-related listings. When comparing the average monthly ad volume for fentanyl, fentanyl analogs, and other non-pharmaceutical drugs, the data indicate a rise in the availability of items containing fentanyl. The listings for pharmaceutical and non-pharmaceutical fentanyl and analogs made up \(1.9\)% of all opioid-related listings, which is \(48.6\)% of unique synthetic opioid-related ads. The most frequent type of novel synthetic opioid, synthetic heroin, was offered for sale at an average of 1.6 kg at each time point of data collection during the study period. Furanylfentanyl was the fentanyl analog promoted the most, with an average of 3.6 kg offered for sale at each data point. Carfentanil, a highly potent fentanyl analog, was typically offered for sale in quantities of 489.6 grams on average. Newer synthetic opioids (e.g., U-48,800, U-4TDP) kept replacing the non-pharmaceutical synthetic opioids (e.g., W-18, MT-45, AH-7921, and U-47,700) in the listings found on marketplaces.
From the exploratory analysis on \(SUDS\) corpus, Kratom, Heroin, Fentanyl, Morphine, Cocaine, Methadone, Suboxone, and Oxycodone are the most commonly discussed drugs across six subreddits. In Table IV, for example, consider Research chemicals (RC); it is interesting to find that more posts talk about Pyrovalerone, a psychoactive drug with stimulant effects. Another term found is 'Quaalude,' a brand name for 'Methaqualone,' a sedative and hypnotic medication. The RC subreddit mostly discusses psychoactive and psychedelic
Fig. 6: \(D2S\) - Emotion Analysis Module - Comparison of eight Major Drug Categories by seven emotions: Joy, Sadness, Anger, Love, Fear, Thankfulness, and Surprise
drugs, while DrugNerds discusses alkaloids [58]. Interestingly, DrugNerds talks about Naloxone, which can treat opioid overdose. Dope is a slang term for heroin identified in the Heroin SubReddit. Several brand names of medications for anxiety, pain, seizures, and insomnia, as well as sedatives, are discussed in the Suboxone SubReddit. Gabapentin is the typical seizure and pain medication discussed among most of the SubReddits. Opiates Recovery is more about withdrawal symptoms and mental health disorders, for example, 'cold turkey.' 'Cold turkey' in the context of substance use means quitting a substance abruptly, which carries significant risks if the drug being discontinued is a benzodiazepine or an opiate [59, 60]. The results show that we can derive and analyze slang terms, brand names, novel drugs, mental health symptoms, and medications from social media. From the results in Table VI, the highest positive sentiment is found for Pharmaceutical Fentanyl, the highest negative sentiment for Fentanyl, and the highest neutral opinion for Kratom. The emotion 'Love' is detected as the top one for Kratom, as people use it for self-medication. Table X presents the medians of metrics for different embeddings and architectures obtained over 20 runs. The baseline models we compared our model with are Logistic Regression (LR), Random Forest (RF), History Aware Recurrent Neural Network (H-RNN), and History Aware Long Short-Term Memory (H-LSTM), with varied language representations like part of speech and term frequency-inverse document frequency. We also extracted LIWC features from posts to pass through a predictive model instead of BERT encoding. Under identical circumstances, we empirically discovered that BERT outperformed LIWC considerably (\(p\) < 0.05). We present model interpretability and significance in the ablation study.
We employ the Wilcoxon Signed Rank Test [Woodson, 2007] to compare the emotional expression in posts and comments between those with and without substance use to assess statistical significance. We see a significant difference in the emotions displayed between posts with SUDP and posts with SUDA (\(p\) < 0.001). We next conduct an ablation study, where we remove one component from our model at a time and assess the performance to analyze the prime components of our methodology. To exclude the attention component from the model, we concatenate the substance use post-encoding \(\text{e}_{i}^{(\text{S})}\) and emotion post-encoding \(\text{e}_{i}^{(\text{E})}\) and utilize the resulting representation as input to the linear layer instead of employing attention. To remove the entity masking component, we train our encoders with raw data that is directly collected from social media. We also trained a variant by merely training the classifier and excluding post history from the model. In Table VIII, we report the findings for the SUD prediction task. Entity masking, which considerably improves the SUD identification task (+3.73 precision, +3.43 recall), is where we see clear gains. The Wilcoxon Signed Rank test demonstrates that the contextualized representation is highly desirable for the SUD identification task in this study, since it performs better than the model without entity masking (\(p\) < 0.05). Additionally, adding the history of the post significantly boosts performance, where we saw our highest increase (+4.62 precision). We also see that attention increased the model's precision by \(1.85\)% and recall by \(3.24\)%, meaning that every component of the model affects how well it performs this task. Further, we discuss examples of SUD predictions and the error analysis below.
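For reference, the paired comparison can be run with SciPy as sketched below; the score arrays are placeholders, not study data.

```python
# Sketch of the Wilcoxon signed-rank comparison of paired emotion scores.
from scipy.stats import wilcoxon

sadness_scores_sudp = [0.81, 0.64, 0.72, 0.90, 0.55, 0.77]
sadness_scores_suda = [0.40, 0.52, 0.31, 0.48, 0.60, 0.35]

stat, p_value = wilcoxon(sadness_scores_sudp, sadness_scores_suda)
print(f"W = {stat:.2f}, p = {p_value:.4f}")
```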
### _Error Analysis_
We analyze the sources of errors and discuss the predictions made by our models in Table IX among three interesting scenarios.
1. **Polydrug use with variable emotions:** For Post 1, when examining a post where multiple drugs co-exist along with emotion variability in the history associated with other drugs (for example, mixing depressants and stimulants or mixing medications with opioids), our model is not able to predict correctly, e.g., when substance \(A\) does not often co-occur with substance \(B\) in the history.
2. **Post-level Ambiguity:** For Post 2, our model is able to predict SUD by examining the post even if it is too ambiguous to assess on its own, given that the user shows clear SUDP in the past and is now undergoing a healing process, with emotion intensity in the historical posts such as increased sadness-related emotion.
Fig. 7: Knowledge Aware Time Series Analysis Computation Model - A \(D2S\) Computational Module
\begin{table}
\begin{tabular}{l|l|l|l} \hline
**Model** & \(\mathbf{F_{1}}\)**-Score** & **Precision** & **Recall** \\ \hline
**Proposed Model** & **82.12** & **78.34** & **83.58** \\ (**EM+A+H**) & & & \\ \hline
**-Attention (A)** & 78.98 (3.14\(\downarrow\)) & 76.49 (1.85\(\downarrow\)) & 80.34 (3.24\(\downarrow\)) \\ \hline
**-Entity Masking (EM)** & 78.50 (3.62\(\downarrow\)) & 74.61 (3.73\(\downarrow\)) & 80.15 (3.43\(\downarrow\)) \\ \hline
**-History of Post (H)** & 77.58 (4.54\(\downarrow\)) & 73.72(4.62\(\downarrow\)) & 79.12 (4.46\(\downarrow\)) \\ \hline \end{tabular}
\end{table} TABLE VIII: Ablation Study: Median of metrics over 10 different runs. Bold denotes best performance.
3. **Sarcasm detection:** For Post 3, even though it does not contain any clear SUDP/SUDA signal, sarcasm is identified in the post; such a post with a history of ambiguous posts makes SUD identification difficult, which makes it an interesting Natural Language Understanding problem, illustrates the task complexity, and paves the way for future work.
## VI Discussion and Future Work
Crawling crypto markets poses a significant challenge to applying data science and machine learning to study the opioid epidemic due to the restricted crawling process [1, 61, 62]. To identify the best strategies to reduce opioid misuse, a better understanding is needed of crypto market drug sales that impact consumption and of how they are reflected in social media discussions [63]. We limit this study to eight broad drug categories due to the availability and abundance of related posts on the dark web; we hope to further refine and expand our categories in future work. Further, we have identified several directions for future research. We plan to expand this work to extract mental health symptoms from the drug-related social media data to connect the association between drugs and mental health problems, for example, the association between cannabis and depression [64, 65]. We also plan to build an Opioid Drug Social Media Knowledge Graph with all the diverse data points (drug, sentiment, emotion, mental health symptom, location) and compare it against the state-of-the-art 'Knowledge Graph-based Approach For Exploring The U.S. Opioid Epidemic' [63]. Potential areas of application would be identifying risk factors regarding addiction and mental health from SubReddit data [66] and identifying drug trends based on location, with a possible opioid epidemic prediction. We would also like to rely on Drug Enforcement Administration (DEA) drug seizures to include in our preliminary data collection process to be aware of related social media discussions.
## VII Ethical Statement
We apply our model to study how historic emotion and sentiment of a drug impacts social media conversation dynamics related to substance use. An important aspect that we need to consider while working with addiction-related issues is to respect the users' privacy and adhere to good ethical practices adopted by previous research [67, 68]. Therefore, similar to Matthews et al. [69], we censor several sensitive information, such as user names, personal information, platform identifiers, and URLs which might be directly linked to the user's identity, from the collected posts. All examples used in this paper are anonymized, and de-identified for user privacy [70]. We also adopted the proposed guidelines for the treatment of Names and Online Pseudonyms in posts gathered from social media [71]. In this work, we study substance use in Subreddit groups in the form of textual interactions. The expressed addiction intent may differ from the intent actually perceived or experienced by the person. However, obtaining perceived intent from social media is challenging and involves ethical risks. Before behavioral health intervention apps based on social media data can be used in real-world settings, difficulties with potential biases and user privacy must first be resolved, along with establishing suitable regulations and boundaries in this domain. We can adopt the approach used in this work to develop the data set with more human supervision and we acknowledge that the data may be prone to demographic, annotator, and platform-specific biases [72, 73]. We also acknowledge that the current work does not make any clinical diagnosis or treatment suggestions in any manner whatsoever.
|
2307.09513 | Probing the Conditions for the HI-to-H$_{2}$ Transition in the
Interstellar Medium | In this paper, we investigate the conditions for the HI-to-H$_{2}$ transition
in the solar neighborhood by analyzing HI emission and absorption measurements
toward 58 Galactic lines of sight (LOSs) along with $^{12}$CO(1$-$0) (CO) and
dust data. Based on the accurate column densities of the cold and warm neutral
medium (CNM and WNM), we first perform a decomposition of gas into atomic and
molecular phases and show that the observed LOSs are mostly HI-dominated. In
addition, we find that the CO-dark H$_{2}$, not the optically thick HI, is a
major ingredient of the dark gas in the solar neighborhood. To examine the
conditions for the formation of CO-bright molecular gas, we analyze the
kinematic association between HI and CO and find that the CNM is kinematically
more closely associated with CO than the WNM. When CNM components within CO
line widths are isolated, we find the following characteristics: spin
temperature $<$ 200 K, peak optical depth $>$ 0.1, CNM fraction of $\sim$0.6,
and $V$-band dust extinction $>$ 0.5 mag. These results suggest that CO-bright
molecular gas preferentially forms in environments with high column densities
where the CNM becomes colder and more abundant. Finally, we confront the
observed CNM properties with the steady-state H$_{2}$ formation model of
Sternberg et al. and infer that the CNM must be clumpy with a small volume
filling factor. Another possibility would be that missing processes in the
model, such as cosmic-rays and gas dynamics, play an important role in the
HI-to-H$_{2}$ transition. | Gyueun Park, Min-Young Lee, Shmuel Bialy, Blakesley Burkhart, J. R. Dawson, Carl Heiles, Di Li, Claire Murray, Hiep Nguyen, Anita Hafner, Daniel R. Rybarczyk, SneΕΎana StanimiroviΔ | 2023-07-18T18:00:20Z | http://arxiv.org/abs/2307.09513v1 | # Probing the Conditions for the H i-to-H\({}_{2}\) Transition in the Interstellar Medium
###### Abstract
In this paper, we investigate the conditions for the H i-to-H\({}_{2}\) transition in the solar neighborhood by analyzing H i emission and absorption measurements toward 58 Galactic lines of sight (LOSs) along with \({}^{12}\)CO(1-0) (CO) and dust data. Based on the accurate column densities of the cold and warm neutral medium (CNM and WNM), we first perform a decomposition of gas into atomic and molecular phases and show that the observed LOSs are mostly H i-dominated. In addition, we find that the CO-dark H\({}_{2}\), not the optically thick H i, is a major ingredient of the dark gas in the solar neighborhood. To examine the conditions for the formation of CO-bright molecular gas, we analyze the kinematic association between H i and CO and find that the CNM is kinematically more closely associated with CO than the WNM. When CNM components within CO line widths are isolated, we find the following characteristics: spin temperature \(<200\) K, peak optical depth \(>0.1\), CNM fraction of \(\sim\)0.6, and \(V\)-band dust extinction \(>0.5\) mag. These results suggest that CO-bright molecular gas preferentially forms in environments with high column densities where the CNM becomes colder and more abundant. Finally, we confront the observed CNM properties with the steady-state H\({}_{2}\) formation model of Sternberg et al. and infer that the CNM must be clumpy with a small volume filling factor. Another possibility would be that missing processes in the model, such as cosmic-rays and gas dynamics, play an important role in the H i-to-H\({}_{2}\) transition.
ISM: atoms - ISM: clouds - dust, extinction - ISM: molecules - ISM: structure - radio lines: ISM
## 1 Introduction
As the most abundant molecule in the universe, molecular hydrogen (H\({}_{2}\)) plays a key role in the heating and cooling of the interstellar medium (ISM), as well as in the formation of other heavier molecules (e.g., Sternberg & Dalgarno 1995; Hollenbach & Tielens 1997). In addition, H\({}_{2}\) is an essential ingredient for star formation, as extensively shown by Galactic and extragalactic observations (e.g., Kennicutt & Evans, 2012). Considering this significance of H\({}_{2}\) in astrophysics, it is of critical importance to understand how H\({}_{2}\) forms out of the surrounding diffuse atomic (H i) gas.
Observationally, the H i-to-H\({}_{2}\) transition has been directly examined through ultraviolet (UV) absorption measurements toward early-type stars or active galactic nuclei (e.g., Savage et al., 1977; Rachford et al., 2002; Shull et al., 2021). These measurements in the Lyman \(\alpha\) (1216 A) and Lyman-Werner (LW; 912-1108 A) bands probe diffuse to translucent gas with the color excess \(E(B-V)\) of \(\sim\)0.01-1.0 mag and were analyzed to derive H i and H\({}_{2}\) column densities (\(N\)(H i) and \(N\)(H\({}_{2}\))). The molecular fraction, \(f\)(H\({}_{2}\)) = 2\(N\)(H\({}_{2}\))/[\(N\)(H i) + 2\(N\)(H\({}_{2}\))], was then found to increase from very low (\(\lesssim\) 0.01) to high values (\(\gg\)0.1) at the total hydrogen column density \(N\)(H) = \(N\)(H i) + 2\(N\)(H\({}_{2}\)) of \(\sim\)10\({}^{21}\) cm\({}^{-2}\) or \(E(B-V)\) of \(\sim\)0.1 mag, indicating a sharp conversion from H i to H\({}_{2}\).
In addition, the H i-to-H\({}_{2}\) transition has been indirectly inferred from the flattening of the H i column density with respect to other dense gas tracers. For example, Barriault et al. (2010) compared H i and OH emission in infrared (IR) cirrus clouds and showed that the OH column density increases with the H i column density up to \(N\)(OH) \(\sim\) 0.3 \(\times\) 10\({}^{14}\) cm\({}^{-2}\). At higher OH column densities, the H i column density saturates to \(\sim\)5 \(\times\) 10\({}^{20}\) cm\({}^{-2}\), implying the presence of molecular gas not traced by H i emission. Similarly, IR studies of diffuse clouds found a positive deviation from the linear relation between the H i column density and IR emission (e.g., Reach et al., 1994; Douglas & Taylor, 2007). The observed excess in IR emission indicates that a substantial amount of H\({}_{2}\) exists beyond the threshold H i column density of \(\sim\)5 \(\times\) 10\({}^{20}\) cm\({}^{-2}\).
Theoretically, the H i-to-H\({}_{2}\) transition has been explored as one of the key processes in photodissociation regions (PDRs; e.g., van Dishoeck & Black, 1986; Draine & Bertoldi, 1996; Browning et al., 2003; Goldsmith et al., 2007; Liszt, 2007). In interstellar space, molecular-dominated regions are found in dense regions where gas and dust grains provide sufficient shielding against dissociating UV radiation. These molecular regions are bound by PDRs, where the gas is primarily neutral. The structure of PDRs has been solved numerically and analytically, and recent analytical models (Krumholz et al., 2009; Sternberg et al., 2014; Bialy & Sternberg, 2016) predict that the minimum H i column density to shield H\({}_{2}\) from photodissociation depends on ISM conditions (e.g., \(N\)(H i) \(\sim\) 10\({}^{21}\) cm\({}^{-2}\) for solar metallicity). Once this minimum H i column density is accumulated, all excess H i is converted into H\({}_{2}\), resulting in the uniform H i distribution.
While the observed threshold H i column density of \(\sim\)(0.5-1) \(\times\) 10\({}^{21}\) cm\({}^{-2}\) is consistent with what the analytical H\({}_{2}\) formation models predict for H i shielding layers, the previous observational studies could not provide insights into what H i conditions aside from the minimum column density are required for H\({}_{2}\) formation as they did not distinguish between different H i phases. The distinct velocity structures between H i emission and absorption spectral pairs have been interpreted as the presence of H i gas with a range of temperatures and densities (e.g., Radhakrishnan et al., 1972), and theoretical models of neutral atomic gas indeed have suggested that two H i phases can coexist over the range of thermal pressure \(P/k_{\rm B}\)\(\sim\) 10\({}^{3}\)-10\({}^{4}\) cm\({}^{-3}\) K (\(k_{\rm B}\) = Boltzmann constant): cold neutral medium (CNM) and warm neutral medium (WNM) with densities and temperatures of (\(n\), \(T\)) \(\sim\) (5-120 cm\({}^{-3}\), 40-180 K) and (0.04-1 cm\({}^{-3}\), 7000-8000 K) (e.g., Wolfire et al., 1995, 2003; Bialy & Sternberg, 2019). In addition to these stable phases, the thermally unstable medium (UNM) with intermediate densities and temperatures has been commonly observed (e.g., Murray et al., 2015, 2018). As for the formation of molecular gas, the denser and colder CNM is expected to be crucial (e.g., H\({}_{2}\) formation \(\propto\) H i density), but the impact of the different H i phases on the H i-to-H\({}_{2}\) transition has been largely unexplored mainly because of a lack of observational constraints.
In this paper, we examine how the different H i phases are related to the H i-to-H\({}_{2}\) transition by analyzing H i emission and absorption spectra along with \({}^{12}\)CO(\(J\) = 1 \(\rightarrow\) 0) (CO(1-0) hereafter) data toward 58 lines of sight (LOSs) at Galactic latitudes \(b<-\)5\({}^{\circ}\). These data have been obtained as part of the Galactic Neutral Opacity and Molecular Excitation Survey (GNOMES) collaboration, whose primary science goal is to understand the properties of atomic and molecular gas in and around molecular clouds. So far the H i and OH data were presented in Stanimirovic et al. (2014), Nguyen et al. (2019), and Petzler et al. (2023), and we make use of the derived H i properties, such as the optical depth (\(\tau_{\rm CNM}\)) and spin temperature (\(T_{\rm s}\)) of the CNM and the column densities of the CNM and WNM (\(N_{\rm CNM}\) and \(N_{\rm WNM}\)), to explore what conditions are required for the formation of CO-bright molecular gas. The observed H i properties are also compared to the analytical model of Sternberg et al. (2014) (S14 hereafter) to test if H\({}_{2}\) formation in steady state is indeed valid for solar neighborhood conditions.
This paper is organized as follows. In Section 2, we summarize two of the most relevant studies, Nguyen et al. (2019) and S14, to provide background information. In Sections 3 and 4, we present the H i, CO, and dust data for our analyses and investigate the environmental conditions of the observed GNOMES LOSs. In Section 5, we describe the results from the CO observations and decompose the gas along each LOS into different atomic and molecular gas phases. The observed H i and CO properties are compared to each other, as well as to the prediction from the S14 model, to provide observational and theoretical perspectives on the conditions for the formation of CO-bright molecular gas (Sections 6 and 7). Finally, our results are discussed and summarized in Sections 8 and 9.
## 2 Background
In this section, we summarize recent observational and theoretical studies that are most relevant to our work.
### CNM and WNM in and around molecular clouds
As part of GNOMES collaboration, Nguyen et al. (2019) analyzed Arecibo H i emission and absorption spectra toward 77 continuum sources located behind Perseus, Taurus, California, Rosette, NGC 2264, and Mon OB1. For their analyses, the authors divided the observed LOSs into the following three environments: (1) 22 LOSs at \(b>5\arcdeg\) tracing the diffuse medium ("diffuse"); (2) 20 LOSs at \(|b|<5\arcdeg\) penetrating the dense Galactic Plane with likely strong UV radiation field ("Plane"); (3) 35 LOSs at \(b<-5\arcdeg\) probing the surroundings of local molecular clouds including Taurus and Perseus ("Perseus"). The H i spectra along these LOSs were examined via the Gaussian decomposition method of Heiles & Troland (2003a) to estimate the physical properties of H i, such as the optical depth, spin temperature, and column density of the CNM and WNM (see Section 3.1 for details on the observations and analysis methods).
Strong H i absorption was detected toward all the observed LOSs, and a total of 349 CNM and 327 WNM components were identified. For the identified CNM components, the peak optical depth ranges from \(\sim\)0.01 to \(\sim\)16.2 with a median of \(\sim\)0.4, and the spin temperature varies from \(\sim\)10 K to \(\sim\)480 K with the distribution peak at \(\sim\)50 K. Interestingly, these individual properties are comparable between the three environments and agree with results from previous measurements of random LOSs (e.g., Heiles & Troland, 2003b; Murray et al., 2015, 2018b), implying that the CNM has universal properties throughout the Galaxy. On the other hand, the CNM fraction, which is defined as the ratio of the CNM to total H i column density, is systematically higher in molecular cloud environments (median fractions of 0.43 and 0.37 for the Plane and Perseus LOSs versus 0.16 for the diffuse LOSs), suggesting a close association between the abundance of the CNM and the formation of molecular gas.
### Theoretical Modeling of H\({}_{2}\) Formation in the Steady-State Medium
S14 developed an analytical model of the H i-to-H\({}_{2}\) transition in a one-dimensional plane-parallel slab of gas and dust and provided the following expression of the total H i column density for two-sided isotropic UV radiation:
\[N(\mathrm{H\;\textsc{i}})\;(\mathrm{cm}^{-2})=\frac{8.4\times 10^{20}}{\tilde{\sigma}_{\mathrm{g}}}\;\ln\left(\frac{\alpha G}{3.2}+1\right) \tag{1}\]
where \(\tilde{\sigma}_{\mathrm{g}}\) is the dust absorption cross-section per hydrogen nucleus in the LW band (\(\sigma_{\mathrm{g}}\)) normalized to the canonical solar metallicity value of \(1.9\times 10^{-21}\) cm\({}^{2}\).
The dimensionless parameter \(\alpha\) in Equation (1) is the ratio of the unattenuated H\({}_{2}\) photodissociation rate to the H\({}_{2}\) formation rate, which can be expressed as
\[\begin{split}\alpha&=\frac{D_{0}}{Rn}\\ &=1.9\times 10^{4}\;\left(\frac{I_{\mathrm{UV}}}{\tilde{\sigma}_{\mathrm{g}}}\right)\left(\frac{100\;\mathrm{cm}^{-3}}{n}\right),\end{split} \tag{2}\]
where \(D_{0}\) is the free-space H\({}_{2}\) photodissociation rate, \(R\) is the rate coefficient for H\({}_{2}\) formation on dust grains, \(n=n_{1}+2n_{2}\) is the total gas number density, \(n_{1}\) is the H i number density, \(n_{2}\) is the H\({}_{2}\) number density, and \(I_{\mathrm{UV}}\) is the strength of UV radiation relative to the Draine field (Draine, 1978; Bialy, 2020). On the other hand, the other dimensionless parameter \(G\) can be interpreted as the average H\({}_{2}\) self-shielding factor. Here we employ the expression derived by Bialy & Sternberg (2016), which uses a more accurate fitting function for the H\({}_{2}\) dissociation bandwidth and reads as
\[G=3\times 10^{-5}\;\tilde{\sigma}_{\mathrm{g}}\left(\frac{9.9}{1+8.9\tilde{\sigma}_{\mathrm{g}}}\right)^{0.37}. \tag{3}\]
Combining Equations (2) and (3), \(\alpha G\) can be written as1
Footnote 1: Equation (4) is taken from Bialy & Sternberg (2016), who examined the H i and H\({}_{2}\) density profiles of optically thick interstellar clouds based on the S14 model. While this expression was originally derived for beamed UV radiation, it is also applicable for isotropic UV radiation, considering that \(\alpha\) is equal for beamed and isotropic UV radiation fields with the same strength and \(G\) is independent of the UV field geometry (S14).
\[\alpha G=0.59\;I_{\mathrm{UV}}\left(\frac{100\;\mathrm{cm}^{-3}}{n}\right)\left(\frac{9.9}{1+8.9\tilde{\sigma}_{\mathrm{g}}}\right)^{0.37} \tag{4}\]
and has the physical meaning of the ratio of the effective H\({}_{2}\) photodissociation rate (accounting for UV shielding) to the H\({}_{2}\) formation rate. For realistic ISM conditions, \(\alpha G\) can range from large to small values. For example, when \(\alpha G\) is small (\(\ll\)1; "weak-field limit"), H\({}_{2}\) self-shielding primarily protects H\({}_{2}\) from dissociating UV photons, and the H i-to-H\({}_{2}\) transition is gradual. In other words, most of the H i column density is built up beyond the transition point where the gas is mainly molecular. On the contrary, when \(\alpha G\) is large (\(\gg\)1; "strong-field limit"), dust absorption becomes important, resulting in a sharp H i-to-H\({}_{2}\) transition due to the exponential reduction of UV radiation with cloud column density. In this case, the H i column density is built up in the outer layer of the gas slab prior to the transition point. We refer to S14 and Bialy & Sternberg (2016) for details on the model and the parameters.
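To make the model behavior concrete, the short sketch below evaluates Equations (1) and (4) numerically for solar-metallicity dust (\(\tilde{\sigma}_{\mathrm{g}}=1\)) and a Draine-strength field; it is an illustrative calculation only, not part of the original analysis.

```python
# Numerical sketch of the steady-state H i shielding column (Eqs. 1 and 4)
# for a range of gas densities, assuming solar-metallicity dust.
import numpy as np

def alpha_G(I_UV, n, sigma_g_tilde=1.0):
    """Effective dissociation-to-formation ratio, Eq. (4)."""
    return 0.59 * I_UV * (100.0 / n) * (9.9 / (1.0 + 8.9 * sigma_g_tilde)) ** 0.37

def N_HI(alphaG, sigma_g_tilde=1.0):
    """Total H i column in cm^-2, Eq. (1)."""
    return 8.4e20 / sigma_g_tilde * np.log(alphaG / 3.2 + 1.0)

for n in (10.0, 100.0, 1000.0):          # total gas density in cm^-3
    aG = alpha_G(I_UV=1.0, n=n)          # Draine-field strength I_UV = 1
    print(f"n = {n:6.0f} cm^-3  alphaG = {aG:6.2f}  N(HI) = {N_HI(aG):.2e} cm^-2")
```

For \(n\sim 10\) cm\({}^{-3}\) this gives \(N\)(H i) close to 10\({}^{21}\) cm\({}^{-2}\), consistent with the solar-metallicity value quoted in the Introduction.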
## 3 Data
### GNOMES: H i and OH
In this study, we make use of the H i (1.4204 GHz) and OH (1.6654 and 1.6673 GHz) emission/absorption spectra from Stanimirovic et al. (2014) and Nguyen et al. (2019). These spectra were obtained with the 305 m Arecibo telescope (providing angular and velocity resolutions of 3.5' and 0.16 km s\({}^{-1}\)) toward 100 extragalactic continuum sources that were selected from the NRAO VLA Sky Survey (NVSS; Condon et al., 1998) with 1.4 GHz flux densities \(S_{1.4}\gtrsim 0.6\) Jy. Among the observed sources, 58 at \(b<-5\)deg probing the surroundings of the Perseus, Taurus, and California molecular clouds were considered for our study (Figure 1 and Table 1).
The methodology of the observations and data reduction in Stanimirovic et al. (2014) and Nguyen et al. (2019) is essentially based on Heiles and Troland (2003), and we provide here a summary for the H i data. For each source, 1 on-source and 16 off-source measurements were made to obtain optical depth (\(\tau_{\rm CNM}\)) and "expected" emission (\(T_{\rm exp}\)) spectra. The expected emission spectrum is the one that would be observed at the source position if the source were turned off, and was derived by approximating the off-source spectra as a second-order Taylor expansion of the expected emission spectrum. This approximation was made to account for spatial variations in H i emission, and the derivatives were used to estimate the uncertainty spectrum of expected emission. The median 1\(\sigma\) uncertainties in the measured optical depth (\(\sigma_{e^{-\tau}}\)) and expected emission (\(\sigma_{T_{\rm exp}}\)) at a velocity resolution of 0.16 km s\({}^{-1}\) are 0.02 and 0.36 K, respectively.
The obtained H i absorption and emission spectra were analyzed through the Gaussian decomposition method of Heiles and Troland (2003). This method simultaneously fits the absorption and emission spectra with individual Gaussian components under the assumption that the CNM is detected in both absorption and emission, while the WNM contributes to the emission spectrum only. In the fitting process, all possible permutations of the CNM components are considered to find the best-fit model with a minimum chi-square value. In addition, the fitting takes into account the possibility that a certain fraction of the WNM (\(F\)) could be located in front of the CNM by assuming three cases \(F=0\), 0.5, and 1 (e.g., \(F=1\) means that the WNM is not absorbed by the CNM at all). The final parameters from the fitting process include the velocities, widths, spin temperatures, peak optical depths, and H i column densities of individual Gaussian components2, and we refer to Section 3 of Stanimirovic et al. (2014) for details on the fitting procedure. For our analyses, we mostly used the derived H i properties and utilized the OH spectra only to separate LOSs with molecular gas (Section 4.1).
Footnote 2: For WNM components, lower and upper limits are provided on the spin temperature and peak optical depth, respectively.
Finally, we note that 3C092, 3C131, and 4C+27.14 were observed both in Stanimirovic et al. (2014) and Nguyen et al. (2019). These observations are
Figure 1. GNOMES LOSs at \(b<-\)5\({}^{\circ}\) overlaid on the _Planck_ \(A_{V}\) image (Section 4.1; 1, 3, and 5 mag as the white contours). Among the total 58 LOSs, 19 LOSs where CO(1-0) emission is clearly detected are shown as the tan crosses. The remaining 39 LOSs without CO detection are indicated as the blue crosses. The green boxes represent the approximate extents of local molecular clouds (top to bottom: California, Taurus, and Perseus; Lee et al., 2018).
essentially consistent with each other within the uncertainties, and we used the spectra from Nguyen et al. (2019) for our analyses since they have better sensitivities.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Source & R.A. (J2000) & Decl. (J2000) & \(l\) & \(b\) & \(S_{1.4}\) & \(T_{\rm sky}\) & CO(1β0) \\ & (hh:mm:ss) & (dd:mm:ss) & (\({}^{\circ}\)) & (\({}^{\circ}\)) & (Jy) & (K) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline J034053+073525 (4C+07.13) & 03:40:53.73 & 07:35:25.40 & 178.87 & \(-\)36.27 & 1.0 & 4.07 & \\ J032153+122114 (PKS0319+12) & 03:21:53.11 & 12:21:14.00 & 170.59 & \(-\)36.24 & 1.9 & 4.51 & \\ J032723+120835 (4C+11.15) & 03:27:23.11 & 12:08:35.80 & 171.98 & \(-\)35.48 & 1.2 & 4.17 & β \\ J031857+162833 (4C+16.09) & 03:18:57.77 & 16:28:33.10 & 166.64 & \(-\)33.60 & 8.0 & 6.93 & \\ J033626+130233 (3C090) & 03:36:26.56 & 13:02:33.20 & 173.15 & \(-\)33.29 & 2.0 & 4.67 & \\ J015712+285138 (NV0157+28) & 01:57:12.85 & 28:51:38.49 & 139.90 & \(-\)31.83 & 1.4 & 2.78 & \\ J021701+280458 (4C+27.07) & 02:17:01.89 & 28:04:59.12 & 145.01 & \(-\)31.09 & 1.0 & 2.79 & \\ J020136+293340 (4C+29.05) & 02:01:35.91 & 29:33:44.18 & 140.72 & \(-\)30.88 & 1.2 & 2.79 & \\ J022412+275011 (3C067) & 02:24:12.31 & 27:50:11.69 & 146.82 & \(-\)30.70 & 3.0 & 2.79 & \\ J035613+130535 & 03:56:13.81 & 13:05:35.80 & 177.02 & \(-\)29.78 & 0.9 & 4.14 & \\ J023752+284809 (4C+28.07) & 02:37:52.42 & 28:48:09.16 & 149.47 & \(-\)28.53 & 2.2 & 2.79 & \\ J023535+290857 (4C+28.06) & 02:35:35.41 & 29:08:57.73 & 148.78 & \(-\)28.44 & 1.3 & 2.79 & \\ J035900+143622 (3C096) & 03:59:00.91 & 14:36:22.50 & 176.27 & \(-\)28.26 & 1.2 & 4.37 & \\ J022048+324106 (5C06.237) & 02:20:48.06 & 32:41:06.64 & 143.88 & \(-\)26.53 & 0.9 & 2.79 & \\ J042725+085330 (4C+08.15) & 04:27:25.05 & 08:53:30.30 & 186.21 & \(-\)26.51 & 0.9 & 4.08 & \\ J023423+313418 (3C068.2) & 02:34:23.87 & 31:34:17.62 & 147.33 & \(-\)26.38 & 1.0 & 2.79 & \\ J032504+244445 (4C+24.06) & 03:25:04.35 & 24:44:45.60 & 161.92 & \(-\)26.26 & 0.8 & 4.13 & β \\ J035633+190034 (4C+18.11) & 03:56:33.46 & 19:00:34.60 & 172.23 & \(-\)25.66 & 1.1 & 4.15 & \\ J022610+342130 (4C+34.07) & 02:26:10.34 & 34:21:30.45 & 144.31 & \(-\)24.55 & 2.9 & 2.79 & \\ J041140+171405 (4C+17.23) & 04:11:40.77 & 17:14:05.10 & 176.36 & \(-\)24.24 & 1.0 & 4.26 & β \\ J023228+342405 (NV0232+34) & 02:32:28.72 & 34:24:06.08 & 145.60 & \(-\)23.98 & 2.6 & 2.79 & \\ J02105+355613 (B20218+35) & 02:21:05.48 & 35:56:13.91 & 142.60 & \(-\)23.49 & 1.7 & 2.79 & \\ J031135+304320 (4C+30.04) & 03:11:35.19 & 30:43:20.62 & 155.40 & \(-\)23.17 & 1.0 & 2.79 & β \\ J032957+275615 (B20326+27) & 03:29:57.69 & 27:56:15.64 & 160.70 & \(-\)23.07 & 1.3 & 2.79 & \\ J042022+175355 (3C114) & 04:20:22.17 & 17:53:55.20 & 177.30 & \(-\)22.24 & 1.1 & 4.23 & β \\ J042524+175525 (4C+17.25) & 04:25:24.43 & 17:55:25.30 & 178.11 & \(-\)21.31 & 0.9 & 4.16 & \\ J035204+262418 (4C+26.12) & 03:52:04.36 & 26:24:18.11 & 165.82 & \(-\)21.06 & 1.4 & 2.78 & \\ J042756+175242 (4C+17.26) & 04:27:56.98 & 17:52:42.80 & 178.56 & \(-\)20.88 & 1.0 & 4.22 & \\ J044907+112128 (PKS0446+11) & 04:49:07.65 & 11:21:28.20 & 187.43 & \(-\)20.74 & 0.9 & 4.16 & \\ J030142+351219 (4C+34.09) & 03:01:42.38 & 35:12:20.84 & 150.94 & \(-\)20.49 & 1.9 & 2.79 & \\ J041243+230506 (3C108) & 04:12:43.69 & 23:05:05.53 & 171.87 & \(-\)20.12 & 1.5 & 2.79 & β \\ J040305+260001 (B20400+25) & 04:03:05.61 & 26:00:01.61 & 168.03 & \(-\)19.65 & 0.9 & 2.79 & β \\ J034008+320901 (3C092) & 03:40:08.54 & 32:09:01.30 & 159.74 & \(-\)18.41 & 1.6 & 3.95 & β \\ J042846+213331 (4C+21.17) & 04:28:46.64 & 21:33:31.40 & 175.70 & \(-\)18.36 & 1.3 & 4.35 & β \\ J0440442+290215 (4C+28.11) & 04:04:42.82 & 29:02:15.90 & 166.06 & \(-\)17.22 & 
1.0 & 3.69 & β \\ \hline \end{tabular}
\end{table}
Table 1: 58 LOSs in our study
### CO
Single-pointing observations of the CO(1-0) transition at 115.2712 GHz were carried out toward the 58 GNOMES LOSs at \(b<-5^{\circ}\) using the 13.7 m telescopes at the Taeduk Radio Astronomy Observatory (TRAO) and Purple Mountain Observatory (PMO). The TRAO observations were made in March and December 2020 and in February 2021, while the PMO observations were performed from March to May 2020. During these observations, the system temperature was 400-900 K and 200-300 K for the TRAO and PMO telescopes, respectively.
The obtained CO(1-0) spectra were processed using the GILDAS CLASS software3. For the TRAO data, a beam efficiency of \(\eta_{\rm MB}=0.40\) was adopted to convert the corrected antenna temperature into the main-beam brightness temperature (\(T_{\rm MB}=T_{\rm A}^{*}\) / \(\eta_{\rm MB}\)). On the other hand, no conversion was made for the PMO data since they were delivered in units of main-beam brightness temperature. The final spectra on 48\({}^{\prime\prime}\) scales were smoothed to a velocity resolution of 0.32 km s\({}^{-1}\) and have a median root-mean-square (rms) noise level of 0.1 K. A comparison between the CO(1-0) emission and H i absorption spectra is presented in Appendix A.
Footnote 3: [https://www.iram.fr/IRAMFR/GILDAS/](https://www.iram.fr/IRAMFR/GILDAS/)
To determine the presence of CO emission, we adopted a 3\(\sigma\) threshold and considered components whose peak-to-rms
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{ Source} & R.A. (J2000) & Decl. (J2000) & \(l\) & \(b\) & \(S_{1.4}\) & \(T_{\rm sky}\) & CO(1β0) \\ & & (hh:mm:ss) & (dd:mm:ss) & (\({}^{\circ}\)) & (\({}^{\circ}\)) & (Jy) & (K) & \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline J042049+252627 (4C+25.14) & 04:20:49.30 & 25:26:27.63 & 171.37 & \(-\)17.16 & 1.0 & 2.79 & β \\ J034846+335315 (3C093.1) & 03:48:46.93 & 33:53:15.41 & 160.04 & \(-\)15.91 & 2.4 & 2.80 & \\ J052424+074957 (4C+07.16) & 05:24:24.04 & 07:49:57.10 & 195.51 & \(-\)15.35 & 0.8 & 4.25 & \\ J051240+151723 (PKS0509+152) & 05:12:40.99 & 15:17:23.80 & 187.41 & \(-\)13.79 & 1.0 & 4.11 & \\ J053239+073243 & 05:32:39.01 & 07:32:43.50 & 196.84 & \(-\)13.74 & 2.7 & 4.96 & \\ J051930+142829 (4C+14.14) & 05:19:30.95 & 14:28:29.00 & 189.04 & \(-\)12.85 & 0.9 & 4.15 & β \\ J045643+224922 (3C132) & 04:56:43.08 & 22:49:22.27 & 178.86 & \(-\)12.52 & 3.4 & 2.80 & \\ J053450+100430 (4C+09.21) & 05:34:50.82 & 10:04:30.30 & 194.89 & \(-\)11.98 & 1.1 & 4.62 & β \\ J041437+341851 (B20411+34) & 04:14:37.28 & 34:18:51.31 & 163.80 & \(-\)11.98 & 1.9 & 2.79 & \\ J041236+353543 (4C+35.07) & 04:12:36.28 & 35:35:43.20 & 162.58 & \(-\)11.36 & 0.9 & 3.93 & \\ J052109+163822 (3C138) & 05:21:09.93 & 16:38:22.20 & 187.41 & \(-\)11.34 & 8.6 & 7.59 & \\ J053056+133155 (PKS0528+134) & 05:30:56.44 & 13:31:55.30 & 191.37 & \(-\)11.01 & 1.6 & 4.64 & β \\ J042353+345144 (3C115) & 04:23:53.25 & 34:51:44.80 & 164.76 & \(-\)10.24 & 1.3 & 3.88 & \\ J050258+251624 (3C133) & 05:02:58.51 & 25:16:25.16 & 177.73 & \(-\)9.91 & 5.8 & 2.80 & β \\ J060536+014512 (4C+01.17) & 06:05:36.56 & 01:45:12.70 & 206.08 & \(-\)9.37 & 0.6 & 4.07 & \\ J045956+270602 (4C+27.14) & 04:59:56.09 & 27:06:02.90 & 175.83 & \(-\)9.36 & 0.9 & 3.90 & β \\ J051740+235110 (4C+23.14) & 05:17:40.81 & 23:51:10.20 & 180.86 & \(-\)8.01 & 1.0 & 4.32 & β \\ J045323+312924 (3C131) & 04:53:23.34 & 31:29:24.20 & 171.44 & \(-\)7.80 & 2.9 & 4.04 & β \\ J053557+175600 (4C+17.33) & 05:35:57.42 & 17:56:00.70 & 188.22 & \(-\)7.67 & 0.8 & 4.23 & \\ J044708+332747 (4C+33.10) & 04:47:08.90 & 33:27:46.85 & 169.05 & \(-\)7.57 & 1.2 & 2.80 & β \\ J053444+192721 (PKS0531+19) & 05:34:44.51 & 19:27:21.70 & 186.76 & \(-\)7.11 & 7.0 & 6.48 & \\ J054046+172839 (4C+17.34) & 05:40:46.05 & 17:28:39.20 & 189.21 & \(-\)6.93 & 1.5 & 4.50 & \\ J050929+295755 (4C+29.16) & 05:09:29.51 & 29:57:55.80 & 174.77 & \(-\)5.97 & 1.1 & 4.03 & \\ \hline \end{tabular} Note. β(1) Source name; (2, 3) Right ascension (R.A.) and declination (Decl.) coordinates; (4, 5) Galactic coordinates; (6) Flux density at 1.4 GHz; (7) Diffuse background radio continuum emission; (8) Detection of CO(1β0) emission. Here the columns (1)β(7) are from Stanimirovic et al. (2014) and Nguyen et al. (2019).
\end{table}
Table 1: (continued)
ratios are equal to or higher than three as detections. Once the presence of CO emission was confirmed, we fitted Gaussians to the spectra to derive line parameters such as the central velocity (\(v_{\rm CO}\)), full width at half maximum (FWHM; \(\Delta\nu_{\rm CO}\)), and peak main-beam brightness temperature (\(T_{\rm peak,CO}\)). The derived line parameters, as well as the CO integrated intensity (\(I\)(CO); calculated by integrating CO(1-0) emission over a velocity range where the emission is clearly visible) and rms noise, are presented in Table 2.
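The line fitting can be illustrated with a simple least-squares Gaussian fit, as sketched below on a synthetic spectrum; the noise level and channel width mimic the data, but the code is not the fitting pipeline used for Table 2.

```python
# Sketch of a Gaussian line fit yielding v_CO, the FWHM, and T_peak for a CO(1-0)
# spectrum, plus a simple integrated intensity; the spectrum here is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, t_peak, v0, sigma):
    return t_peak * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

channel = 0.32                                   # km/s velocity resolution
v = np.arange(-20.0, 20.0, channel)              # km/s
rng = np.random.default_rng(0)
spectrum = gaussian(v, 5.0, 7.0, 0.8) + rng.normal(0.0, 0.1, v.size)

popt, _ = curve_fit(gaussian, v, spectrum,
                    p0=[spectrum.max(), v[spectrum.argmax()], 1.0])
t_peak, v_co, sigma = popt
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)   # convert sigma to FWHM
i_co = spectrum.sum() * channel                         # K km/s over the window

print(f"v_CO = {v_co:.2f} km/s, FWHM = {fwhm:.2f} km/s, "
      f"T_peak = {t_peak:.2f} K, I(CO) = {i_co:.2f} K km/s")
```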
Finally, we note that two of our target sources (3C092 and 3C108) were observed using both telescopes to check the calibration levels of the TRAO and PMO observations. The difference between the TRAO and PMO observations was 10-20%, which is within the calibration uncertainty of \(\sim\)20% for the TRAO telescope at 115 GHz. This suggests that the obtained CO(1-0) spectra are well calibrated and can be used for further analyses.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Source & \(v_{\rm CO}\) & \(\Delta\nu_{\rm CO}\) & \(T_{\rm peak,CO}\) & \(I\)(CO) & \(\sigma_{\rm rms}\) & Telescope \\ & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (K) & (K km s\({}^{-1}\)) & (K) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline
4C+07.13 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\ PKS0319+12 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.11 & PMO \\
4C+11.15 & \(6.96\pm 0.02\) & \(0.41\pm 0.06\) & \(1.14\pm 0.13\) & \(0.49\pm 0.04\) & 0.07 & PMO \\
4C+16.09 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.17 & TRAO \\
3C090 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\ NV0157+28 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
4C+27.07 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
4C+29.05 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
3C067 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\ J035613+130535 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.09 & PMO \\
4C+28.07 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.08 & PMO \\
4C+28.06 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
3C096 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.08 & PMO \\
5C06.237 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
4C+08.15 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.20 & TRAO \\
3C068.2 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.07 & PMO \\
4C+24.06 & \(6.95\pm 0.09\) & \(1.02\pm 0.21\) & \(0.44\pm 0.08\) & \(0.50\pm 0.09\) & 0.10 & PMO \\
4C+18.11 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
4C+34.07 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.08 & PMO \\
4C+17.23\({}^{\rm a}\) & \(9.10\pm 0.01\) & \(0.64\pm 0.02\) & \(4.96\pm 0.11\) & \(3.43\pm 0.07\) & 0.11 & PMO \\
4C+17.23\({}^{\rm a}\) & \(11.17\pm 0.02\) & \(0.92\pm 0.05\) & \(2.07\pm 0.09\) & \(2.04\pm 0.07\) & 0.11 & PMO \\ NV0232+34 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.08 & PMO \\ B20218+35 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.07 & PMO \\
4C+30.04\({}^{\rm a}\) & \(0.69\pm 0.01\) & \(0.62\pm 0.03\) & \(2.64\pm 0.12\) & \(1.77\pm 0.06\) & 0.10 & PMO \\
4C+30.04\({}^{\rm a}\) & \(0.89\pm 0.19\) & \(2.79\pm 0.53\) & \(0.37\pm 0.09\) & \(1.10\pm 0.13\) & 0.10 & PMO \\ B20326+27 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
3C114\({}^{\rm a}\) & \(8.55\pm 0.11\) & \(0.70\pm 0.27\) & \(0.33\pm 0.10\) & \(0.25\pm 0.06\) & 0.10 & PMO \\
3C114\({}^{\rm a}\) & \(9.48\pm 0.05\) & \(0.46\pm 0.15\) & \(0.66\pm 0.16\) & \(0.33\pm 0.05\) & 0.10 & PMO \\ \hline \end{tabular}
\end{table}
Table 2: Derived CO(1β0) properties
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Source & \(v_{\rm CO}\) & \(\Delta\)\({}_{\rm CO}\) & \(T_{\rm peak,CO}\) & \(I\)(CO) & \(\sigma_{\rm rms}\) & Telescope \\ & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (K) & (K km s\({}^{-1}\)) & (K) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline
4C+17.25 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
4C+26.12 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.11 & TRAO \\
4C+17.26 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.11 & PMO \\ PKS0446+11 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.12 & PMO \\
4C+34.09 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
3C108\({}^{\rm b}\) & 6.14 \(\pm\) 0.11 & 0.40 \(\pm\) 0.21 & 0.69 \(\pm\) 0.29 & 0.30 \(\pm\) 0.07 & 0.13 & TRAO \\
3C108\({}^{\rm b}\) & 9.42 \(\pm\) 0.01 & 1.13 \(\pm\) 0.02 & 11.10 \(\pm\) 0.15 & 13.49 \(\pm\) 0.07 & 0.13 & TRAO \\ B20400+25 & 7.07 \(\pm\) 0.06 & 1.01 \(\pm\) 0.13 & 0.59 \(\pm\) 0.07 & 0.69 \(\pm\) 0.08 & 0.09 & PMO \\
3C092 & 8.80 \(\pm\) 0.01 & 1.66 \(\pm\) 0.02 & 9.68 \(\pm\) 0.10 & 17.22 \(\pm\) 0.12 & 0.10 & PMO \\
4C+21.17 & 10.17 \(\pm\) 0.06 & 0.65 \(\pm\) 0.13 & 0.66 \(\pm\) 0.12 & 0.58 \(\pm\) 0.09 & 0.12 & PMO \\
4C+28.11 & 6.63 \(\pm\) 0.01 & 1.10 \(\pm\) 0.02 & 9.83 \(\pm\) 0.12 & 11.78 \(\pm\) 0.12 & 0.10 & PMO \\
4C+25.14\({}^{\rm a}\) & 3.59 \(\pm\) 0.09 & 1.33 \(\pm\) 0.21 & 0.47 \(\pm\) 0.06 & 0.67 \(\pm\) 0.09 & 0.09 & PMO \\
4C+25.14\({}^{\rm a}\) & 6.78 \(\pm\) 0.01 & 0.74 \(\pm\) 0.02 & 4.78 \(\pm\) 0.10 & 3.80 \(\pm\) 0.09 & 0.09 & PMO \\
4C+25.14\({}^{\rm a}\) & 7.92 \(\pm\) 0.02 & 1.14 \(\pm\) 0.06 & 2.64 \(\pm\) 0.07 & 3.24 \(\pm\) 0.08 & 0.09 & PMO \\
3C093.1 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.09 & PMO \\
4C+07.16 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.09 & PMO \\ PKS0509+152 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.12 & PMO \\ J053239+073243 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.09 & PMO \\
4C+14.14 & \(2.17\pm 0.13\) & 0.34 \(\pm\) 0.42 & 0.60 \(\pm\) 0.57 & 0.28 \(\pm\) 0.08 & 0.15 & PMO \\
3C132 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.10 & PMO \\
4C+09.21 & \(1.99\pm 0.26\) & 2.59 \(\pm\) 0.61 & 0.23 \(\pm\) 0.05 & 0.64 \(\pm\) 0.11 & 0.10 & PMO \\ B20411+34 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.09 & PMO \\
4C+35.07 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.11 & PMO \\
3C138 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.15 & PMO \\ PKS0528+134 & \(9.63\pm 0.02\) & 0.89 \(\pm\) 0.06 & 2.64 \(\pm\) 0.15 & 2.77 \(\pm\) 0.15 & 0.17 & PMO \\
3C115 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.11 & TRAO \\
3C133 & \(7.45\pm 0.02\) & 0.84 \(\pm\) 0.04 & 3.57 \(\pm\) 0.15 & 3.18 \(\pm\) 0.14 & 0.18 & PMO \\
4C+01.17 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.11 & TRAO \\
4C+27.14\({}^{\rm a}\) & 6.03 \(\pm\) 0.04 & 1.00 \(\pm\) 0.10 & 1.11 \(\pm\) 0.08 & 1.19 \(\pm\) 0.07 & 0.09 & PMO \\
4C+27.14\({}^{\rm a}\) & 7.78 \(\pm\) 0.01 & 1.27 \(\pm\) 0.02 & 7.16 \(\pm\) 0.07 & 9.77 \(\pm\) 0.08 & 0.09 & PMO \\
4C+23.14\({}^{\rm b}\) & -3.70 \(\pm\) 0.13 & 1.32 \(\pm\) 0.30 & 0.36 \(\pm\) 0.07 & 0.51 \(\pm\) 0.09 & 0.10 & TRAO \\
4C+23.14\({}^{\rm b}\) & 1.15 \(\pm\) 0.13 & 0.95 \(\pm\) 0.31 & 0.41 \(\pm\) 0.08 & 0.42 \(\pm\) 0.09 & 0.10 & TRAO \\
4C+23.14\({}^{\rm b}\) & 2.37 \(\pm\) 0.13 & 0.84 \(\pm\) 0.30 & 0.39 \(\pm\) 0.09 & 0.35 \(\pm\) 0.07 & 0.10 & TRAO \\
3C131\({}^{\rm a}\) & 4.79 \(\pm\) 0.12 & 1.60 \(\pm\) 0.31 & 0.51 \(\pm\) 0.06 & 0.87 \(\pm\) 0.10 & 0.10 & PMO \\
3C131\({}^{\rm a}\) & 6.86 \(\pm\) 0.02 & 1.31 \(\pm\) 0.04 & 3.67 \(\pm\) 0.07 & 5.16 \(\pm\) 0.09 & 0.10 & PMO \\
4C+17.33 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.09 & TRAO \\
4C+33.10\({}^{\rm b}\) & -2.29 \(\pm\) 0.01 & 1.22 \(\pm\) 0.02 & 5.53 \(\pm\) 0.09 & 7.24 \(\pm\) 0.09 & 0.10 & PMO \\
4C+33.10\({}^{\rm b}\) & 5.91 \(\pm\) 0.01 & 0.42 \(\pm\) 0.06 & 3.09 \(\pm\) 0.35 & 1.38 \(\pm\) 0.09 & 0.10 & PMO \\
4C+33.10\({}^{\rm b}\) & 6.61 \(\pm\) 0.01 & 0.68 \(\pm\) 0.02 & 6.24 \(\pm\) 0.13 & 4.55 \(\pm\) 0.07 & 0.10 & PMO \\ \hline \end{tabular} \
### Planck Data
To estimate the environmental conditions of the GNOMES LOSs such as the strength of UV radiation (\(I_{\rm UV}\)), \(V\)-band dust extinction (\(A_{V}\)), and dust-to-gas ratio (DGR), we used _Planck_ data. Specifically, we employed the images of dust temperature (\(T_{\rm dust}\)), spectral index (\(\beta\)), and dust opacity at 353 GHz (\(\tau_{353}\)) from Planck Collaboration et al. (2016) and extracted the values toward the 58 LOSs by using the Python package "dustmaps" of Green (2018). These extracted values are on 5' scales.
## 4 Environmental Conditions
### Dust and Gas Properties
Before comparing the observed H i and CO properties, we probed the environmental conditions of the GNOMES LOSs based on the _Planck_ data. As the first step, we calculated dust abundances by converting the 353 GHz dust opacity into the \(V\)-band dust extinction:
\[\begin{split} A_{V}\ (\rm mag)&=R_{V}\ E(B-V)\\ &=3.1\times(1.5\times 10^{4}\ \tau_{353}).\end{split} \tag{5}\]
For this calculation, the total-to-selective extinction ratio \(R_{V}=3.1\) for the diffuse ISM is assumed (Mathis, 1990). In addition, the conversion factor of \(1.5\times 10^{4}\) mag is adopted to translate the 353 GHz dust opacity \(\tau_{353}\) into the reddening \(E(B-V)\) based on Planck Collaboration et al. (2014). The derived \(A_{V}\) ranges from 0.2 mag to 4 mag with a median of 1 mag, suggesting that our LOSs probe diffuse to dense interstellar gas.
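As a minimal numerical check of Equation (5), the snippet below converts a 353 GHz dust opacity into \(V\)-band extinction; the \(\tau_{353}\) value is made up purely for illustration.

```python
# Equation (5): Planck 353 GHz dust opacity -> V-band extinction
R_V = 3.1            # total-to-selective extinction ratio for the diffuse ISM
TAU_TO_EBV = 1.5e4   # mag, tau_353 -> E(B-V) conversion (Planck Collaboration 2014)

tau_353 = 2.2e-5     # hypothetical opacity toward one LOS
A_V = R_V * TAU_TO_EBV * tau_353
print(f"A_V = {A_V:.2f} mag")    # ~1 mag, close to the median of the observed LOSs
```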
Next we estimated the DGR by deriving \(A_{V}/N(\rm H\ i)\) for diffuse LOSs where gas is dominated by atomic gas. To identify such LOSs, the following criteria were applied: (1) no CO or OH detection; (2) \(A_{V}<0.5\) mag. The second threshold is motivated by observational and theoretical studies that found H\({}_{2}\) formation at \(A_{V}\sim 0.5\) mag in the solar neighborhood (e.g., Lee et al., 2012; Sternberg et al., 2014). With the two criteria, we found 13 atomic-dominated LOSs and calculated \(N(\rm H\ i)\) by considering both CNM and WNM column densities:
\[\begin{split} N(\rm H\ i)\ (\rm cm^{-2})&=N_{\rm CNM}+N_{\rm WNM}\\ &=1.823\times 10^{18}\int\left(\sum_{n=0}^{N-1}T_{s,n}\,\tau_{0,n}\,e^{-\left[(v-v_{0,n})/\delta v_{n}\right]^{2}}\right.\\ &\left.+\sum_{k=0}^{K-1}T_{0,k}\,e^{-\left[(v-v_{0,k})/\delta v_{k}\right]^{2}}\right)dv\ \left(\rm K\ km\ s^{-1}\right).\end{split} \tag{6}\]
Here the subscripts \(n\) and \(k\) refer to CNM and WNM components, \(\tau_{0}\) is the peak optical depth, \(v_{0}\) is the central velocity, \(T_{0}\) is the peak brightness temperature, and \(\delta v\) is the \(1/e\) width of the component. The derived \(A_{V}/N(\rm H\ i)\) in units of mag \(\rm cm^{2}\) has a range of \((0.3-0.5)\times 10^{-21}\) with a median of \(0.4\times 10^{-21}\) and is in good agreement with typical Galactic DGR values (e.g., Bohlin et al., 1978; Liszt, 2014; Lenz et al., 2017; Nguyen et al., 2018).
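Because each term in Equation (6) is a Gaussian in velocity with 1/e width \(\delta v\), the integral can be evaluated in closed form (each component contributes amplitude \(\times\ \delta v\ \times\sqrt{\pi}\)). The Python sketch below applies this to a hypothetical set of CNM and WNM components; all numbers are illustrative, not fitted values from the survey.

```python
import numpy as np

C_HI = 1.823e18  # cm^-2 per (K km/s)

def gaussian_column(amplitudes, widths_1e):
    """Column density from Gaussian components with peak amplitudes (K) and 1/e
    widths (km/s); each Gaussian integrates to amplitude * width * sqrt(pi)."""
    a = np.asarray(amplitudes, dtype=float)
    d = np.asarray(widths_1e, dtype=float)
    return C_HI * np.sqrt(np.pi) * np.sum(a * d)

# Hypothetical decomposition of one LOS (illustrative values only)
cnm_ts, cnm_tau0, cnm_dv = [50.0, 80.0], [0.5, 0.2], [1.0, 1.5]  # K, -, km/s
wnm_t0, wnm_dv = [3.0], [8.0]                                    # K, km/s

N_cnm = gaussian_column(np.array(cnm_ts) * np.array(cnm_tau0), cnm_dv)
N_wnm = gaussian_column(wnm_t0, wnm_dv)
print(f"N_CNM = {N_cnm:.2e} cm^-2, N_WNM = {N_wnm:.2e} cm^-2, "
      f"N(HI) = {N_cnm + N_wnm:.2e} cm^-2")
```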
Finally, we estimated total gas column densities toward the GNOMES LOSs by dividing the _Planck_-based \(A_{V}\) by the representative DGR of \(0.4\times 10^{-21}\) mag \(\rm cm^{2}\):
\[\begin{split} N(\rm H)_{A_{V}\rm-based}\ (\rm cm^{-2})&=N(\rm H\ i)+2N(\rm H _{2})\\ &=\frac{A_{V}}{0.4\times 10^{-21}\ \rm mag\ cm^{2}}.\end{split} \tag{7}\]
### UV Radiation
We estimated the strength of UV radiation in units of the Draine field using the expression derived for dust grains at high Galactic latitudes (e.g., Boulanger et al., 1996; Paradis et al., 2011):
\[I_{\rm UV}=\left(\frac{T_{\rm dust}}{17.5\ \rm K}\right)^{\beta+4}. \tag{8}\]
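The following lines evaluate Equations (7) and (8) for one hypothetical line of sight; the Planck-based \(T_{\rm dust}\), \(\beta\), and \(A_V\) values are placeholders chosen to be typical of the ranges quoted above.

```python
# Equations (7) and (8) evaluated for one hypothetical LOS
T_dust, beta = 18.0, 1.6                 # Planck dust temperature (K) and spectral index
I_UV = (T_dust / 17.5) ** (beta + 4.0)   # UV field in Draine units, Equation (8)

A_V = 1.0                          # mag, illustrative Planck-based extinction
DGR = 0.4e-21                      # mag cm^2, representative dust-to-gas ratio
N_H = A_V / DGR                    # total hydrogen column density (cm^-2), Equation (7)
print(f"I_UV = {I_UV:.2f}, N(H) = {N_H:.2e} cm^-2")
```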
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Source & \(v_{\rm CO}\) & \(\Delta\tau_{\rm CO}\) & \(T_{\rm peak,CO}\) & \(I(\rm CO)\) & \(\sigma_{\rm rms}\) & Telescope \\ & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (K) & (K km s\({}^{-1}\)) & (K) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline PKS0531+19 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.09 & TRAO \\
4C+17.34 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.11 & TRAO \\
4C+29.16 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & 0.18 & TRAO \\ \hline \end{tabular} Note. (1) Source name; (2) Central velocity; (3) FWHM; (4) Peak brightness temperature; (5) Integrated intensity; (6) rms noise level; (7) Telescope that was used to obtain the spectrum. \({}^{\rm a}\) Multiple peaks are close enough in velocity to be considered as one component based on our threshold (Section 5.1). \({}^{\rm b}\) These sources are considered to have two distinct peaks.
\end{table}
Table 2_(continued)_
Except for four LOSs with relatively high dust temperatures (20-24 K), \(T_{\rm dust}\) is mostly 18 K, resulting in the typical solar neighborhood condition of \(I_{\rm UV}\sim 1\).
### Uncertainties
Our estimation of the strength of UV radiation and total gas column density likely suffers from several systematic uncertainties. Firstly, the _Planck_ dust data (\(T_{\rm dust}\), \(\beta\), and \(\tau_{353}\)) were derived based on the model of modified blackbody emission, and the main assumptions for this derivation, such as a single dust temperature along a LOS, could be invalid under some circumstances. Secondly, the conversion of \(\tau_{353}\) into \(N\)(H) involves a few steps, which are likely reasonable for diffuse gas, but could be less appropriate for H\({}_{2}\)-dominated LOSs. For example, \(R_{V}\) could be higher than 3.1 in dense regions due to grain growth (e.g., Chapman & Mundy, 2009; Steinacker et al., 2010). Grain growth could also cause an underestimation of the DGR (e.g., Roman-Duval et al., 2014). Finally, the different angular resolutions of the _Planck_ and H i measurements could hinder a derivation of accurate gas and dust properties.
This discussion on possible uncertainties demonstrates that various factors affect our derivation of the dust and gas properties. However, it is not straightforward to evaluate the impact of each factor based on the currently available data, and we hence proceeded bearing in mind the uncertainty sources.
## 5 Results
### Observed CO Properties
CO(1-0) is detected toward 19 sources (tan crosses in Figure 1), suggesting a detection rate of 33% at a rms level of 0.1 K. Among these sources, ten show simple spectra with single Gaussians, while nine have multiple peaks. For these peaks, we examined the difference between their central velocities and considered them as one component if the velocity difference is smaller than the sum of their 2 \(\times\) FWHMs. Based on this threshold, only the peaks toward 3C108, 4C+23.14, and 4C+33.10 are regarded as sufficiently distinct components.
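The merging criterion can be expressed as a one-line test. In the sketch below, the two example calls use peak pairs taken from Table 2 (4C+17.23, merged; 4C+33.10, kept distinct); the function itself is a hypothetical helper, not code from the survey pipeline.

```python
def same_component(v1, fwhm1, v2, fwhm2):
    """True if two fitted CO peaks are close enough in velocity to be treated as one
    component: |v1 - v2| < 2*FWHM_1 + 2*FWHM_2 (Section 5.1). Velocities in km/s."""
    return abs(v1 - v2) < 2.0 * fwhm1 + 2.0 * fwhm2

# Peak pairs from Table 2
print(same_component(9.10, 0.64, 11.17, 0.92))   # 4C+17.23 -> True (one component)
print(same_component(-2.29, 1.22, 5.91, 0.42))   # 4C+33.10 -> False (distinct peaks)
```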
The derived central velocities are mostly between 7 km s\({}^{-1}\) and 11 km s\({}^{-1}\) with a few components at lower velocities (\(-\)4 km s\({}^{-1}\) to 4 km s\({}^{-1}\)), suggesting that the observed CO(1-0) emission is associated with Perseus, Taurus, California, and their surrounding regions (e.g., Ridge et al., 2006; Narayanan et al., 2008; Lada et al., 2009). The FWHM line widths are generally small, with a median of 1 km s\({}^{-1}\). Finally, the peak main-beam brightness temperature ranges from 0.2 K to 11.1 K, indicating that we are tracing diffuse (\(\lesssim 1\) K) to dense (\(\gtrsim 5\) K) molecular gas.
### Gas Phases
With the available multi-wavelength data, we can examine gas phases toward the GNOMES LOSs where
\[\begin{split} N(\rm H)&=N(\rm H\ i)+2N(\rm H_{2})\\ &=N(\rm H\ i)_{\rm thin}+N(\rm H\ i)_{\rm thick}+2N(\rm H_{2})_{ \rm dark}+2N(\rm H_{2})_{\rm bright}.\end{split} \tag{9}\]
For our examination, we used \(A_{V}\) as a tracer of total gas column density and estimated \(N\)(H) by dividing \(A_{V}\) by the representative DGR of \(0.4\times 10^{-21}\) mag cm\({}^{2}\) (Section 4.1). In the case of atomic gas, its "true" column density is a sum of the following two: (1) \(N\)(H i)\({}_{\rm thin}\) that is calculated by assuming optically thin emission; (2) \(N\)(H i)\({}_{\rm thick}\) that is missing in the optically thin approximation due to a high opacity. Similarly, molecular gas consists of two types of H\({}_{2}\): (1) \(N\)(H\({}_{2}\))\({}_{\rm dark}\) that is invisible in CO(1-0) emission; (2) \(N\)(H\({}_{2}\))\({}_{\rm bright}\) that is traced by CO(1-0) emission. H\({}_{2}\) with faint or no CO(1-0) emission ("CO-dark" H\({}_{2}\)) is expected due to the different locations of H\({}_{2}\) and CO formation in interstellar clouds (\(A_{V}\sim 0.5\) mag and 2 mag) and exists along with C\({}^{+}\) and C\({}^{0}\)(e.g., Tielens & Hollenbach, 1985; Grenier et al., 2005; Wolfire et al., 2010; Bolatto et al., 2013). Among the four components of gas in Equation (9), the optically thick H i and CO-dark H\({}_{2}\) together are called "dark gas", since they are not probed by traditional gas tracers such as H i and CO(1-0) emission. Our examination of the different gas phases is illustrated in Figure 2.
As the first step of our examination, we derived \(N\)(H i) = \(N\)(H i)\({}_{\rm thin}\) + \(N\)(H i)\({}_{\rm thick}\) based on Equation (6) and separated \(N\)(H i)\({}_{\rm thin}\) and \(N\)(H i)\({}_{\rm thick}\) for the observed 58 LOSs by:
\[N(\rm H\ i)_{\rm thin}\ (\rm cm^{-2})=1.823\times 10^{18}\int T_{\rm B}(v)\ dv\ \left(\rm K\ km\ s^{-1}\right) \tag{10}\]

and

\[N(\rm H\ i)_{\rm thick}\ (\rm cm^{-2})=N(\rm H\ i)-N(\rm H\ i)_{\rm thin}, \tag{11}\]

where \(T_{\rm B}(v)\) is the brightness temperature of the H i emission spectrum.
The derived optically thin H i column densities make up 16-99% (median of 62%) of the total \(N\)(H), while the optically thick H i column densities constitute only 1-38% (median of 12%). These results suggest that observed LOSs are mostly H i-dominated and the contribution from the optically thick H i to the total \(N\)(H) is small (Figure 3 and Table 3).
In addition, we calculated \(2N\)(H\({}_{2}\))\({}_{\rm dark}\) for the 39 CO non-detected LOSs, which include H i-only and H i + CO-dark H\({}_{2}\) LOSs, by
\[\begin{split} 2N(\rm H_{2})_{\rm dark}&=N(\rm H)_{A_{V}- \rm based}-N(\rm H\ i)\\ &=N(\rm H)_{A_{V}-\rm based}-(N(\rm H\ i)_{\rm thin}+N(\rm H\ i) _{\rm thick})\end{split} \tag{12}\]
and present its distribution in Figure 4.
Figure 4 shows that the CO-dark H\({}_{2}\) column density distribution is approximately Gaussian from \(-5\times 10^{20}\) cm\({}^{-2}\) to
\(5\times 10^{20}\) cm\({}^{-2}\) with a peak of \(\sim\)0 cm\({}^{-2}\), suggesting that the 39 LOSs are dominated by atomic-only LOSs and the dispersion of the Gaussian distribution likely results from a slight variation in the DGR. In other words, our adopted DGR of \(0.4\times 10^{-21}\) mag cm\({}^{2}\) is indeed representative, and 2\(N\)(H\({}_{2}\))\({}_{\rm dark}\) values larger than 5 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) are likely reliable. For 14 CO non-detected LOSs with 2\(N\)(H\({}_{2}\))\({}_{\rm dark}\) > \(5\times 10^{20}\) cm\({}^{-2}\), we
Figure 3: Fraction of each gas phase with respect to the total hydrogen. (Top) Optically thick and thin H i in solid blue and dashed tan. (Bottom) CO-dark and CO-bright H\({}_{2}\) in solid blue and dashed tan. The leftward arrow indicates that the derived fractions for CO-bright H\({}_{2}\) are upper limits.
Figure 2: Illustration of LOSs with different gas phases. (Top) H i-only LOSs. (Middle) LOSs with H i and CO-dark H\({}_{2}\) gas. (Bottom) LOSs with H i, CO-dark H\({}_{2}\), and CO-bright H\({}_{2}\) gas.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Gas Phase & Minimum & Maximum & Median \\ & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) \\ (1) & (2) & (3) & (4) \\ \hline \(N\)(H i)\({}_{\rm thin}\) (58) & \(4.61\times 10^{20}\) & \(2.97\times 10^{21}\) & \(1.49\times 10^{21}\) \\ & 16\% & 99\% & 62\% \\ \(N\)(H i)\({}_{\rm thick}\) (58) & \(5.94\times 10^{18}\) & \(1.67\times 10^{21}\) & \(3.52\times 10^{20}\) \\ & 1\% & 38\% & 12\% \\ \(2N\)(H\({}_{2}\))\({}_{\rm dark}\) (14) & \(5.34\times 10^{20}\) & \(2.03\times 10^{21}\) & \(9.60\times 10^{20}\) \\ & 14\% & 54\% & 31\% \\ \(2N\)(H\({}_{2}\))\({}_{\rm bright}^{\rm a}\) (19) & \(1.84\times 10^{20}\) & \(6.35\times 10^{21}\) & \(2.85\times 10^{21}\) \\ & 16\% & 81\% & 44\% \\ \hline \end{tabular} Note. β(1) Gas phase. The number of LOSs that were used for the derivation is indicated in parenthesis; (2) Minimum values of the derived properties. The column density and the fraction with respect to the total hydrogen are presented in row; (3) Same as the second column, but for maximum values; (4) Same as the second column, but for median values. \({}^{\rm a}\) The derived values are upper limits, as CO-dark H\({}_{2}\) was not considered for the derivation.
\end{table}
Table 3: Derived properties of the four gas phases
then found that the ratio of \(2N(\rm H_{2})_{\rm dark}\) to \(N(\rm H)\) changes from 14% to 54% with a median of 31% (Figure 3 and Table 3). As compared to the CO-dark \(\rm H_{2}\), the contribution from the optically thick H i to the total \(N(\rm H)\) is minor (7-34% with a median of 15%). This finding of the CO-dark \(\rm H_{2}\) as a major constituent of the dark gas in the solar neighborhood is in agreement with previous studies such as Lee et al. (2015), Liszt et al. (2018), and Murray et al. (2018). In addition, our median CO-dark \(\rm H_{2}\) fraction of 31% is consistent with the Galactic average value of \(\sim\)30% derived from the _Herschel_ GOT C\({}^{+}\) survey (Langer et al., 2014).
Finally, we estimated upper limits on \(2N(\rm H_{2})_{\rm bright}\) for the 19 CO-detected LOSs by
\[\begin{split} 2N(\rm H_{2})_{\rm bright}&=N(\rm H)_{A_{V} \text{-based}}-N(\rm H\ i)-2N(\rm H_{2})_{\rm dark}\\ &<N(\rm H)_{A_{V}\text{-based}}-N(\rm H\ i)\end{split} \tag{13}\]
and summarize the results in Figure 3 and Table 3. As shown in Figure 2, the CO-detected LOSs probe CO-free \(\rm H_{2}\) shells, as well as CO-bright \(\rm H_{2}\) cores. Separating these two components is not straightforward though, unless a CO-to-\(\rm H_{2}\) conversion factor \(X_{\rm CO}\) is applied to the measured CO integrated intensity to calculate the CO-bright \(\rm H_{2}\) column density. Considering that \(X_{\rm CO}\) could change by more than a factor of 100 over the measured \(A_{V}\sim\) 0.5-4 mag for the CO-detected LOSs (e.g., Lee et al., 2014), we do not take the \(X_{\rm CO}\) approach and provide upper limits on \(2N(\rm H_{2})_{\rm bright}\) by assigning all the measured \(\rm H_{2}\) to CO-bright \(\rm H_{2}\) (inequality sign in Equation 13). In this case, the ratio of the upper limit on the CO-bright \(\rm H_{2}\) column density to the total hydrogen column density ranges from 16% to 81% with a median of 44%.
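To make the bookkeeping of Equations (9)-(13) concrete, the sketch below decomposes the total hydrogen column for one hypothetical line of sight; the column densities are invented for illustration and are quoted in units of \(10^{20}\) cm\({}^{-2}\).

```python
# Bookkeeping for Equations (9)-(13): decomposition of the total hydrogen column
# along one LOS. All column densities below are hypothetical, in units of 1e20 cm^-2.
N_H_av     = 25.0   # A_V-based total column, N(H) = A_V / DGR (Equation 7)
N_HI_thin  = 14.0   # optically thin H I
N_HI_thick = 2.0    # additional H I missed by the optically thin approximation
N_HI = N_HI_thin + N_HI_thick

co_detected = True
if co_detected:
    # CO-detected LOS: assign all remaining H2 to the CO-bright phase (upper limit, Eq. 13)
    print(f"2N(H2)_bright < {N_H_av - N_HI:.1f} x 1e20 cm^-2")
else:
    # CO non-detected LOS: the residual is CO-dark H2 (Equation 12)
    print(f"2N(H2)_dark = {N_H_av - N_HI:.1f} x 1e20 cm^-2")
```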
## 6 Conditions for the formation of molecular gas: observational perspective
### Kinematic Association between H i and CO
To investigate the conditions for the formation of molecular gas, we first compared H i and CO central velocities. For our analysis, we selected CNM and WNM components that are closest to the detected CO(1-0) emission in velocity and calculated absolute velocity differences between H i and CO. The cumulative distribution function (CDF) of these velocity differences is shown in Figure 5.
Figure 5 shows that the velocity difference between the CNM and CO is systematically smaller than that between the WNM and CO. Specifically, the CNM-CO velocity difference ranges from 0.01 km s\({}^{-1}\) to 4.3 km s\({}^{-1}\) (median of 0.4 km s\({}^{-1}\)), while the WNM is offset from CO by 0.04-12.8 km s\({}^{-1}\) (median of 1.7 km s\({}^{-1}\)). This difference between the CNM and WNM becomes more significant when additional components are considered (e.g., including the first and second closest components to CO results in median velocity differences of 1.3 km s\({}^{-1}\) and 4.7 km s\({}^{-1}\) for the CNM and WNM), demonstrating that the CNM is kinematically more closely associated with CO emission. If we take velocity as a proxy for position (e.g., CNM components at different velocities would be located in different places), our result implies that CO-bright molecular gas likely forms in CNM environments.
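The CDFs in Figure 5 are simple empirical distributions of the absolute velocity offsets. A minimal way to compute them is sketched below with made-up velocity differences; the actual values come from the closest CNM and WNM components identified above.

```python
import numpy as np

def empirical_cdf(values):
    """Sorted values and the fraction of the sample at or below each value."""
    x = np.sort(np.asarray(values, dtype=float))
    return x, np.arange(1, x.size + 1) / x.size

# Hypothetical |v_HI - v_CO| offsets (km/s) for the closest CNM and WNM components;
# the numbers are illustrative only, not the measured GNOMES values
dv_cnm = [0.05, 0.2, 0.4, 0.6, 1.1, 2.0]
dv_wnm = [0.3, 0.9, 1.7, 2.4, 4.0, 9.0]

for label, dv in (("CNM-CO", dv_cnm), ("WNM-CO", dv_wnm)):
    x, frac = empirical_cdf(dv)
    print(f"{label}: median offset = {np.median(dv):.1f} km/s")
```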
### Individual H i Properties
Next we examined the properties of individual H i components (\(T_{s}\), \(\tau_{\rm CNM}\), \(N_{\rm CNM}\), and \(N_{\rm WNM}\)) in the presence of CO(1-0) emission. For our examination, we classified the observed Gaussian components into four groups: (1) CO non-detection (all components toward the 39 CO non-detected LOSs); (2) CO detection (all components toward the 19 CO-detected LOSs); (3) Case A (components whose central velocities fall between \(v_{\rm CO}-2\Delta v_{\rm CO}\) and \(v_{\rm CO}+2\Delta v_{\rm CO}\); subset of CO detection); (4) Case B (similar to Case A, but H i central velocities
Figure 4: Histogram of the CO-dark \(\rm H_{2}\) column density for the 39 CO non-detected LOSs.
Figure 5: CDF of the absolute velocity difference between H i and CO (CNM and CO in thick blue; WNM and CO in thin tan). For these CDFs, H i components that are closest to the observed CO emission in velocity were considered.
are within \(\pm\Delta v_{\rm CO}\) from \(v_{\rm CO}\); subset of CO detection and Case A). This classification is motivated by the need to probe the individual H i properties required for the formation of CO-bright molecular gas, and in particular, Cases A and B are designed to select H i components that are kinematically closely associated with CO emission, with small velocity differences (e.g., Figure 9). The number of CNM and WNM components for each group is summarized in Table 4.
For each group, we examined the distributions of spin temperature, optical depth, CNM and WNM column density, and presented them in Figure 6 and Table 5. In general, we found that the CO non-detection and detection groups are almost indistinguishable in terms of their H i properties. On the other hand, Cases A and B have several distinctive features compared to the CO non-detection and detection groups. For example, they do not have CNM components with \(T_{\mathrm{s}}>200\) K and show a factor of 2-5 smaller dispersion in \(T_{\mathrm{s}}\) compared to the CO non-detection and detection groups. In addition, their minimum \(\tau_{\mathrm{CNM}}\) = 0.1, \(N_{\mathrm{CNM}}\) = 2 \(\times\) 10\({}^{19}\) cm\({}^{-2}\), and \(N_{\mathrm{WNM}}\) = 2 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) are an order of magnitude higher than those for the CO non-detection and detection groups. These distinctive features of Cases A and B are not pronounced in the comparison between the CO non-detection and detection groups, mainly because Cases A and B are only a small fraction of the individual H i components (e.g., the Case B CNM and WNM are 25% and 12% of the CO detection CNM and WNM).
All in all, our result implies that CO-bright molecular gas forms in regions where individual CNM components evolve toward colder temperature and higher column density. However, only \(\sim\)20% of the CNM components with \(T_{\mathrm{s}}<200\) K, \(\tau_{\mathrm{CNM}}>0.1\), and \(N_{\mathrm{CNM}}>2\times 10^{20}\) cm\({}^{-2}\) are associated with CO emission (Cases A and B), suggesting that individual CNM components with low temperature and high column density are necessary, but not sufficient for the formation of CO-bright molecular gas. This conclusion is consistent with what Rybarczyk et al. (2022) found from H i and HCO\({}^{+}\) observations of diffuse Galactic LOSs (see Appendix B for details).
### Integrated H i Properties
Finally, we examined the integrated H i properties required for the formation of CO-bright molecular gas by comparing the four groups in terms of total CNM, WNM, CNM+WNM column densities, and CNM fraction (Figure 7 and Table 5). In contrast to the analysis in Section 6.2, these integrated H i properties were derived by considering all (CO detection and non-detection) or several (Cases A and B) Gaussian components. Specifically, we used Equation (6) and applied the relevant velocity limits for integration (whole LOS for the CO non-detection and detection groups and \(v_{\mathrm{CO}}\pm 2\Delta\)\(v_{\mathrm{CO}}\) and \(v_{\mathrm{CO}}\pm\Delta\)\(v_{\mathrm{CO}}\) for Cases A and B) to calculate the total CNM and WNM column densities. In addition, we defined the CNM fraction \(f_{\mathrm{CNM}}=N_{\mathrm{CNM}}/N(\mathrm{H\ i})\), where \(N\)(H i) is given by Equation (6).
Figure 7 shows that the CO detection group generally has slightly higher H i column densities than the CO non-detection group. For example, the median total CNM, WNM, CNM+WNM column densities of the CO detection group are a factor of 1.2-1.5 higher than those of the CO non-detection group. This difference in the integrated column densities is in contrast with the almost identical distributions of the individual H i properties for the two groups (Section 6.2) and implies that the total amount of gas along a LOS (and consequently associated dust extinction) could be one of the important factors for the formation of CO-bright molecular gas. An examination of the CO peak brightness temperature as a function of \(A_{V}\) (Figure 8) indeed reveals that CO emission is detected primarily toward LOSs with \(A_{V}\gtrsim\) 0.5-1 mag, which is comparable to the threshold dust extinction for CO formation in the solar neighborhood (e.g., Pineda et al., 2008; Lee et al., 2014, 2018).
Another interesting finding is that Cases A and B have systematically higher CNM fractions than the other groups (e.g., median CNM fraction of 0.4 for the CO non-detection and detection groups and 0.6 for Cases A and B). These higher CNM fractions could result from two cases: (1) an increase in the column density of individual CNM components; (2) an increase in the relative number of CNM components. As for the first case, Figure 6 and Table 5 confirm that the column density of individual H i components increases toward CO more in the CNM than in the WNM. For example, from the CO detection to Case B, the median CNM and WNM column densities increase by a factor of 2.2 and 1.5, respectively. To evaluate the second case, we then estimated the
CNM component density (\(f_{\rm HCNM}\)) by dividing the number of CNM components by the number of total H i components and compared its distribution between the four groups (Figure 7 and Table 5). Our analysis shows that the CNM component density is indeed systematically higher for Cases A and B than for the CO non-detection and detection groups, which is in line with our previous finding of the CNM being kinematically more closely associated with CO emission (Section 6.1). Based on these results, we conclude that an increase in both the individual CNM column density and the relative number of CNM components could contribute to the higher CNM fraction toward CO.
In summary, our comparison between H i and CO suggests that the formation of CO-bright molecular gas is favored in high column density environments that are able to provide significant shielding against dissociating UV radiation. In these environments, the CNM becomes colder (lower temperature) and more abundant (higher density), facilitating H\({}_{2}\) and consequently CO formation. We will discuss further on the conditions for the formation of molecular gas in Section 8.1.
## 7 Conditions for the formation of molecular gas: theoretical perspective
In this section, we compare the observed CNM properties to the prediction from the S14 model with the aim of investigating the fundamental principles of the H i-to-H\({}_{2}\) transition. Specifically, our approach is to estimate the density expected for H\({}_{2}\) formation from S14(\(n^{\rm exp}\)) and to confront it with the CNM density inferred from our observations (\(n^{\rm obs}_{\rm CNM}\)). As for the theoretically expected density, we recall that the total H i column density of a plane-parallel slab of gas and dust in the S14 model is controlled by the dimensionless parameter \(\alpha G\) (Equation 1). As \(\alpha G\) is a function of \(I_{\rm UV}\) and \(n\) (Equation 4; \(\sigma_{\rm g}\sim 1\) for our case of the solar neighborhood conditions), \(n^{\rm exp}\) can be expressed as
\[n^{\rm exp}\;({\rm cm^{-3}})=\frac{18.4\;I_{\rm UV}}{\exp\left(N_{\rm CNM}/8.4 \times 10^{20}\;{\rm cm^{-2}}\right)-1}, \tag{14}\]
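Equation (14) is straightforward to evaluate; the sketch below tabulates the expected density for a few illustrative CNM column densities, assuming the solar-neighborhood value \(I_{\rm UV}=1\).

```python
import numpy as np

def n_expected(n_cnm_column, i_uv=1.0):
    """Total density (cm^-3) at which the S14 model places the H I-to-H2 transition
    for a given CNM column density (cm^-2), following Equation (14) with sigma_g ~ 1."""
    return 18.4 * i_uv / (np.exp(n_cnm_column / 8.4e20) - 1.0)

# Illustrative column densities spanning the observed range
for n_col in (1e19, 1e20, 1e21):
    print(f"N_CNM = {n_col:.0e} cm^-2 -> n_exp = {n_expected(n_col):.1f} cm^-3")
```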
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{1}{c}{ Properties} & CO non-detection & CO detection & Case A & Case B \\ (1) & (2) & (3) & (4) & (5) \\ \hline \multicolumn{5}{c}{Individual Properties} \\ \hline \(T_{\rm s}\) (K) & 4.11\(-\)479.28 & 1.99\(-\)725.42 & 10.82\(-\)188.27 & 10.82\(-\)130.63 \\ & 53.83 & 48.13 & 46.21 & 46.21 \\ \(\tau_{\rm CNM}\) & 0.01\(-\)3.14 & 0.01\(-\)3.41 & 0.11\(-\)3.41 & 0.11\(-\)3.41 \\ & 0.27 & 0.29 & 0.68 & 0.78 \\ \(N_{\rm CNM}\) (\(10^{20}\;{\rm cm^{-2}}\)) & 0.02\(-\)8.59 & 0.01\(-\)13.70 & 0.17\(-\)10.02 & 0.17\(-\)10.02 \\ & 1.07 & 0.87 & 1.50 & 1.91 \\ \(N_{\rm WNM}\) (\(10^{20}\;{\rm cm^{-2}}\)) & 0.10\(-\)18.57 & 0.17\(-\)16.70 & 1.23\(-\)7.33 & 1.86\(-\)7.33 \\ & 2.22 & 2.49 & 3.52 & 3.79 \\ \hline \multicolumn{5}{c}{Integrated Properties} \\ \hline \(N_{\rm CNM}\) (\(10^{20}\;{\rm cm^{-2}}\)) & 0.58\(-\)19.90 & 1.97\(-\)22.45 & 0.17\(-\)9.93 & 0.01\(-\)8.31 \\ & 5.48 & 7.99 & 2.82 & 1.69 \\ \(N_{\rm WNM}\) (\(10^{20}\;{\rm cm^{-2}}\)) & 1.67\(-\)27.48 & 1.85\(-\)25.54 & 0.03\(-\)5.96 & 0.01\(-\)4.44 \\ & 9.58 & 11.90 & 2.41 & 1.29 \\ \(N\)(H i) (\(10^{20}\;{\rm cm^{-2}}\)) & 4.70\(-\)38.57 & 7.33\(-\)38.08 & 0.81\(-\)14.81 & 0.38\(-\)11.41 \\ & 16.99 & 24.21 & 5.51 & 2.96 \\ \(f_{\rm CNM}\) & 0.12\(-\)0.69 & 0.19\(-\)0.77 & 0.05\(-\) 0.99 & 0.01\(-\)0.99 \\ & 0.36 & 0.37 & 0.56 & 0.57 \\ \(f_{\rm HCNM}\) & 0.20\(-\)0.80 & 0.38\(-\)0.82 & 0.00\(-\)1.00a & 0.00\(-\)1.00a \\ & 0.06 & 0.57 & 0.75 & 1.00 \\ \hline \end{tabular} Note. β (1) Physical properties. Each row displays the range of values for each physical property, with the median value indicated below the range; (2) H i components toward the 39 CO non-detected LOSs; (3) H i components toward the 19 CO-detected LOSs; (4) H i components whose central velocities fall between \(n_{\rm CO}-2\Delta n_{\rm CO}\) and \(n_{\rm CO}+2\Delta n_{\rm CO}\); (5) H i components whose central velocities are in the range of \(n_{\rm CO}\pm\Delta n_{\rm CO}\).
* There are two (Case A) and three (Case B) CO peaks where there is no associated CNM component. \(f_{\rm HCNM}\) is set to zero accordingly, and these cases are still considered for the calculation of the median values.
\end{table}
Table 5: Physical Properties of H i Components
where \(N\)(H i) is substituted with \(N_{\rm CNM}\). This substitution is motivated by the fact that our _Planck_-based \(I_{\rm UV}\) estimates are mostly \(\sim\)1 (Section 4.2). The nearly uniform \(I_{\rm UV}\) values suggest isotropic UV radiation that is most likely attenuated by the widespread WNM. The impact of the WNM on the H i-to-H\({}_{2}\) transition is already taken into account in this manner, and we therefore proceeded by replacing \(N\)(H i) with \(N_{\rm CNM}\). As for the observationally inferred density, we took the thermal pressure log\({}_{10}\)(\(P\)/\(k_{\rm B}\) cm\({}^{-3}\) K) = 3.58 \(\pm\) 0.18 from Jenkins & Tripp (2011) (estimated for the CNM based on _Hubble Space Telescope_ observations of C i multiplets at UV wavelengths) and calculated \(n_{\rm CNM}^{\rm obs}\) by
\[n_{\rm CNM}^{\rm obs}\ ({\rm cm}^{-3})=\left(\frac{10^{3.58\pm 0.18}}{T_{\rm s} }\right). \tag{15}\]
Since \(n^{\rm exp}=n_{1}+2n_{2}\) is the total number density, \(n^{\rm exp}\) should be higher than \(n_{\rm CNM}^{\rm obs}\) for CO-detected LOSs.
### Density versus CNM column density
We estimated \(n^{\rm exp}\) and \(n_{\rm CNM}^{\rm obs}\) for the following three cases: (1) Entire LOS; (2) Case A; (3) Case B. For each LOS, all CNM components are considered for the Entire LOS, while CNM components within \(v_{\rm CO}\pm 2\Delta v_{\rm CO}\) and \(v_{\rm CO}\pm\Delta v_{\rm CO}\) are examined for Cases A and B. We assessed these three cases mainly because of a lack of knowledge on the geometry of the CNM. For example, the Entire LOS would correspond to a case where all CNM components at different velocities belong to one large structure and absorb dissociating UV photons (Figure 9(a)). Meanwhile, Cases A and B would be equivalent to a case where only CNM components near CO clumps provide shielding against UV photons (Figure 9(b)). While being simple pictures, these two scenarios cover small (Cases A and B) to large (Entire LOS) volume filling factors for the CNM. Finally, we used the _Planck_-based \(I_{\rm UV}\) values for Equation (14) and the opacity-weighted mean spin temperature (\(T_{\rm s,r}\)) for Equation (15):
\[T_{\rm s,r}\ ({\rm K})=\frac{\sum_{n=0}^{N-1}\tau_{0,n}T_{\rm s,n}}{\sum_{n=0}^{N -1}\tau_{0,n}}. \tag{16}\]
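For one line of sight, the observational side of the comparison reduces to Equations (15) and (16). The sketch below uses a hypothetical two-component CNM decomposition and the central thermal-pressure value of \(\log_{10}(P/k_{\rm B})=3.58\); all numbers are illustrative.

```python
import numpy as np

def weighted_spin_temperature(tau0, ts):
    """Opacity-weighted mean spin temperature of the CNM components (Equation 16)."""
    tau0, ts = np.asarray(tau0), np.asarray(ts)
    return np.sum(tau0 * ts) / np.sum(tau0)

def n_cnm_obs(ts, log_p_over_kb=3.58):
    """CNM density inferred from the assumed thermal pressure (Equation 15), in cm^-3."""
    return 10.0 ** log_p_over_kb / ts

# Hypothetical CNM decomposition along one CO-detected LOS (peak optical depths, K)
tau0 = [0.8, 0.3]
ts   = [40.0, 90.0]

ts_w = weighted_spin_temperature(tau0, ts)
print(f"weighted T_s = {ts_w:.1f} K, n_CNM^obs = {n_cnm_obs(ts_w):.0f} cm^-3")
```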
The derived \(n^{\rm exp}\) and \(n_{\rm CNM}^{\rm obs}\) values for the three cases are presented as a function of \(N_{\rm CNM}\) in Figure 10.
We found that the three cases show similar trends. For example, the total densities expected from S14 are higher than the inferred CNM densities at low column densities
Figure 6: CDFs of the spin temperature, optical depth, CNM and WNM column density. In each panel, the four groups are shown in different colors and line styles: CO non-detection (thick solid gray), CO detection (thin solid black), Case A (thick dashed blue), and Case B (thin dashed tan).
(\(N_{\rm CNM}\lesssim 10^{20}\) cm\({}^{-2}\)). In other words, the model is in agreement with the lower limits on the total densities constrained by our observations. On the contrary, the S14-based total densities are lower than the inferred CNM densities at high column densities (\(N_{\rm CNM}\gtrsim 10^{20}\) cm\({}^{-2}\)), resulting in a discrepancy between the model and our observations. This discrepancy becomes more significant from Case B to Case A to the Entire LOS case and reaches up to one or two orders of magnitude at the highest column density of \(\sim\)10\({}^{21}\) cm\({}^{-2}\).
### Limitations and Implications
Taken at face value, the discrepancy at high CNM column densities implies that only a small fraction of the total CNM along a LOS participates in H\({}_{2}\) formation (\(\lesssim 13\%\) on average; this fraction is estimated from the LOSs with large discrepan
Figure 7: CDFs of the CNM, WNM, CNM+WNM column density, CNM fraction, and CNM component density. As in Figure 6, the four groups are shown in different colors and line styles: CO non-detection (thick solid gray), CO detection (thin solid black), Case A (thick dashed blue), and Case B (thin dashed tan). The column densities of Cases A and B are lower than those of the CO non-detection and detection by design (integrated over smaller velocity ranges) and are shown here only for the sake of completeness.
cies at \(N_{\rm CNM}>10^{20}\) cm\({}^{-2}\) for the Entire LOS case). In other words, the CNM must be clumpy with a small volume filling factor. While this is a reasonable interpretation, however, our analysis is not without limitations. Below we discuss other sources of the discrepancy and their implications.
One possible source of the observed discrepancy is the assumed thermal pressure of 2500-5800 cm\({}^{-3}\) K for the CNM. While we considered this factor of two variation in the thermal pressure, Goldsmith et al. (2018) recently found a larger variation (\(\sim\)10\({}^{3}\)-10\({}^{4}\) cm\({}^{-3}\) K) from SOFIA [C ii] 158 \(\mu\)m observations of Galactic LOSs (3C131, one of our CO-detected LOSs, was found to have \(P/k_{\rm B}\sim 2500\)-3200 cm\({}^{-3}\) K). If the thermal pressure of the CNM varies by an order of magnitude as the SOFIA observations suggest, the discrepancy between the S14 prediction and our observations would certainly decrease, but it is likely that the discrepancy would still persist at the highest column density of \(\sim\)10\({}^{21}\) cm\({}^{-2}\).
Another possible source of the discrepancy is cosmic-rays. Cosmic-rays ionize atoms and molecules in collisions and can destroy H\({}_{2}\) as follows (e.g., Sternberg et al., 2021):
\[{\rm H_{2}}+{\rm CR}\longrightarrow{\rm H_{2}^{+}}+e, \tag{17}\] \[{\rm H_{2}^{+}}+{\rm H_{2}}\longrightarrow{\rm H_{3}^{+}}+{\rm H}. \tag{18}\]
In addition, cosmic-rays can directly dissociate H\({}_{2}\):
Figure 8: CO peak brightness temperature as a function of _Planck_-based \(A_{V}\). The 19 CO-detected LOSs are shown in tan, while upper limits based on 3\(\sigma\) values are indicated as the downward arrows for the 39 CO non-detected LOSs.
Figure 10: Comparison between the S14-based total number densities (\(n^{\rm exp}\); gray circles) and the observationally inferred CNM densities (\(n_{\rm CNM}^{\rm obs}\); blue squares) as a function of the CNM column density for the 19 CO-detected LOSs. Since the predicted \(n^{\rm exp}\) corresponds to the total gas number density (\(n_{1}+2n_{2}\)) at which H\({}_{2}\) formation is expected to occur, it should be higher than \(n_{\rm CNM}^{\rm obs}\). Finally, the observed variation in the thermal pressure, \(\log_{10}(P/k_{\rm B}\) cm\({}^{-3}\) K) = 3.58 \(\pm\) 0.18, is indicated as the 1\(\sigma\) error bars for \(n_{\rm CNM}^{\rm obs}\). (Top) Entire LOS. (Middle) Case A. (Bottom) Case B.
Figure 9: Possible distributions of the WNM, CNM, and CO (tan, blue, and black, respectively). Based on our finding of the kinematic association between H i and CO (Section 6.1), the WNM is represented as the diffuse background, while the CNM and CO are shown as the smaller embedded structures. The velocities of the CNM and CO are indicated as the blue and black arrows with arbitrary sizes, and UV radiation is presented as the yellow arrows. (Left) Scenario for the Entire LOS, where CNM components at different velocities belong to one large structure and contribute to the shielding of the CO core. (Right) Scenario for Cases A and B, where CNM components close to CO only provide shielding against UV radiation.
\[\mathrm{H_{2}}+\mathrm{CR}\longrightarrow\mathrm{H}+\mathrm{H}. \tag{19}\]
A preliminary study of the impact of cosmic-rays on the H i-to-H\({}_{2}\) transition suggests that a combination of UV photons and cosmic-rays could increase the total H i column density by up to a factor of ten compared to the case with UV photons only (when examined over a reasonable parameter space with the density \(n=10^{1}\)-\(10^{3}\) cm\({}^{-3}\), total column density \(N\)(H) = (1-4) \(\times\) 10\({}^{21}\) cm\({}^{-2}\), UV radiation field \(I_{\mathrm{UV}}\) = 1, and cosmic-ray ionization rate \(\zeta\) = (0.5-2) \(\times\) 10\({}^{-16}\) s\({}^{-1}\); Sternberg & Bialy, in preparation). Interestingly, considering a realistic density increase by a factor of ten from the envelope to the core of a cloud reduces the impact of cosmic-rays significantly, making it almost negligible at the envelope density of \(\sim\)10\({}^{2}\) cm\({}^{-3}\) (comparable to the CNM densities inferred from our observations). These results suggest that detailed studies are needed to properly evaluate the impact of cosmic-rays on H\({}_{2}\) formation.
Finally, the steady-state approximation in S14 could be invalid. A wide range of dynamical processes operate in the ISM, producing continuous flows of gas. For example, dense molecular clouds can undergo gravitational collapse on the free-fall timescale \(t_{\mathrm{ff}}\sim\) 1 Myr (\(n/10^{4}\) cm\({}^{-3}\))\({}^{-1/2}\). Similarly, interstellar turbulence dissipates its kinetic energy on the eddy turnover timescale \(t_{\mathrm{turb}}\sim\) 1 Myr (\(L\)/pc)\({}^{1/2}\) where \(L\) is the eddy size (e.g., Chevance et al., 2022). For the CNM with \(n\sim 10^{2}\) cm\({}^{-3}\) distributed within \(\sim\)100 pc scale molecular clouds, \(t_{\mathrm{ff}}\) and \(t_{\mathrm{turb}}\) are approximately 10 Myr, which are comparable to the H\({}_{2}\) formation timescale \(t_{\mathrm{H_{2}}}\sim\) (10\({}^{9}\)/\(n\)) yr (e.g., Hollenbach et al., 1971). These rough estimates illustrate that the CNM could be heavily perturbed over time, making the steady-state approximation for H\({}_{2}\) formation inappropriate (e.g., Valdivia et al., 2016; Bialy et al., 2021).
In summary, we conclude that the CNM must be clumpy with a small volume filling factor if H\({}_{2}\) formation in the solar neighborhood is determined by a combination of UV radiation, gas density, and metallicity (\(\alpha G\)), as the simple steady-state S14 model predicts. Otherwise, missing elements in the S14 model, such as cosmic-rays and dynamical processes, could play an important role and need to be considered more comprehensively for a better understanding of H\({}_{2}\) formation.
## 8 Discussion
### Conditions for the Formation of Molecular Gas
In Section 6.1, we concluded that CO-bright molecular gas likely forms in CNM environments based on a close association between the CNM and CO in velocity. This conclusion is consistent with Savage et al. (1977), who measured an H\({}_{2}\) kinetic temperature of 45-128 K with a median of 77 K (comparable to our median spin temperature of \(\sim\)50 K) for the medium within 1 kpc of the Sun by analyzing _Copernicus_ UV absorption observations. Similarly, Belloni et al. (2020) compared UV measurements of the H i-to-H\({}_{2}\) transition to a suite of magnetohydrodynamic simulations and claimed that H\({}_{2}\) within 2 kpc of the Sun is built up in CNM structures with a size of \(\sim\)3-10 pc.
In Sections 6.2 and 6.3, we then went further and showed that the formation of CO-bright molecular gas is favored in high column density environments where the CNM becomes colder and more abundant (which is as expected). As to the conditions for more abundant CNM, Saury et al. (2014) examined a large set of hydrodynamic simulations and found that the fraction of the CNM increases with increasing initial density and decreasing turbulent velocity, implying that high densities (\(\gtrsim\) 2 cm\({}^{-3}\)) along with a moderate level of gas compression are required for the formation of the CNM and consequently molecular gas.
Last but not least, our finding of the minimum dust extinction \(A_{V}\gtrsim\) 0.5-1 mag for CO detection (Section 6.3) implies the importance of the total amount of gas available for the formation of CO-bright molecular gas. All things considered, we conclude that accumulating a large amount of atomic gas by dynamical processes (e.g., spiral arms, supernova explosions, and expanding shells that lead to gas compression) and building up cold and dense structures would be a key step in the formation of molecular gas. This conclusion is consistent with what previous observational and theoretical studies suggested (e.g., McKee & Ostriker, 2007; Chevance et al., 2022).
### H i Absorption as a Diagnostic Tool for Probing the Formation and Evolution of Molecular Clouds
While a range of dynamical processes (e.g., spiral arms on large scales and expanding shells and bubbles on small scales) certainly play a role in the formation and evolution of molecular clouds (e.g., McKee & Ostriker, 2007), it remains unclear exactly how they operate and which process dominates. As an accessible tracer of atomic gas, the raw ingredient of molecular clouds, H i emission has been frequently employed to address this issue. For example, Fukui et al. (2009) found that the H i mass of molecular clouds in the Large Magellanic Cloud (LMC) increases with evolutionary stages of star formation and estimated an H i accretion rate of 0.05 \(M_{\odot}\) yr\({}^{-1}\) based on H i line widths. In addition, Tahani et al. (2022) examined the difference in velocity between H i and CO emission for the Perseus molecular cloud and interpreted a systematic positive offset of \(v_{\mathrm{CO}}-v_{\mathrm{H}}\), as an indication of the formation of molecular gas behind compressed H i bubbles. In comparison to H i emission that traces all three phases of neutral atomic gas and often exhibits broad and featureless spectra, H i absorption mostly arises from the CNM (which is more closely associated with molecular gas) and shows relatively narrow and structured spectra, making it an excellent probe for the formation and evolution of molec
ular clouds. As an example, we showed that there is an absolute velocity difference of 0.01-4.3 km s\({}^{-1}\) with a median of 0.4 km s\({}^{-1}\) between the CNM and CO (Section 6.1). Considering that the CNM has a comparable velocity difference of 0.06-2.64 km s\({}^{-1}\) with a median of 0.5 km s\({}^{-1}\) with OH absorption as well (estimated from 10 of our 58 GNOMES LOSs where OH absorption is clearly detected; Petzler et al., 2023), this velocity difference between the CNM and CO is most likely real (not due to different beam sizes) and could suggest that the CNM and CO-bright molecular gas are in slightly different regions and/or dynamically decoupled (e.g., Soler et al., 2019; Beuther et al., 2020; Wang et al., 2020). Unfortunately, our GNOMES LOSs are scattered over a relatively large area of sky and cannot provide insights into how the CNM is distributed and moves about in individual molecular clouds.
The power of H i absorption as a diagnostic tool for probing the formation and evolution of molecular clouds could be harnessed by getting a large number of H i absorption spectra over a fine grid of continuum sources located behind molecular clouds. These spectra could then be analyzed with synthetic H i data from numerical simulations of multiphase gas (e.g., Kim and Ostriker, 2017; Seifried et al., 2022), enabling us to examine the signature of the formation and evolution process imprinted on the properties of the CNM (e.g., kinematics and distribution). Such observations as we propose will be routinely carried out by next generation radio telescopes with a wide field of view, including the Square Kilometre Array (SKA), as Dickey et al. (2022) recently demonstrated with the Australian Square Kilometre Array Pathfinder (ASKAP).
## 9 Summary
This paper presents a detailed study on the formation of molecular gas in the solar neighborhood. To probe the conditions for the H i-to-H\({}_{2}\) transition, H i emission and absorption spectra toward 58 LOSs at \(b<-5^{\circ}\) (Arecibo) were analyzed along with CO(1-0) and dust data (TRAO, PMO, and _Planck_). These multi-wavelength data were compared to the one-dimensional steady-state H\({}_{2}\) formation model of Sternberg et al. (2014) as well to provide insights into the fundamental principles of the H i-to-H\({}_{2}\) transition. Our key results are as follows.
1. Among the observed 58 sources, 19 sources show clear CO(1-0) emission, suggesting a detection rate of 33% at a rms level of 0.1 K (angular and spectral resolutions of 48\({}^{\prime\prime}\) and 0.32 km s\({}^{-1}\), respectively).
2. The decomposition of gas into atomic and molecular phases shows that the observed LOSs are mostly H i-dominated. In addition, the CO-dark H\({}_{2}\), not the optically thick H i, is found as a major constituent of the dark gas in the solar neighborhood.
3. The CNM shows a systematically smaller velocity difference from CO emission than the WNM. When CO-closest components are considered, a median value of the absolute velocity difference between the CNM and CO is 0.4 km s\({}^{-1}\), as opposed to 1.7 km s\({}^{-1}\) for the WNM and CO. This implies that the CNM is kinematically (and spatially if we take velocity as a proxy for position) more closely associated with CO.
4. When CO-associated components (ones within CO velocity ranges) are considered, the CNM and WNM exhibit distinctive properties. Namely, the CO-associated components have the spin temperature \(T_{\rm s}<\) 200 K, optical depth \(\tau_{\rm CNM}>\) 0.1, and column densities \(N_{\rm CNM}>\) 2 \(\times\) 10\({}^{19}\) cm\({}^{-2}\) and \(N_{\rm WNM}>\) 2 \(\times\) 10\({}^{20}\) cm\({}^{-2}\). This suggests that CO-bright molecular gas forms in environments where individual CNM components evolve toward colder temperature and higher column density.
5. The CO-associated components have higher total column densities (\(V\)-band dust extinction \(A_{V}\gtrsim\) 0.5 mag) and CNM fractions (median of 0.6) than those outside CO emission, indicating that high column density environments where the CNM becomes more abundant facilitate the formation of CO-bright molecular gas.
6. A comparison with the prediction from Sternberg et al. (2014) infers that the CNM must be clumpy with a small volume filling factor. An alternative possibility would be that missing ingredients in the model, such as cosmic-rays and dynamical processes, play an important role in the H i-to-H\({}_{2}\) transition in the solar neighborhood.
We thank Chang-Goo Kim, Jeong-Gyu Kim, and Amiel Sternberg for insightful discussions and the referee for helpful comments that improved this work. In addition, we acknowledge Interstellar Institute's program "With Two Eyes" and the Paris-Saclay University's Institut Pascal for hosting discussions that nourished the development of the ideas behind this work. Part of the CO data were obtained with the 13.7 m telescope of the Qinghai Station of Purple Mountain Observatory, and we appreciate the help from Dr. Sun, Jixian and all the staff members of the PMO-13.7m telescope. S.B. thanks the Physics department at the Technion, Israel, and the Center for Theory and Computations (CTC) at the University of Maryland, College Park, for financial support. B.B. acknowledges support from NSF grant AST-2009679 and NASA grant No. 80NSSC20K0500 and is grateful for the generous support of the David and Lucile Packard Foundation and the Alfred P. Sloan Foundation. D.L. acknowledges support from the National Natural Science Foundation of China project NSFC11988101. D.R.R. acknowledges support by the NSF through award SOSPA6-023 from the NRAO. S.S. acknowledges the support by the National Aeronautics and Space Administration under Grant No. 4200766703 and the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.
## Appendix A Comparison between the Gnomes H i and CO spectra
In Figure A1, we compare the H i absorption and CO emission spectra toward the 19 CO-detected LOSs. The measured optical depth (\(\tau_{\rm{CNM}}\)) and main-beam brightness temperature (\(T_{\rm{MB,CO}}\)) are aligned at peaks for ease of comparison and are presented in blue and gray, respectively.
## Appendix B Comparison to the ALMA-NOEMA/21-SPONGE survey
Recently, Rybarczyk et al. (2022) observed 20 LOSs from the 21-SPONGE survey (21 cm Spectral Line Observations of Neutral Gas with the Karl G. Jansky Very Large Array; Murray et al., 2015, 2018) using the Atacama Large Millimeter/submillimeter Array (ALMA) and the Northern Extended Millimeter Array (NOEMA) and obtained HCO\({}^{+}\), HCN, HNC, and C\({}_{2}\)H absorption spectra. By comparing the observed molecular species with the existing H i and dust properties, the authors found that molecular absorption is clearly detected toward LOSs with \(A_{V}\gtrsim 0.25\) mag. In addition, they revealed that molecular gas is preferentially associated with cold (\(T_{\rm{s}}<80\) K) and optically thick (\(\tau_{\rm{CNM}}>0.1\)) CNM structures.
While both the GNOMES and ALMA-NOEMA/21-SPONGE surveys attempt to probe the conditions for the formation of molecular gas, our approach is uniquely different. First, unlike the ALMA-NOEMA/21-SPONGE survey targeting random LOSs throughout the Milky Way, we focus on individual molecular clouds and their surrounding environments. Second, we employ the most commonly used tracer of molecular gas, CO(1-0) emission, for our analyses. To examine the difference between the two surveys, we selected 11 LOSs at \(b<-5^{\circ}\) from the ALMA-NOEMA/21-SPONGE survey and extracted 64 CNM and 67 WNM components. In addition, we identified H i components that are kinematically closest (minimum absolute velocity difference) to the observed CO emission (GNOMES; 19 LOSs) and HCO\({}^{+}\) absorption (ALMA-NOEMA/21-SPONGE; 8 LOSs) and compared their properties in Figure 2 and Table 1.
Figure A1: (_continued_)
Figure 14 shows that the CNM and WNM at \(b<-5^{\circ}\) from the two surveys have in general systematically different properties: i.e., the GNOMES components have lower spin temperatures, higher optical depths, and higher column densities (black and gray CDFs). This difference persists for \(T_{\rm s}\) and \(N_{\rm WNM}\), when we focus on H i components that are closely associated with molecular gas (tan and pink CDFs). One of the likely reasons for this difference is that the GNOMES survey probes higher column density environments by concentrating on molecular clouds and their surroundings. For example, the CO-detected GNOMES LOSs have a median \(A_{V}\) of 2 mag, which is a factor of two higher than that for the HCO\({}^{+}\)-detected 21-SPONGE LOSs. Another possibility is that CO and HCO\({}^{+}\) formation requires different conditions. In summary, bearing in mind the small number of LOSs in the ALMA-NOEMA/21-SPONGE survey, we conclude that the two surveys sample slightly different populations of the CNM and WNM, while showing consistent results regarding the evolution of H i properties toward molecular gas. The two surveys are thus highly complementary to each other.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline (1) & Properties & \(T_{\rm s}\) & \(\tau_{\rm CNM}\) & \(N_{\rm CNM}\) & \(N_{\rm WNM}\) \\ & & (K) & & (\(10^{20}\) cm\({}^{-2}\)) & (\(10^{20}\) cm\({}^{-2}\)) \\ \hline \multicolumn{5}{c}{GNOMES} \\ \hline (2) & All & 1.99\(-\)725.42 & 0.01\(-\)3.41 & 0.01\(-\)13.70 & 0.10\(-\)18.57 \\ & & 52.28 & 0.27 & 1.00 & 2.32 \\ (3) & CO-closest & 10.82\(-\)228.91 & 0.01\(-\)1.38 & 0.07\(-\)2.94 & 0.20\(-\)16.11 \\ & & 40.11 & 0.45 & 0.81 & 2.13 \\ \hline \multicolumn{5}{c}{ALMA-NOEMA/21-SPONGE} \\ \hline (4) & All & 7.19\(-\)1551.67 & 9.37E\(-\)4\(-\)1.68 & 3.63E-3\(-\)8.86 & 2.32E-18\(-\)5.37 \\ & & 65.47 & 0.10 & 0.37 & 0.75 \\ (5) & HCO\({}^{+}\)-closest & 20.46\(-\)619.19 & 0.02\(-\)1.65 & 0.28\(-\)5.36 & 0.16\(-\)0.95 \\ & & 65.07 & 0.62 & 3.88 & 0.70 \\ \hline \end{tabular} Note. (1) Physical properties. The ranges are given, and the median values are provided below the ranges; (2) H i components toward the 58 GNOMES LOSs; (3) H i components that are closest to the measured 19 \(T_{\rm peak,CO}\); (4) H i components toward the 11 ALMA-NOEMA/21-SPONGE LOSs; (5) H i components that are closest to the observed 8 \(\tau_{\rm peak,HCO^{+}}\).
\end{table}
Table 10: Physical Properties of Individual H i Components |
2305.07347 | Music Rearrangement Using Hierarchical Segmentation | Music rearrangement involves reshuffling, deleting, and repeating sections of
a music piece with the goal of producing a standalone version that has a
different duration. It is a creative and time-consuming task commonly performed
by an expert music engineer. In this paper, we propose a method for
automatically rearranging music recordings that takes into account the
hierarchical structure of the recording. Previous approaches focus solely on
identifying cut-points in the audio that could result in smooth transitions. We
instead utilize deep audio representations to hierarchically segment the piece
and define a cut-point search subject to the boundaries and musical functions
of the segments. We score suitable entry- and exit-point pairs based on their
similarity and the segments they belong to, and define an optimal path search.
Experimental results demonstrate the selected cut-points are most commonly
imperceptible by listeners and result in more consistent musical development
with less distracting repetitions. | Christos Plachouras, Marius Miron | 2023-05-12T09:50:54Z | http://arxiv.org/abs/2305.07347v1 | # Music rearrangement using hierarchical segmentation
###### Abstract
Music rearrangement involves reshuffling, deleting, and repeating sections of a music piece with the goal of producing a standalone version that has a different duration. It is a creative and time-consuming task commonly performed by an expert music engineer. In this paper, we propose a method for automatically rearranging music recordings that takes into account the hierarchical structure of the recording. Previous approaches focus solely on identifying cut-points in the audio that could result in smooth transitions. We instead utilize deep audio representations to hierarchically segment the piece and define a cut-point search subject to the boundaries and musical functions of the segments. We score suitable entry- and exit-point pairs based on their similarity and the segments they belong to, and define an optimal path search. Experimental results demonstrate the selected cut-points are most commonly imperceptible by listeners and result in more consistent musical development with less distracting repetitions.
Christos Plachouras, Marius Miron (Music Technology Group, Universitat Pompeu Fabra, Spain)

Index terms: music rearrangement, music structure analysis, music segmentation, spectral clustering, path finding
## 1 Introduction
Altering the duration of music pieces is an important part of audiovisual content creation for advertisements, documentaries, film, short-form videos, vlogs. Rearranging music is often the preferred approach for music retiming, because the alternatives come with inherent disadvantages: time-stretching may reduce audio quality and alter the intended feeling of the composition, while fades may remove the intended start and end of a piece, resulting in a more abrupt or unpolished experience. Music rearrangement is, however, a tedious process often delegated to expert music engineers, who have to spend time listening to the recording, understanding its structure, and performing suitable edits.
The democratization of video editing has brought even more interest in music rearrangement, as reflected by the rearrangement systems video editing software are starting to integrate [1]. In spite of its commercial appeal, the task of automatic music rearrangement has only sparingly been explored in scientific research and has appeared under a variety of names used almost synonymously, including music retargeting [2, 3], resynthesis [4], retiming [1], and remixing [1].
To avoid ambiguity, we define a rearrangement of the recording of a piece of music to be a standalone piece that has a different duration to the original and that is constructed solely from segments of the original piece. It should have the same beginning and ending as the original and have smooth transitions between the reshuffled segments without unnatural discontinuities of music information such as melodies, chords, and instrumentation. In contrast to music summarization [5] and thumbnailing [6], the rearrangement needs to stand as a music piece on its own. It is designed to be experienced by listeners, rather than be used as a proxy representation for other tasks such as music classification [7]. Furthermore, unlike in music remixing [8], a rearranged piece must solely include unedited segments from the original piece.
Our primary contribution in this work is the introduction of a novel automatic music rearrangement approach leveraging the hierarchical structure of the music piece. Unlike previous approaches which frame rearrangement as a suitable cut-point identification task and do not have inherent considerations for musical development (see Sec. 2), we anchor rearrangement and cut-point identification to the extracted segment boundaries and musical functions. Additionally, we introduce the use of deep audio features from a music auto-tagging model to estimate the perceptibility of segment transitions. Finally, we provide a freely accessible, open-source, modular implementation of our method [9].
## 2 Related work
Previous approaches frame music rearrangement as the task of identifying suitable cut-points, pairs of entry- and exit-frames in the audio between which a jump can be performed without any perceived discontinuity [10, 3, 4, 2, 11]. While smooth transitions are critical for creating a consistent rearrangement, we argue that this approach does not give adequate consideration to the musical development of the rearrangement. A segment of the recording might get selected to succeed another because it shares a small snippet of audio with it, but there is no guarantee that it will feel like the natural continuation of the current segment, nor that it contributes to a consistent development of musical ideas in the rearrangement. This means that inevitably these systems [10, 3, 4] can lead to unnatural, distracting repetitions [2], or inconsistent, rushed, or dull evolution of musical ideas given the rearrangement's duration.
Wenner et al. [2] partially try to address this issue by analyzing the music piece's structure and ensuring jump-points don't occur between segments of the same type. While this approach alleviates some of the potentially unnatural repetitions, it does not fully address musical development in the rearrangement. Unlike our approach where segments are automatically reshuffled, the authors instead present some ideas for manually editing the resulting structure of the rearrangement.
Towards improving the imperceptibility of transitions, a Convolutional Neural Network was used to classify whether a frame contains a good transition or cut-point [11].
We instead opt to use embeddings from a model that has been trained on a variety of styles [12], is robust to recording conditions [12], and has been shown to improve hierarchical music segmentation [13]. We use these embeddings both for analyzing the hierarchical structure of pieces, and for identifying smooth transitions across and within segments.
To test the relevance of musical development and the suitability of the deep embeddings, we run a listening test (see Sec. 4) on rearrangements of five songs from different music traditions and recording conditions. In contrast to previous approaches that only evaluate transition quality, we evaluate development (e.g. no abrupt changes in musical ideas) and balance (e.g. lack of repetitions), along with consistency (e.g. good cut-points).
Conceptually, our work is also related to that of Thalmann et al. [14], where dynamic music objects existing at multiple hierarchies are used as modifiable and reusable music content for the web. To that extent, our task is defined by constraints regarding the input modality (audio), the fixed duration, and the properties of the resulting piece (good development, consistency, and balance).
## 3 Methods
Our proposed method has the following steps: (1) we construct an encoding capturing the global patterns formed by various musical elements as well as their evolution over time, (2) we decompose the encoding hierarchically to uncover groupings of musical ideas at various scales, (3) we identify and rank suitable transition points across and within structural segments, and (4) we define rearrangement as an optimal path search using the extracted transition points.
### Structure encoding
When rearranging a music recording, a sound engineer inevitably changes its structure by removing, repeating, or reshuffling musical ideas such as melodic phrases, chord progressions, lyrics, and others. Towards avoiding their interruption, important care is given to choosing cut-points that give good transitions between groupings of these ideas [14]. These groupings exist at various temporal scales and may have hierarchical relationships between them; for example, a group of elements forming the chorus of a song can have a repeated chord sequence, which in turn can have melodic phrases each of which can be contained in a measure. We will try to uncover these hierarchical groupings using the hierarchical structure analysis method proposed by McFee and Ellis [15], with the subsequent enhancements proposed by Salamon et al. [13].
We first compute the beat times \(\mathbf{b}=\{b_{1}\..\ b_{N}\}\) and downbeat times \(\mathbf{o}=\{o_{1}\..\ o_{M}\}\) from the audio, with \(N\) and \(M\) the total number of beats and downbeats respectively. To do this, we replace the beat tracking used by the previous works with BeatNet [16], which uses a Recurrent Convolutional Neural Network and particle filtering to improve beat tracking performance and also provide downbeat and meter tracking. We aim to create an encoding that captures global patterns in various music elements such as harmony, instrumentation, mood, and energy, but also the homogeneity of successive frames. To do this we use the features proposed by Salamon et al. [13]: deep embeddings \(\mathbf{X}^{T\times E}\) learned from a music auto-tagging model [12] and the Constant-Q Transform \(\mathbf{Y}^{T\times F}\) of the audio as repetition features, and deep embeddings \(\mathbf{Z}^{T\times H}\) from a few-shot sound event detection model [17] to encode homogeneity, where \(T\) refers to the total number of time frames and \(E,F,H\) to the respective feature dimensions. We beat-synchronize all features to the beat track \(\mathbf{b}\) by aggregating the feature vectors of the frames belonging to each beat, and we obtain the corresponding beat-synchronized feature matrices \(\mathbf{\hat{X}}^{N\times E},\mathbf{\hat{Y}}^{N\times F},\mathbf{\hat{Z}}^{N\times H}\).
As proposed by McFee and Ellis [15], we compute weighted, undirected recurrence graphs from the repetition features such that
\[\mathbf{R}_{\mathbf{X}}(i,j)=\left\{\begin{array}{ll}exp(-\sqrt{|\mathbf{ \hat{x}}_{i}-\mathbf{\hat{x}}_{j}|^{2}}/\mu)&\mathbf{\hat{x}}_{i},\mathbf{ \hat{x}}_{j}\ \text{mutual kNN}\\ 0&\text{otherwise}\end{array}\right., \tag{1}\]
where \(\mathbf{\hat{x}}_{i}\) represents the \(E\)-dimensional column of \(\mathbf{\hat{X}}^{N\times E}\) at beat \(\mathbf{b}_{i}\) and \(\mu\) represents the median distance between furthest nearest neighbors. The recurrence matrix \(\mathbf{R}_{\mathbf{Y}}\) is computed in the same way from \(\mathbf{\hat{Y}}^{N\times F}\). Importantly, since similar beats are connected, diagonals in these matrices are consecutive connected beats that indicate a repeated pattern (see Fig. 1).
To encode homogeneity, we construct a sequence matrix from \(\mathbf{\hat{Z}}^{N\times H}\) of the distances of each beat with its immediate neighbors such that
\[\mathbf{R}_{\mathbf{Z}}(i,j)=\left\{\begin{array}{ll}exp(-|\mathbf{\hat{Z}} _{i}-\mathbf{\hat{Z}}_{j}|^{2}/\sigma^{2})&|i-j|=1\\ 0&\text{otherwise}\end{array}\right., \tag{2}\]
where \(\sigma\) is the median distance between beats.
We then compute a weighted sum of the matrices as such:
\[\mathbf{R}=0.25\mathbf{R}_{\mathbf{X}}+0.25\mathbf{R}_{\mathbf{Y}}+0.5\mathbf{ R}_{\mathbf{Z}} \tag{3}\]
We refer to Salamon et al. [13] for further details of their implementation, combination weights, and other parameter values that resulted in an improvement in hierarchical structure analysis.
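As a rough illustration of Eqs. (1)-(3), the sketch below builds a mutual-kNN affinity recurrence matrix, a first-order sequence (homogeneity) matrix, and their weighted sum with plain NumPy. The feature matrices are random placeholders standing in for the auto-tagging, CQT, and sound-event embeddings, and the neighbourhood size and bandwidth choices are illustrative assumptions rather than the exact values used in this work.

```python
import numpy as np

def knn_affinity(F, k=5):
    """Mutual-kNN affinity recurrence matrix over beat-synchronized feature rows."""
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]                           # k nearest neighbours per beat
    knn = np.zeros_like(d, dtype=bool)
    rows = np.arange(len(F))[:, None]
    knn[rows, nn] = True
    mutual = knn & knn.T                                        # keep only mutual kNN links
    mu = np.median(d[rows, nn][:, -1])                          # median distance to the k-th neighbour
    return np.where(mutual, np.exp(-d / mu), 0.0)

def sequence_matrix(F):
    """Affinity between consecutive beats only (homogeneity term)."""
    d = np.linalg.norm(np.diff(F, axis=0), axis=-1)
    sigma = np.median(d) if np.median(d) > 0 else 1.0
    S = np.zeros((len(F), len(F)))
    idx = np.arange(len(F) - 1)
    S[idx, idx + 1] = S[idx + 1, idx] = np.exp(-d**2 / sigma**2)
    return S

rng = np.random.default_rng(0)
n_beats = 128
X_hat = rng.normal(size=(n_beats, 32))   # placeholder auto-tagging embeddings
Y_hat = rng.normal(size=(n_beats, 36))   # placeholder CQT features
Z_hat = rng.normal(size=(n_beats, 16))   # placeholder sound-event embeddings

R = 0.25 * knn_affinity(X_hat) + 0.25 * knn_affinity(Y_hat) + 0.5 * sequence_matrix(Z_hat)
print(R.shape)
```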
### Hierarchical segmentation
As proposed in the related works [15, 13], we first compute the symmetric normalized Laplacian \(\mathbf{L}\) of \(\mathbf{R}\):
\[\mathbf{L}=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{R}\mathbf{D}^{-1/2}, \tag{4}\]
where \(\mathbf{D}\) is the diagonal matrix of \(\mathbf{R}\) and \(\mathbf{I}\) is the identity matrix. We define \(\mathcal{E}_{k}\) as the set containing the first \(k\) eigenvectors of \(\mathbf{L}\), where \(k=\{1\..\ 12\}\). Sets with a higher \(k\) will contain a more granular representation of \(\mathbf{L}\). We do spectral clustering [18] on each set \(\mathcal{E}_{k}\) using \(k\) clusters, therefore increasing the granularity of analysis as the representation granularity increases. The result is a multi-level segmentation, where in each level every beat is assigned a segment type, and the cluster change-points determine the segment boundaries (see Fig. 2). We construct the global set of segments \(\mathcal{S}\) across all levels, where each of the generated segments is defined by its level \(k\) and a pair of beat indices \(\{p,q\}\) denoting its starting and ending boundary.
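A minimal sketch of this multi-level decomposition, assuming the combined matrix \(\mathbf{R}\) from above: compute the symmetric normalized Laplacian, take its first \(k\) eigenvectors, and cluster them with k-means for increasing \(k\). The helper names are our own illustrations; see [15, 13] for the full procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def hierarchical_labels(R, max_k=12):
    """Per-level beat labels via spectral clustering of the combined matrix R."""
    deg = R.sum(axis=1)
    deg[deg == 0] = 1e-12                              # guard against isolated beats
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(R)) - D_inv_sqrt @ R @ D_inv_sqrt   # symmetric normalized Laplacian
    _, eigvecs = np.linalg.eigh(L)                     # eigenvectors, ascending eigenvalues
    levels = []
    for k in range(1, max_k + 1):
        E_k = eigvecs[:, :k]                           # first k eigenvectors
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(E_k)
        levels.append(labels)
    return levels

def boundaries(labels):
    """Beat indices where the cluster label changes (segment starts)."""
    return [0] + [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
```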
While the beat-level analysis was important for making granular comparisons, musically the measure (delimited by two consecutive downbeats) is too fundamental to interrupt. We therefore opt to quantize the beat-level segmentation by replacing boundary beats in \(\mathcal{S}\) with their closest downbeat from the downbeat track \(\mathbf{o}\) we computed. For a music piece with 120 beats per minute (BPM), quantization limits the duration precision of the rearrangement to \(\pm 2\) seconds. Precision also depends on the duration of the shortest segment extracted from the segmentation; if the user requires predictable precision within a certain threshold, they might set the segmentation to stop when for a level \(k\) a segment of a certain duration is produced, rather than using a fixed number of levels.

Figure 1: Recurrence matrix \(\mathbf{R}\) with segment overlay in light yellow
### Transition identification
We categorize our suitable transition search into two types: segment transition points, which refer to transitions between structural segments, and internal transition points, which refer to transitions that happen within a single structural segment.
#### 3.3.1 Segment transition points
We conceptualize rearrangement as the task of selecting and ordering segments from \(S\) so that the sum of their durations in seconds is close to the target duration for the rearrangement. One key problem of previous approaches is the resulting distracting repetitions [2]. To avoid these, we take advantage of the segment labels assigned from spectral clustering. Segments of the same type will have similar music information, and as the cluster number and hence segmentation level \(k\) increases, segments with the same label will likely have even stronger similarity. When considering segments to succeed the current segment, we will therefore eliminate segments with the same label.
While the segmentation helps avoid interrupting groupings of ideas, this does not guarantee that jumps between segment boundaries are smooth. Effects such as reverb or other musical elements may "leak" into the succeeding segment, or the succeeding segment may be too different to the current one to feel like its natural continuation. For this reason, we propose an algorithm for identifying smooth transitions around segment boundaries. Given segment \(\alpha\) with boundaries \(\{p_{\alpha},q_{\alpha}\}\) and a candidate segment \(\beta\) with boundaries \(\{p_{\beta},q_{\beta}\}\), we search in an area with a radius of \(4\) measures (or equal to the length of either segment if it is shorter than 4 measures) around \(q_{\alpha}\) and \(p_{\beta}\) for the best entry- and exit-point respectively.
To do this, we use the combined recurrence matrix \(\mathbf{R}\) we used for encoding structure in Section 3.1. We can conceptualize the column indices of this matrix as current beats, and the row indices as the target beats, meaning the search area for the transition between \(\alpha\) and \(\beta\) can be expressed as the square submatrix of \(\mathbf{R}\) defined by columns \(q_{\alpha-4}\) to \(q_{\alpha+4}\) and rows \(p_{\beta-4}\) to \(p_{\beta+4}\). We search this area for diagonals of connected beats that would indicate a repeated pattern. Given the number of beats \(g\) in each measure of a given music piece, we only consider diagonals whose elements' indices are an integer multiple of \(g\) apart, so that we retain the position within the metrical structure during a transition.
We select the diagonal that is the longest and the closest to the boundary as long as its length is longer than 1 measure. From this diagonal, we select the midpoint, and store the column index as the entry-point and row index as the exit-point (see Fig. 1). We prioritize long patterns and only consider those that are at least a measure long to increase the confidence that a musical idea is repeated and not simply a sole beat being similar to another, a possible issue we found with another system in our evaluation (see Sec. 4). We use this algorithm for all combinations of non-overlapping current and candidate segments to extract a set of suitable transition points \(\mathcal{T}\) anchored to the piece's structural segments.
To prioritize the best transition points in \(\mathcal{T}\), as explained further in 3.4, we define a metric for the cost \(\mathcal{C}(i,j)\) of each transition \(\mathcal{T}_{i,j}\). For points on segment boundaries that did not have a more ideal neighbor point transition, we set \(\mathcal{C}(i,j)=1\). For other points, we infer the cost from \(\mathbf{R}\), which reflects the feature similarity between two points. If it is a forward transition (i.e. \(i>j\)), we set \(\mathcal{C}(i,j)=1-\mathbf{R}_{i,j}\), while if it is a backward transition (i.e. \(i<j\)), we penalize it such that \(\mathcal{C}(i,j)=1-\mathbf{R}_{i,j}/4\) so that backward transitions are discouraged but still an option if the rearrangement aims to extend the music piece.
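The diagonal search around a pair of boundaries can be sketched as follows (a simplified variant of our own, not the exact implementation of [9]): within the two search windows, walk every diagonal whose offset keeps the metrical phase, keep runs of connected beats that are at least one measure long, and return the midpoint of the longest run as an (entry, exit) pair. Thresholds and tie-breaking here are illustrative; the same routine can also be used for the internal transitions described next.

```python
def best_transition(R, cur_beats, cand_beats, g, thresh=0.0):
    """Longest in-phase diagonal of connected beats between two beat windows.

    R: combined recurrence matrix (rows = target beats, columns = current beats).
    cur_beats: candidate entry beats around the current segment boundary.
    cand_beats: candidate exit beats around the candidate segment boundary.
    g: beats per measure. Returns the run midpoint (entry_beat, exit_beat) or None.
    """
    cur, cand = list(cur_beats), list(cand_beats)
    best = None  # (run_length, entry, exit)
    for i0 in cur:
        for j0 in cand:
            if (j0 - i0) % g != 0:
                continue  # keep the position within the metrical structure
            length = 0
            while (i0 + length in cur and j0 + length in cand
                   and R[j0 + length, i0 + length] > thresh):
                length += 1
            if length >= g and (best is None or length > best[0]):
                mid = length // 2
                best = (length, i0 + mid, j0 + mid)
    return None if best is None else (best[1], best[2])

# Illustrative usage with windows of +/- 4 measures (in beats) around the
# boundaries q_alpha (end of current segment) and p_beta (start of candidate):
# pair = best_transition(R, range(q_alpha - 4 * g, q_alpha + 4 * g),
#                        range(p_beta - 4 * g, p_beta + 4 * g), g=4)
```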
#### 3.3.2 Internal transition points
While spectral clustering allowed us to separate musical ideas and group them by similarity, it is likely that consecutive repetitions will simply be grouped in a single segment. This means that in cases such as a chorus having 4 almost identical repetitions of a chord progression or maybe even lyrics, we may miss the cut-points for skipping some of those repetitions in order to make a short rearrangement. To alleviate this, we traverse some of the middle segmentation levels (\(k\in\{4,5,6\}\)) that are likely comprised of segments large enough to contain repetitions. On a segment-per-segment basis, we search for diagonals in \(\mathbf{R}\) with the same restrictions used for identifying segment transitions, only this time the search area for a segment \(\alpha\) with boundaries \(\{p_{\alpha},q_{\alpha}\}\) will be the submatrix defined by columns \(p_{\alpha-4}\) to \(p_{\alpha+4}\) and rows \(p_{\alpha-4}\) to \(p_{\alpha+4}\). We add the midpoint of the best diagonal to the set \(\mathcal{T}\) of transition points with a cost of \(\mathcal{C}(i,j)=1-\mathbf{R}_{i,j}\).
### Optimization
Our goal is to produce a music rearrangement of a determined duration, with the additional restriction that we select a first and a last segment from any levels for the start and end of the rearrangement respectively. Although we initially conceptualized rearrangement as the reshuffling of the hierarchical segmentation, we extracted a set of transition points along with their transition cost that we can now use to reframe rearrangement as a path finding problem. We use the approach proposed by Stoller et al. [11] for finding a solution under similar constraints. We want to construct a path defined by a sequence of beats \(\mathbf{A}=\{a_{1},a_{2},...,a_{L}\}\) with \(a_{i}\in\mathbf{b}\), where \(L\) is the number of beats corresponding to the desired piece length. \(\mathbf{A}\) is constructed by minimizing the cost between two consecutive beats \(\mathcal{C}(a_{i},a_{i+1})\) subject to:
1. \(a_{1}=b_{1}\) and \(a_{L}=b_{N}\), meaning the rearrangement starts and ends from the same beats as the original;
2. a point \(a_{l}=b_{i}\) only being succeeded by \(b_{i+1}\) or \(b_{j}\) if the transition \(\mathcal{T}_{i,j}\) exists; and
3. the duration of \(A\) being within a radius equal to the mean measure duration from the target duration.
Then, the final cost is computed as the sum over all consecutive beat pairs in \(\mathbf{A}\): \(\sum_{i=1}^{L-1}\mathcal{C}(a_{i},a_{i+1})\).
Figure 2: Hierarchical structure with \(k=\{1\..\ 12\}\) levels

Stoller et al. [19] frame this problem as a single-source shortest-paths problem in a directed weighted graph \(G=(V,W,a_{i})\) with vertex \(v_{i,j}\in V\) representing the selection of \(b_{i}\) as \(a_{j}\) and edge \(w_{i,j}=(v_{i,k},v_{j,k+1})\), \(w_{i,j}\in W\) for every pair of \((a_{i},a_{j})\). The authors consider all possible transitions between \(N\) beats in an \(N\times N\) cost matrix. In our case, we only considered transitions that are adequately smooth, with a fallback of transitions during segment boundaries and the possibility of dynamically adjusting the segmentation level. This means that we draw edges only between beat pairs for which a transition exists, including when \(b_{i+1}\) succeeds \(b_{i}\), resulting in a much smaller number of possible paths and thus a faster optimization. Similarly to Stoller et al. [19], we use Dijkstra's shortest path algorithm to find the optimal path in \(G\).
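A compact sketch of the resulting path search (our own reduction for illustration, not the authors' exact implementation): vertices are (beat, position) pairs, edges exist only for the natural successor and for the extracted transitions, and Dijkstra's algorithm via a heap finds the cheapest beat sequence of the requested length. The transition table and its costs are assumed to come from the previous step, and the duration constraint is simplified to a fixed number of beats.

```python
import heapq

def rearrange(n_beats, target_len, transitions, natural_cost=0.0):
    """Cheapest beat sequence a_1..a_L with a_1 = b_1 and a_L = b_N.

    transitions: dict {(i, j): cost} of allowed jumps from beat i to beat j;
    the natural continuation i -> i + 1 is always allowed with cost natural_cost.
    """
    start, goal = (0, 0), (n_beats - 1, target_len - 1)
    pq = [(0.0, start)]
    best_cost, parent = {start: 0.0}, {}
    while pq:
        cost, (beat, pos) = heapq.heappop(pq)
        if (beat, pos) == goal:
            break
        if cost > best_cost.get((beat, pos), float("inf")) or pos + 1 >= target_len:
            continue
        succ = [(beat + 1, natural_cost)] if beat + 1 < n_beats else []
        succ += [(j, c) for (i, j), c in transitions.items() if i == beat]
        for nxt, c in succ:
            state, new_cost = (nxt, pos + 1), cost + c
            if new_cost < best_cost.get(state, float("inf")):
                best_cost[state] = new_cost
                parent[state] = (beat, pos)
                heapq.heappush(pq, (new_cost, state))
    if goal not in best_cost:
        return None
    path, state = [goal[0]], goal
    while state in parent:
        state = parent[state]
        path.append(state[0])
    return path[::-1]
```

For instance, calling rearrange with the total number of beats, the desired length in beats, and a dictionary built from the extracted set of transition points would return the ordered beat indices to concatenate.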
## 4 Evaluation
We conducted a listening study to assess the quality of different music rearrangement systems. Unfortunately, to our best knowledge, no other rearrangement approach is publicly available and open-source. We therefore decided to use the popular Remix tool in Adobe software [1], a widely-used commercially-available rearrangement tool, through Adobe Audition version 23.0. We compare it with our own rearrangement system, as well as a manual rearrangement produced by a sound engineer, who was asked to make no further edits apart from deleting, repeating, and rearranging sections and optionally using cross-fades during transitions.
While previous surveys [11, 20, 2, 4] focus solely on assessing how perceivable transitions are, we argue for the importance of other musical elements in the rearrangement, such as structural coherence and lack of repetitions. We therefore provide users with complete rearrangements of 5 songs to rate on a 5-point scale. We use a Likert multi-scale experimental design where the participants are asked to rate the songs presented in random order on 4 axes: (1) **consistency**: the audio feels consistent and well put together, without any noticeable, distracting, or abrupt discontinuities in rhythm, dynamics, or melody, or other concatenation errors; (2) **development**: the structure and musical content are arranged in a sensible order, without abrupt changes in musical ideas, dynamics, or mood; (3) **balance**: the piece has a sensible balance of novelty and repetitiveness given its length, without excessive repetitions nor a continuously changing theme; (4) **overall quality**: for its duration, the piece can be considered a good, standalone piece of music, with good consistency, development, and balance.
We choose songs from different cultural traditions and styles: 2 popular songs, 1 western classical song, 1 Bollywood song with low recording quality, and 1 Greek island dance song with a noisy phone recording. By varying recording quality we wanted to test the robustness of the automatic methods to audio distortions. We restrict the length of the 5 rearrangements to 38, 42, 46, 50, and 54 seconds to keep the experiment around 20 minutes long. At the start of the experiment, we provide users with short, negative examples for each rating axis. We used the webMUSHRA web listening study interface [21] to conduct the study, presenting the 3 rearrangements of each song (Human, Adobe Remix, and Ours) one after the other in random order, without ever giving any information about the rearrangement systems. A total of 17 participants completed the study.
The mean ratings for all 5 songs are presented for each system in Fig. 3. The error bars represent \(95\%\) confidence intervals. The manual rearrangement did not receive a perfect rating, although it did collect the highest overall mean rating. Although the sound engineer was satisfied with this rearrangement, manually rearranging a piece requires a lot of subjective judgment, some of which might not be shared by the listeners. This is especially evident in the axis of development, where it is hard to judge what is possible in rearrangements with very small durations.
Overall, the closed-source Remix tool has lower mean ratings than our open-source system in all axes. A notable example of a failure is in the case of "Dancing Queen" by Abba (ID: 1), where the Remix tool seems to consider some beats with the same chord a viable transition point. This leads to noticeable discontinuities, as the surrounding content does not match, a behavior that inspired the longest-diagonal search in our method. Since the error bars are quite large, more participants are required to assess the differences between the three rearrangement versions, and a further listening study should propose more challenging constraints that can stress-test the systems. We note that the Bollywood and the Greek song had lower ratings in terms of quality, and this may be explained by the unfamiliarity of the participants with the song, lower audio quality, or simply that they are difficult to rearrange. Further experiments are needed to disentangle between preference, recording quality, and rearrangement quality.
## 5 Conclusions
In this paper we propose a method for rearranging music recordings that is anchored to their hierarchical structure. We use a semantically and acoustically rich input feature representation to segment pieces and identify smooth transition points across and within structural segments. Experimental results show that on average our structure-oriented approach can produce more consistent musical development, less noticeable cuts, and overall better quality rearrangements, close to what a sound engineer would produce. However, we refrain from generalizing our conclusions because of the limited number of rearrangements evaluated on a larger scale. Unlike previous approaches that evaluate rearrangement solely on audio snippets that contain transitions [11, 20, 2, 4], we evaluate axes such as consistency and balance that require the whole rearrangement to be played, which substantially increases the survey time. Future work includes a larger-scale user evaluation with a larger variety of songs and more qualitative feedback, as well as the investigation of quantitative approaches for evaluating rearrangements on the axes of consistency, development, and balance. With that said, to our best knowledge our system is the only freely-accessible open-source implementation for music rearrangement, so we encourage readers to experiment with the rearrangement Python package and the listening examples [9].
Figure 3: Ratings across 4 axes for each system and song |
2305.18267 | Analysis of the (1+1) EA on LeadingOnes with Constraints | Understanding how evolutionary algorithms perform on constrained problems has
gained increasing attention in recent years. In this paper, we study how
evolutionary algorithms optimize constrained versions of the classical
LeadingOnes problem. We first provide a run time analysis for the classical
(1+1) EA on the LeadingOnes problem with a deterministic cardinality
constraint, giving $\Theta(n (n-B)\log(B) + n^2)$ as the tight bound. Our
results show that the behaviour of the algorithm is highly dependent on the
constraint bound of the uniform constraint. Afterwards, we consider the problem
in the context of stochastic constraints and provide insights using
experimental studies on how the ($\mu$+1) EA is able to deal with these
constraints in a sampling-based setting. | Tobias Friedrich, Timo KΓΆtzing, Aneta Neumann, Frank Neumann, Aishwarya Radhakrishnan | 2023-05-29T17:40:52Z | http://arxiv.org/abs/2305.18267v1 | # Analysis of the (1+1) EA on LeadingOnes with Constraints
###### Abstract.
Understanding how evolutionary algorithms perform on constrained problems has gained increasing attention in recent years. In this paper, we study how evolutionary algorithms optimize constrained versions of the classical LeadingOnes problem. We first provide a run time analysis for the classical (1+1) EA on the LeadingOnes problem with a deterministic cardinality constraint, giving \(\Theta(n(n-B)\log(B)+n^{2})\) as the tight bound. Our results show that the behaviour of the algorithm is highly dependent on the constraint bound of the uniform constraint. Afterwards, we consider the problem in the context of stochastic constraints and provide insights using experimental studies on how the (\(\mu\)+1) EA is able to deal with these constraints in a sampling-based setting.
Evolutionary algorithms, chance constraint optimization, run time analysis, theory.
## 1. Introduction
Evolutionary algorithms (Kolmogorov, 1954) have been used to tackle a wide range of combinatorial and complex engineering problems. Understanding evolutionary algorithms from a theoretical perspective is crucial to explain their success and give guidelines for their application. The area of run time analysis has been a major contributor to the theoretical understanding of evolutionary algorithms over the last 25 years (Bauer, 1997; Goyal and Goyal, 2000; Goyal and Goyal, 2000; Goyal, 2000). Classical benchmark problems such as OneMax and LeadingOnes have been analyzed in a very detailed way, showing deep insights into the working behaviour of evolutionary algorithms for these problems. In real-world settings, problems that are optimized usually come with a set of constraints which often limits the resources available. Studying classical benchmark problems even with an additional simple constraint such as a uniform constraint, which limits the number of elements that can be chosen in a given benchmark function, poses significant new technical challenges for providing run time bounds of even simple evolutionary algorithms such as the (1+1) EA.
OneMax and the broader class of linear functions (Bauer, 1997) have played a key role in developing the area of run time analysis during the last 25 years, and run time bounds for linear functions with a uniform constraint have been obtained (Bauer, 1997; Goyal and Goyal, 2000). It has been shown in (Bauer, 1997) that the (1+1) EA needs exponential time to optimize OneMax under a specific linear constraint, which points to the additional difficulty that such constraints impose on the search process. Tackling constraints by taking them as additional objectives has been shown to be quite successful for a wide range of problems. For example, the behaviour of evolutionary multi-objective algorithms has been analyzed for submodular optimization problems with various types of constraints (Kolmogorov, 1954; Goyal and Goyal, 2000). Furthermore, the performance of evolutionary algorithms for problems with dynamic constraints has been investigated in (Kolmogorov, 1954; Goyal and Goyal, 2000).
Another important area involving constraints is chance constrained optimization, which deals with stochastic components in the constraints. Here, the presence of stochastic components in the constraints makes it challenging to guarantee that the constraints are not violated at all. Chance-constrained optimization problems (Goyal and Goyal, 2000; Goyal, 2000) are an important class of the stochastic optimization problems (Goyal and Goyal, 2000) that optimize a given problem under the condition that a constraint is only violated with a small probability. Such problems occur in a wide range of areas, including finance, logistics and engineering (Goyal and Goyal, 2000; Goyal, 2000; Goyal, 2000). Recent studies of evolutionary algorithms for chance-constrained problems focused on a classic knapsack problem where the uncertainty lies in the probabilistic constraints (Kolmogorov, 1954; Goyal and Goyal, 2000). Here, the aim is to maximise the deterministic profit subject to a constraint which involves stochastic weights and where the knapsack capacity bound can only be violated with a small probability of at most \(\alpha\). A different stochastic version of the knapsack problem has been studied in (Kolmogorov, 1954). Here profits involve uncertainties and weights are deterministic. In that work, Chebyshev and Hoeffding-based fitness functions have been introduced and evaluated. These fitness functions discount expected profit values based on uncertainties of the given solutions.
Theoretical investigations for problems with chance constraints have gained recent attention in the area of run time analysis. This includes studies for monotone submodular problems (Kolmogorov, 1954) and special instances of makespan scheduling (Kolmogorov, 1954). Furthermore, detailed run time analyses have been carried out for specific classes of instances for the chance constrained knapsack problem (Kolmogorov, 1954; Goyal and Goyal, 2000).
### Our contribution
In this paper, we investigate the behaviour of the (1+1) EA for the classical LeadingOnes problem with additional constraints. We first study the behaviour for the case of a uniform constraint which
limits the number of 1-bits that can be contained in any feasible solution. Let \(B\) be the upper bound on the number of 1-bits that any feasible solution can have. Then the optimal solutions consists of exactly \(B\) leading 1s and afterwards only 0s. The search for the (1+1) EA is complicated by the fact that when the current solution consists of \(k<B\) leading 1s, additional 1-bits not contributing to the fitness score at positions \(k+2,\ldots,n\) might make solutions infeasible. We provide a detailed analysis of such scenarios in dependence of the given bound \(B\).
Specifically, we show a tight bound of \(\Theta(n^{2}+n(n-B)\log(B))\) (see Corollary 6). Note that (Bang and Zhai, 2016) shows the weaker bound of \(O(n^{2}\log(B))\), which, crucially, does not give insight into the actual optimization process at the constraint. Our analysis shows in some detail how the search progresses. In the following discussion, for the current search point of the algorithm, we call the part of the leading 1s the _head_ of the bit string, the first 0 the _critical bit_ and the remaining bits the _tail_. While the size of the head is less than \(B-(n-B)\), optimization proceeds much like for unconstrained LeadingOnes; this is because the bits in the tail of size about \(2(n-B)\) are (almost) uniformly distributed, contributing roughly a number of \(n-B\) many 1s additionally to the \(B-(n-B)\) many 1s in the head. This stays in sum (mostly) below the cardinality bound \(B\), occasional violations changing the uniform distribution of the tail to one where bits in the tail are 1 with probability a little less than \(1/2\) (see Lemma 3).
Once the threshold of \(B-(n-B)\) many 1s in the head is passed, the algorithm frequently runs into the constraint. For a phase of equal LeadingOnes value, we consider the random walk of the number of 1s of the bit string of the algorithm. This walk has a bias towards the bound \(B\) (its maximal value), where the bias is light for LeadingOnes-values just a bit above \(B-(n-B)\) and getting stronger as this value approaches \(B\). Since progress is easy when not at the bound of \(B\) many 1s in the bit string (by flipping the critical bit and no other) and difficult otherwise (additionally to flipping the critical bit, a 1 in the tail needs to flip), the exact proportion of time that the walk spends in states of less than \(B\) versus exactly \(B\) many 1s is very important. In the final proofs, we estimate these factors and have corresponding potential functions reflecting gains (1) from changing into states of less than \(B\) many 1s and (2) gaining a leading 1. Bounding these gains appropriately lets us find asymptotically matching upper and lower bounds using the additive drift theorem (Kolmogorov, 1992).
In passing we note that two different modifications of the setting yield a better time of \(O(n^{2})\). First, this time is sufficient to achieve a LeadingOnes-values of \(B-c(n-B)\) for any \(c>0\) (see Corollary 7). Second, considering the number of 1s as a secondary objective (to be minimized) gives an optimization time of \(O(n^{2})\) (see Theorem 8).
Afterwards, we turn to stochastic constraints and investigate an experimental setting that is motivated by recent studies in the area of chance constraints. We consider LeadingOnes with a stochastic knapsack chance constraint, where the weights of a linear constraint are chosen from a given distribution. In the first setting, the weight of each item is chosen independently according to a Normal distribution \(N(\mu,\sigma^{2})\). A random sample of weights is feasible if the sum of the chosen sampled weights does not exceed a given knapsack bound \(B\). In any iteration, all weights are resampled independently for all evaluated individuals. Our goal is to understand the maximal stable LeadingOnes value that the algorithm obtains. In the second setting, which we study empirically, the weights are deterministically set to 1 and the bound is chosen uniformly at random within an interval \([B-\epsilon,B+\epsilon]\), where \(\epsilon>0\) specifies the uncertainty around the constraint bound. For both settings, we examine the performance of the \((1+1)\) EA and the \((10+1)\) EA for different values of \(B\) and show that a larger parent population has a highly positive effect in these stochastic settings.
The paper is structured as follows. In Section 2, we introduce the problems and algorithms that we study in this paper. We present our run time analysis for the LeadingOnes problem with a deterministic uniform constraint in Section 3. In Section 4, we discuss a way to obtain a \(\Theta(n^{2})\) bound on the run time for the same problem, and we report on our empirical investigations for the stochastic settings in Section 5. Finally, we finish with some concluding remarks. Note that some proofs are omitted due to space constraints.
## 2. Preliminaries
In this section we define the objective function, constraints and the algorithms used in our analysis. With \(|x|_{1}\) we denote the number of 1s in a bit string \(x\in\{0,1\}^{n}\).
### Cardinality Constraint
Let \(f:\{0,1\}^{n}\to\mathbb{R}\), \(B\leq n\) and for \(x\in\{0,1\}^{n}\), let \(x_{i}\) denote the \(i\)-th bit of \(x\). In this paper, optimizing \(f\) with cardinality constraint \(B\) means finding \(\max_{x\in\{0,1\}^{n}}f(x)\) s.t. \(\sum_{i=1}^{n}x_{i}\leq B\).
### Stochastic Constraint
Let \(f:\{0,1\}^{n}\to\mathbb{R}\), \(B\leq n\) and for \(x\in\{0,1\}^{n}\), let \(x_{i}\) denote the \(i\)-th bit of \(x\). In this paper we empirically analyse the following optimization problem under a stochastic constraint with uncertainty in the weights,
\[\max_{x\in\{0,1\}^{n}}f(x)\text{ s.t }\sum_{i=1}^{n}w_{i}\cdot x_{i}\leq B, \text{ where }w_{i}\sim N(\mu,\sigma^{2}).\]
Let \(f:\{0,1\}^{n}\to\mathbb{R}\), \(B\leq n\) and for \(x\in\{0,1\}^{n}\), let \(x_{i}\) denote the \(i\)-th bit of \(x\). In this paper we also empirically analyse the following optimization problem under a stochastic constraint with uncertainty in the bound,
\[\max_{x\in\{0,1\}^{n}}f(x)\text{ s.t }|x|_{1}\leq y,\text{ where }y\sim U(B-\epsilon,B+\epsilon).\]
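For concreteness, a sampled feasibility check for the two stochastic constraints above might be sketched as follows; the function names and default parameter values are illustrative assumptions, not the exact settings of our experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

def feasible_normal_weights(x, B, mu=1.0, sigma=0.5):
    """Uncertainty in the weights: w_i ~ N(mu, sigma^2), resampled per evaluation."""
    w = rng.normal(mu, sigma, size=len(x))
    return float(np.dot(w, x)) <= B

def feasible_uniform_bound(x, B, eps=2.0):
    """Uncertainty in the bound: y ~ U(B - eps, B + eps), all weights equal to 1."""
    y = rng.uniform(B - eps, B + eps)
    return int(np.sum(x)) <= y
```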
### Objective Function
We consider the LeadingOnes function as our objective with cardinality and stochastic constraints for our analysis.
\(\text{LeadingOnes}:\{0,1\}^{n}\to\mathbb{R}\) is the function which maps a bit string of length \(n\) to the number of 1s before the first 0 in the bit string. For every \(x\in\{0,1\}^{n}\), \(\text{LeadingOnes}(x)=\sum_{i=1}^{n}\prod_{j=1}^{i}x_{j}\).
### (\(\mu\)+1) EA
The (\(\mu\)+1) EA on a real valued fitness function \(f\) with constraint \(B\) is given in Algorithm 1. The (\(\mu\)+1) EA at each iteration maintains a population of size \(\mu\). The initial population \(P_{0}\) has \(\mu\) random bit strings chosen uniformly. At each iteration \(t>0\), a bit string is chosen uniformly at random from \(P_{t}\) followed by a mutation operation which flips each bit of the chosen bit string with probability \(\frac{1}{n}\). The mutated bit string is added to \(P_{t}\) and the bit string with the least
fitness among the \(\mu+1\) individuals is removed. Since we can also sample a bit string which violates the constraint, we consider the following function for optimization.
\[g(x)=\begin{cases}f(x),&\text{if }|x|_{1}\leq B;\\ B-|x|_{1},&\text{otherwise}.\end{cases}\]
```
1  \(P_{0}\leftarrow\mu\) individuals from \(\left\{0,1\right\}^{n}\) chosen u.a.r.;
2  \(t=0\);
3  while stopping criterion not met do
4      \(x\leftarrow\) uniform random bit string from \(P_{t}\);
5      \(y\leftarrow\) flip each bit of \(x\) independently with probab. \(1/n\);
6      \(P_{t}=P_{t}\cup\left\{y\right\}\);
7      \(P_{t+1}=P_{t}\setminus\left\{\text{an individual }x\in P_{t}\text{ with least }g(x)\text{ value}\right\}\);
8      \(t=t+1\);
```
**Algorithm 1**: (\(\mu\)+1) EA on fitness function \(f\) with constraint \(B\)
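For illustration, a minimal Python sketch of Algorithm 1 on LeadingOnes with the deterministic cardinality constraint is given below. It follows the penalty function \(g\) above; it is an illustrative sketch, not the code used for the experiments in Section 5, and the parameter values in the example call are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def leading_ones(x):
    """Number of 1s before the first 0."""
    zeros = np.flatnonzero(x == 0)
    return len(x) if len(zeros) == 0 else int(zeros[0])

def g(x, B):
    """Constraint handling: LeadingOnes if feasible, penalty B - |x|_1 otherwise."""
    ones = int(np.sum(x))
    return leading_ones(x) if ones <= B else B - ones

def mu_plus_one_ea(n, B, mu=1, max_iters=200_000):
    pop = [rng.integers(0, 2, size=n) for _ in range(mu)]
    for t in range(max_iters):
        parent = pop[rng.integers(mu)]                    # uniform parent selection
        flips = rng.random(n) < 1.0 / n                   # standard bit mutation
        child = np.where(flips, 1 - parent, parent)
        pop.append(child)
        pop.pop(int(np.argmin([g(x, B) for x in pop])))   # remove a least-fit individual
        best = max(pop, key=lambda x: g(x, B))
        if leading_ones(best) == B and int(np.sum(best)) <= B:
            return t + 1, best                            # optimum (B leading 1s, then 0s) found
    return max_iters, max(pop, key=lambda x: g(x, B))

iters, best = mu_plus_one_ea(n=50, B=40, mu=1)
print(iters, leading_ones(best))
```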
## 3. Unmodified Setting
In this section we give a tight analysis of the (1+1) EA on the objective LeadingOnes with cardinality constraint \(B\).
We start with a technical lemma which we need for our proof of the upper bound.
Lemma 1.: _For \(t\geq 0\), let \(x_{t}\) denote the parent bit string at the \(t\)-th iteration while the (1+1) EA is optimizing LeadingOnes with the cardinality constraint \(B\). And for \(t>0\), let \(A_{t}\) denote the event that \(|x_{t+1}|_{1}=B\) and \(LO(x_{t+1})=LO(x_{t})\). Then \(Pr(A_{t}\bigm{|}|x_{t}|_{1}<B)\leq\frac{n-B}{n}\)._
Proof.: First note that, if \(|x_{t}|_{1}=k<B\) and \(C_{t}\) denotes the event that \(x_{t+1}\) is formed by flipping \(B-k\) of the \(n-k-1\) many \(0\)-bits (excluding the left-most \(0\)) to \(1\), then
\[Pr(A_{t}\bigm{|}|x_{t}|_{1}<B)\leq Pr(C_{t}\bigm{|}|x_{t}|_{1}<B).\]
The event \(A_{t}\) is a sub-event of \(C_{t}\): to obtain an offspring with \(B\) many \(1\)s while keeping the number of leading \(1\)s unchanged, at least \(B-k\) of the \(n-k-1\) many \(0\)-bits other than the left-most \(0\) must be flipped to \(1\), whereas \(C_{t}\) places no restriction on the remaining bits. Hence,
\[Pr(C_{t})=\binom{n-k-1}{B-k}\left(\frac{1}{n}\right)^{B-k}=\frac{(n-k-1)\cdot(n-k-2)\cdots(n-B)}{1\cdot 2\cdots(B-k)}\cdot\left(\frac{1}{n}\right)^{B-k}\leq\frac{n-B}{n}.\]
The last inequality holds because, for every \(r>0\), \(\frac{n-k-r}{n}\leq 1\).
In Theorem 2 below, we give an upper bound on the expected run time of the (1+1) EA on LeadingOnes with cardinality constraint \(B\). Later we show that this bound is tight by proving a matching lower bound.
Theorem 2.: _Let \(n,B\in\mathbb{N}\) and \(B<n\). Then the expected optimization time of the (1+1) EA on LeadingOnes with cardinality constraint \(B\) is \(O\left(n^{2}+n(n-B)\log B\right).\)_
Proof.: From (8, Lemma 3), we know that the (1+1) EA is expected to find a feasible solution within \(O(n\log(n/B))\) iterations. Now we calculate how long it takes in expected value to find the optimum after a feasible solution is sampled.
To do this, we construct a potential function that yields a drift value of at least \(1\) at each time \(t\) until the optimum is found. For \(i\in\left\{0,\cdots,B\right\}\), let \(g_{B}(i)\) be the potential of a bit string \(x\in\left\{0,1\right\}^{n}\) with exactly \(B\) number of \(1\)s and \(LO(x)=\textsc{LeadingOnes}(x)=i\). For \(i\in\left\{0,\cdots,B-1\right\}\), let \(g_{<B}(i)\) be the potential of a bit string \(x\in\left\{0,1\right\}^{n}\) with less than \(B\) number of \(1\)s and \(LO(x)=i\).
Let \(g_{B}(0)=0\) and \(g_{<B}(0)=\frac{en}{B}\). And for every \(i\in\left\{1,\cdots,B\right\}\), let
\[g_{B}(i)=en\left(1+\frac{e\cdot(n-B)}{B-i+1}\right)+g_{<B}(i-1),\]
and for every \(i\in\left\{1,\cdots,B-1\right\}\), let
\[g_{<B}(i)=\frac{en}{B-i}+g_{B}(i).\]
For \(t>0\), let \(X_{t}\) be the parent bit string of the (1+1) EA at iteration \(t\), and let \(T\) be the iteration number at which the (1+1) EA finds the optimum for the first time. Let
\[f(X_{t})=\begin{cases}g_{B}(LO(X_{t}))&\text{if }|X_{t}|_{1}=B,\\ g_{<B}(LO(X_{t}))&\text{if }|X_{t}|_{1}<B.\end{cases} \tag{1}\]
We consider two different cases, \(|X_{t}|_{1}=B\) and \(|X_{t}|_{1}<B\), and show that in both cases the drift is at least \(1\). Suppose we are in an iteration \(t<T\) with \(LO(X_{t})=i\) and \(|X_{t}|_{1}=B\). Then the probability that the number of \(1\)s in the search point decreases by \(1\) in the next iteration is at least \(\frac{B-i}{en}\). This is because we can obtain such a search point by flipping exactly one of the \(B-i\) many \(1\)-bits outside the leading \(1\)s and not flipping any other bit. Therefore,
\[E[f(X_{t+1})-f(X_{t})\bigm{|}LO(X_{t})=i,\ |X_{t}|_{1}=B]\] \[\geq\left(g_{<B}(i)-g_{B}(i)\right)\cdot Pr(|X_{t+1}|_{1}<B)\] \[\geq\left(\frac{en}{B-i}+g_{B}(i)-g_{B}(i)\right)\cdot\left(\frac {B-i}{en}\right)=1.\]
Suppose we are in an iteration \(t<T\) with \(LO(X_{t})=i\) and \(|X_{t}|_{1}<B\). Then in the next iteration the value of LeadingOnes can increase when the leftmost \(0\) is flipped to \(1\) as this does not violate the constraint. This happens with probability at least \(\frac{1}{en}\). Since \(|X_{t}|_{1}<B\), we can also stay in the same level (same number of leading \(1\)s) and the number of \(1\)s can increase to \(B\) with probability at most \(\frac{n-B}{n}\) (see Lemma 1). This implies that the potential can decrease by \(\frac{en}{B-i}\) with probability at most \(\frac{n-B}{n}\).
\[E[f(X_{t+1})- f(X_{t})\bigm{|}LO(X_{t})=i,|X_{t}|_{1}<B]\] \[\geq\left(g_{B}(i+1)-g_{<B}(i)\right)\cdot\frac{1}{en}-\left(\frac{en}{B-i}\cdot\frac{n-B}{n}\right)\] \[\geq\left(en\left(1+\frac{e\cdot(n-B)}{B-i}\right)+g_{<B}(i)-g_{<B}(i)\right)\cdot\left(\frac{1}{en}\right)\] \[\quad-\left(\frac{e\cdot(n-B)}{B-i}\right)\] \[=1.\]
This results in an expected additive drift value of at least \(1\) in all cases, so according to the additive drift theorem (10, Theorem 5),
\[E[T] \leq f(X_{T})=g_{B}(B)\] \[=\sum_{i=0}^{B-1}(g_{<B}(i)-g_{B}(i))+\sum_{i=1}^{B}(g_{B}(i)-g_{<B }(i-1))\] \[=\sum_{i=0}^{B-1}\frac{en}{B-i}+\sum_{i=1}^{B}en\left(1+\frac{e \cdot(n-B)}{B-i+1}\right)\] \[=en\sum_{i=1}^{B}\left(\frac{1}{i}\right)+enB+e^{2}\cdot n(n-B) \sum_{i=1}^{B}\left(\frac{1}{i}\right)\] \[\leq en(\log B+1)+enB+e^{2}\cdot n(n-B)(\log B+1)\] \[=O(n^{2}+n(n-B)\log B).\]
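The equality between the telescoping sum and the summed expression used in the last display can be sanity-checked numerically; the sketch below evaluates \(g_{B}(B)\) both via the recursive definitions and via the closed form (the concrete values of \(n\) and \(B\) are arbitrary), and the two printed values agree up to floating-point error.

```python
import math

def potential_gB(n, B):
    """g_B(B) via the recursive definitions of g_B and g_{<B}."""
    e = math.e
    g_less = e * n / B          # g_{<B}(0)
    gB = 0.0                    # g_B(0)
    for i in range(1, B + 1):
        gB = e * n * (1 + e * (n - B) / (B - i + 1)) + g_less
        if i <= B - 1:
            g_less = e * n / (B - i) + gB
    return gB

def potential_closed_form(n, B):
    e, H = math.e, sum(1.0 / i for i in range(1, B + 1))
    return e * n * H + e * n * B + e**2 * n * (n - B) * H

n, B = 200, 150
print(potential_gB(n, B), potential_closed_form(n, B))
```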
We now turn to the lower bound. When the (1+1) EA optimizes LeadingOnes in the unconstrained setting, the probability that a bit after the left-most \(0\) is \(1\) is exactly \(\frac{1}{2}\). But this is not true in the constrained setting. The following lemma gives an upper bound on this probability during the cardinality-constrained optimization.
**Lemma 3**: _For any \(t\geq 0\), let \(x^{t}\) denote the search point at iteration \(t\) when (1+1) EA is optimizing LeadingOnes with the cardinality constraint \(B\). Then for any \(t\geq 0\) and \(i>LO(x^{t})\), \(Pr(x^{t}_{i}=1)\leq 1/2\)._
We will prove this by induction. The base case is true because we have a uniform random bit string at \(t=0\). Let us assume that the statement is true for \(t\), i.e. for any \(i>LO(x^{t})\), \(Pr(x^{t}_{i}=1)\leq 1/2\). Let \(A\) be the event that the offspring is accepted. Then, for \(i>LO(x^{t+1})\),
\[Pr(x^{t+1}_{i}=1) =Pr((x^{t}_{i}=0)\cap(i^{th}\text{ bit is flipped})\cap A)\] \[+Pr((x^{t}_{i}=1)\cap(i^{th}\text{ bit is flipped})\cap A^{c})\] \[+Pr((x^{t}_{i}=1)\cap(i^{th}\text{ bit is not flipped})).\]
Let \(Pr(x^{t}_{i}=1)=p\), \(Pr(A\mid(i^{th}\text{ bit is flipped}\cap x^{t}_{i}=0))=a\) and \(Pr(A\mid(i^{th}\text{ bit is flipped}\cap x^{t}_{i}=1))=b\). Then note that \(a\leq b\) (because we have at least as many events as in probability \(a\) contributing to the probability \(b\)) and by induction hypothesis,
\[Pr(x^{t+1}_{i}=1) =(1-p)\cdot 1/n\cdot a+p\cdot 1/n\cdot(1-b)+p\cdot(1-1/n)\] \[=a/n-(p\cdot a)/n+p/n-(p\cdot b)/n+p-p/n\] \[=a/n+p\cdot(1-a/n-b/n)\] \[\leq a/n+1/2\cdot(1-a/n-a/n)=1/2.\]
We use the previous lemma to prove the \(\Omega(n^{2})\) lower bound on the expected time in the next theorem.
**Theorem 4**: _Let \(n,B\in\mathbb{N}\). Then the expected optimization time of the (1+1) EA on the LeadingOnes with cardinality constraint \(B\) is \(\Omega\left(n^{2}\right).\)_
We use the _fitness level method with visit probabilities_ technique defined in (3, Theorem 8) to prove this lower bound. Similar to (3, Theorem 11), we also partition the search space \(\{0,1\}^{n}\) based on the LeadingOnes values. For every \(i\leq B\), let \(A_{i}\) contain all the bit strings with the LeadingOnes value \(i\). If our search point is in \(A_{i}\), then we say that the search point is in the state \(i\). For every \(i\in\{1,\cdots,B-1\}\), we have to find the visit probabilities \(v_{i}\) and an upper bound for \(p_{i}\), the probability to leave the state \(i\).
The best case scenario for the search point to leave the state \(i\) is when the number of \(1\)s in the search point is less than \(B\). In this case, we have to flip the \((i+1)^{th}\) bit to \(1\) and should not flip any of the first \(i\) bits to \(0\). This happens with the probability \(\frac{1}{n}\cdot\left(1-\frac{1}{n}\right)^{i}\).
Therefore, for every \(i\in\{1,\cdots,B-1\}\), \(p_{i}\leq\frac{1}{n}\cdot\left(1-\frac{1}{n}\right)^{i}\).
Next, we claim that, for each \(i\in\{1,\cdots,B-1\}\), \(v_{i}\), the probability to visit state \(i\), is at least \(\frac{1}{2}\). We use (3, Lemma 10) to show this. Suppose the initial search point is in a state greater than or equal to \(i\); then the probability for it to be in state \(i\) is equal to the probability that the \((i+1)^{th}\) bit is \(0\). Since the initial bit string is chosen uniformly at random, the probability that the \((i+1)^{th}\) bit is \(0\) is \(\frac{1}{2}\). This shows the first required bound on the probability for the lemma in (3, Lemma 10). Suppose the search point is transitioning into a level greater than or equal to \(i\); then the probability that it transitions into state \(i\) is equal to the probability that the \((i+1)^{th}\) bit is \(0\). From Lemma 3, we know that this probability is at least \(1/2\). This gives the second bound required for (3, Lemma 10), therefore \(v_{i}\) is at least \(\frac{1}{2}\).
By using the fitness level method with visit probabilities (3, Theorem 8), if \(T\) is the time taken by the (1+1) EA to find an individual with \(B\) leading \(1\)s for the first time, then we have \(E[T]\geq\sum_{i=0}^{B-1}\frac{v_{i}}{p_{i}}\geq\frac{n}{2}\cdot\sum_{i=0}^{B-1}\left(1-\frac{1}{n}\right)^{-i}\geq\frac{n^{2}}{2}=\Omega(n^{2}).\)
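For concrete values of \(n\) and \(B\), the fitness-level expression can be evaluated directly; the sketch below plugs the bounds \(v_{i}\geq 1/2\) and \(p_{i}\leq\frac{1}{n}(1-\frac{1}{n})^{i}\) into the sum (the chosen \(n\) and \(B\) are illustrative).

```python
def fitness_level_lower_bound(n, B):
    """Sum of v_i / p_i with v_i = 1/2 and p_i = (1/n) * (1 - 1/n)**i."""
    return sum(0.5 * n * (1 - 1 / n) ** (-i) for i in range(B))

n, B = 500, 450
print(fitness_level_lower_bound(n, B), n * n / 2)  # bound and n^2/2 for comparison
```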
We aim to show the \(\Omega(n^{2}+n(n-B)\log B)\) lower bound and Theorem 4 gives the \(\Omega(n^{2})\) lower bound. Therefore, next we consider the case where \(B\) is such that \(n(n-B)\log B\neq O(n^{2})\) to prove the desired lower bound.
**Theorem 5**: _Let \(n,B\in\mathbb{N}\) and suppose \(n(n-B)\log B=\omega(n^{2})\). Then the expected optimization time of the (1+1) EA on the objective LeadingOnes with cardinality constraint \(B\) is \(\Omega\left(n(n-B)\log B\right).\)_
We consider the potential function \(g\) such that, for all \(x\in\{0,1\}^{n}\),
\[g(x)=\frac{n\cdot|x|_{1}}{B-LO(x)+1}+\sum_{i=LO(x)}^{B-1}\frac{n(n-B)}{32e^{2}( B-i)}.\]
The first term appreciates progress by reducing the number of \(1\)s. This is scaled to later derive constant drift in expectation from such a reduction whenever \(|x|_{1}=B\), the case where progress by increasing the number of leading \(1\)s is not easy. The second term appreciates progress by increasing the number of leading \(1\)s, scaled to derive constant drift in case of \(|x|_{1}<B\).
The idea of the proof is as follows. We show that the potential decreases by at most \(10\) in expectation. Then the lower bound of additive drift theorem will give the desired lower bound on the expected run time (see (10, Theorem 5)).
We start by calculating the expected potential at \(t=0\). Since the initial bit string is chosen uniformly at random the probability that the first bit is \(0\) is \(\frac{1}{2}\). Therefore \(Pr(LO(x_{0})=0)=\frac{1}{2}\), which implies
\[E[g(x_{0})] \geq\frac{1}{2}\cdot E\left[\sum_{i=LO(x_{0})}^{B-1}\frac{n(n-B)}{32 e^{2}(B-i)}\bigm{|}LO(x_{0})=0\right]\] \[=\frac{n(n-B)}{64e^{2}}\sum_{i=0}^{B-1}\frac{1}{B-i}\] \[=\frac{n(n-B)}{64e^{2}}\cdot\sum_{i=1}^{B}\frac{1}{i}\geq\frac{n (n-B)\ln(B)}{64e^{2}}.\]
Therefore, there exists a constant \(c>0\) such that \(E[g(x_{0})]\geq cn(n-B)\log B\). The optimum has a potential value of \(nB\); thus, we can find a lower bound on the optimization time by considering the time to find a potential value of at most \(nB\). Let \(T=\min\{t\geq 0\mid g(x_{t})\leq nB\}\). Note that \(T\) may not be the time at which we find the optimum for the first time. From \(n(n-B)\log B=\omega(n^{2})\) we get, for \(n\) large enough, that \(E[g(x_{0})]>nB\), which implies that the expected optimization time is at least \(E[T]\).
In order to show the required upper bound on the drift, we consider two different cases, \(|x_{t}|_{1}=B\) and \(|x_{t}|_{1}<B\), and show that in both cases the drift is at most \(10\). First, we examine the case where the current search point has exactly \(B\) many \(1\)s. For any \(t\), let \(A_{t}\) be the event that \(|x_{t}|_{1}=B\), let \(\Delta_{t}=g(x_{t})-g(x_{t+1})\) and
\[\Delta_{t}^{s} =\sum_{i=LO(x_{t})}^{B-1}\frac{n(n-B)}{32e^{2}(B-i)}-\sum_{i=LO(x _{t+1})}^{B-1}\frac{n(n-B)}{32e^{2}(B-i)}\] \[=\sum_{i=LO(x_{t})}^{LO(x_{t+1})-1}\frac{n(n-B)}{32e^{2}(B-i)}.\]
Then, \(E[\Delta_{t}\bigm{|}A_{t}]=n\cdot E\left[\frac{|x_{t}|_{1}}{B-LO(x_{t})+1}- \frac{|x_{t+1}|_{1}}{B-LO(x_{t+1})+1}\bigm{|}A_{t}\right]\)
\[\qquad\qquad+E[\Delta_{t}^{s}\bigm{|}A_{t}]\] \[\leq n\cdot E\left[\frac{|x_{t}|_{1}-|x_{t+1}|_{1}}{B-LO(x_{t+1}) +1}\bigm{|}A_{t}\right]+E[\Delta_{t}^{s}\bigm{|}A_{t}].\]
Now we calculate the bounds for all the required expectations in the above equation.
First we calculate a bound for \(n\cdot E\left[\frac{|x_{t}|_{1}-|x_{t+1}|_{1}}{B-LO(x_{t+1})+1}\bigm{|}A_{t}\right]\) by using the definition of the expectation. Let \(I=\{0,\cdots,B-LO(x_{t})\}\) and \(J=\{-1,0,\cdots,B-LO(x_{t+1})-1\}\). Then the possible values the random variable \(|x_{t}|_{1}-|x_{t+1}|_{1}\) can have are the values in \(I\). And the possible values \(B-LO(x_{t+1})+1\) can have are \(\{B-LO(x_{t})-j\mid j\in J\}\). For \(i\in\{1,\cdots,B-LO(x_{t})\}\), the probability \(Pr((|x_{t}|_{1}-|x_{t+1}|_{1}=i)\cap(B-LO(x_{t+1})+1=B-LO(x_{t})+1))\leq\binom{B-LO(x_{t})}{i}\left(\frac{1}{n}\right)^{i}\leq\frac{B-LO(x_{t})}{i!\,n}\), and for \(i\in\{1,\cdots,B-LO(x_{t})\}\) and \(j\in\{0,\cdots,B-LO(x_{t+1})-1\}\), the probability \(Pr((|x_{t}|_{1}-|x_{t+1}|_{1}=i)\cap(B-LO(x_{t+1})+1=B-LO(x_{t})-j))\leq\binom{B-LO(x_{t})}{i}\left(\frac{1}{n}\right)^{i}\cdot\frac{1}{n}\cdot\left(\frac{1}{2}\right)^{j}\leq\frac{B-LO(x_{t})}{i!\,n^{2}}\cdot\frac{1}{2^{j}}\) (see Lemma 3). For \(i\in I\) and \(j\in J\), let \(p_{t}^{ij}=Pr((|x_{t}|_{1}-|x_{t+1}|_{1}=i)\cap(B-LO(x_{t+1})+1=B-LO(x_{t})-j))\), let \(K=J\setminus\{-1\}\) and \(K^{c}=\{-1\}\). Then, \(n\cdot E\left[\frac{|x_{t}|_{1}-|x_{t+1}|_{1}}{B-LO(x_{t+1})+1}\bigm{|}A_{t}\right]\)
\[=n\cdot\sum_{i\in I}\sum_{j\in J}\frac{i\cdot p_{t}^{ij}}{B-LO(x_{t})-j}\] \[=n\cdot\sum_{i\in I}\sum_{j\in K^{c}}\frac{i\cdot p_{t}^{ij}}{B-LO(x_{t})-j}+n\cdot\sum_{i\in I}\sum_{j\in K}\frac{i\cdot p_{t}^{ij}}{B-LO(x_{t})-j}\] \[\leq\sum_{i\in I}\frac{i(B-LO(x_{t}))}{i!(B-LO(x_{t})+1)}+\sum_{i\in I}\frac{i(B-LO(x_{t}))}{i!n}\sum_{j\in K}\frac{\frac{1}{2^{j}}}{B-LO(x_{t})-j}\] \[\leq\sum_{i\in I\setminus\{0\}}\frac{1}{(i-1)!}+\sum_{i\in I\setminus\{0\}}\frac{1}{(i-1)!}\sum_{j\in K}\frac{1}{2^{j}}\] \[\leq e+2e=3e\leq 9. \tag{2}\]
We used the infinite sum values \(\sum_{i=1}^{\infty}\frac{1}{(i-1)!}=e\), \(\sum_{i=0}^{\infty}\frac{1}{2^{i}}=2\), to bound our required finite sums in the above calculation.
Now, we calculate \(E[\Delta_{t}^{s}\bigm{|}A_{t}]\), to get an upper bound for \(E[\Delta_{t}\bigm{|}A_{t}]\). When \(|x_{t}|_{1}=B\), the probability to gain in the LeadingOnes value is at most \(\frac{B-LO(x_{t})}{n}\cdot\frac{1}{n}\). Therefore we have
\[E[\Delta_{t}^{s}\bigm{|}A_{t}] =\frac{n(n-B)}{32e^{2}}\cdot E\left[\sum_{i=LO(x_{t})}^{LO(x_{t+1})-1}\frac{1}{B-i}\bigm{|}A_{t}\right]\] \[\leq\frac{n(n-B)}{32e^{2}}\cdot E\left[\frac{LO(x_{t+1})-LO(x_{t})}{B-LO(x_{t+1})+1}\bigm{|}A_{t}\right]. \tag{3}\]
We calculate an upper bound for \(E\left[\frac{LO(x_{t+1})-LO(x_{t})}{B-LO(x_{t+1})+1}\bigm{|}A_{t}\right]\). The probability that \(LO(x_{t+1})-LO(x_{t})=i\), given that we gain at least one leading one, is the probability that the next \(i-1\) bits after the left-most \(0\) bit are \(1\), followed by a \(0\) bit. This implies that the probability that \(LO(x_{t+1})-LO(x_{t})=i\), given that we gain at least one leading one, is at most \(\frac{1}{2^{i-1}}\). Therefore, we have \(E\left[\frac{LO(x_{t+1})-LO(x_{t})}{B-LO(x_{t+1})+1}\bigm{|}A_{t}\right]\)
\[\leq\frac{B-LO(x_{t})}{n}\cdot\frac{1}{n}\cdot\sum_{i=1}^{B-LO(x_{t})-1}\frac{i \cdot 2^{1-i}}{B-LO(x_{t})-i}. \tag{4}\]
Equations 3 and 4 imply that, \(E[\Delta_{t}^{s}\bigm{|}A_{t}]\)
\[\leq\frac{n(n-B)}{32e^{2}}\cdot\frac{B-LO(x_{t})}{n}\cdot\frac{1}{n}\cdot\sum_{i=1}^{B-LO(x_{t})-1}\frac{i\cdot 2^{1-i}}{B-LO(x_{t})-i}\] \[\leq\frac{1}{16e^{2}}\cdot\sum_{i=1}^{B-LO(x_{t})-1}\frac{(B-LO(x_{t}))\cdot i}{(B-LO(x_{t})-i)\cdot 2^{i}}\] \[=\frac{1}{16e^{2}}\cdot\sum_{i=1}^{B-LO(x_{t})-1}\frac{(B-LO(x_{t})-i+i)\cdot i}{(B-LO(x_{t})-i)\cdot 2^{i}}\] \[\leq\frac{1}{16e^{2}}\left(\sum_{i=1}^{B-LO(x_{t})-1}\frac{i}{2^{i}}+\sum_{i=1}^{B-LO(x_{t})-1}\frac{i^{2}}{2^{i}}\right)\] \[\leq\frac{1}{16e^{2}}\,(2+6)=\frac{1}{2e^{2}}\leq 1. \tag{5}\]
From Equations 2 and 5, we have \(E[\Delta_{t}\ \big{|}\ A_{t}]\leq 10\) which concludes the first case (when \(|x_{t}|_{1}=B\)). Next we calculate the bound for the drift conditioned on the event \(A_{t}^{c}\) (when \(|x_{t}|_{1}<B\)).
\[E[\Delta_{t}\ \big{|}\ A_{t}^{c}] =n\cdot E\left[\frac{|x_{t}|_{1}}{B-LO(x_{t})+1}-\frac{|x_{t+1}|_{ 1}}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}\right]\] \[\qquad+E[\Delta_{t}^{s}\ \big{|}\ A_{t}^{c}]\] \[\leq n\cdot E\left[\frac{|x_{t}|_{1}-|x_{t+1}|_{1}}{B-LO(x_{t+1} )+1}\ \big{|}\ A_{t}^{c}\right]+E[\Delta_{t}^{s}\ \big{|}\ A_{t}^{c}].\]
Similarly to the previous case, we start by finding a bound for \(n\cdot E\left[\frac{|x_{t}|_{1}-|x_{t+1}|_{1}}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}\right]\). Let \(\Delta_{t}^{1}=|x_{t}|_{1}-|x_{t+1}|_{1}\). Then \(n\cdot E\left[\frac{\Delta_{t}^{1}}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}\right]\)
\[=n\cdot E\left[\frac{\Delta_{t}^{1}}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}, \Delta_{t}^{1}>0\right]\cdot Pr(\Delta_{t}^{1}>0)\] \[\qquad+n\cdot E\left[\frac{\Delta_{t}^{1}}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}, \Delta_{t}^{1}<0\right]\cdot Pr(\Delta_{t}^{1}<0).\]
Now we find upper bounds for both quantities in the above equation. By calculations similar to those leading to Equation (2), we get \(n\cdot E\left[\frac{\Delta_{t}^{1}}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}, \Delta_{t}^{1}>0\right]\leq 9\). Since there are at least \(n-B\) zero-bits, the probability to gain a \(1\) bit is at least \(\frac{n-B}{n}\), and the probability that \(LO(x_{t})=LO(x_{t+1})\) is at least \(\frac{1}{2e}\) for \(n\) large enough. Therefore, \(n\cdot E\left[\frac{\Delta_{t}^{1}}{B-LO(x_{t+1})+1}\ \big{|}\ \Delta_{t}^{1}<0\right]\cdot Pr(\Delta_{t}^{1}<0)\leq-\frac{(n-B)}{2e^{2}(B-LO(x_{t})+1)}\). By combining these two bounds we have
\[n\cdot E\left[\frac{\Delta_{t}^{1}}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}\right]\leq 9- \frac{(n-B)}{2e^{2}(B-LO(x_{t})+1)}. \tag{6}\]
Next we calculate \(E[\Delta_{t}^{s}\ \big{|}\ A_{t}^{c}]\), to get an upper bound for \(E[\Delta_{t}\ \big{|}\ A_{t}^{c}]\). When \(|x_{t}|_{1}<B\), the probability to gain in LeadingOnes-value is at most \(\frac{1}{n}\). Therefore,
\[E[\Delta_{t}^{s}\ \big{|}\ A_{t}^{c}]\leq\frac{n(n-B)}{32e^{2}} \cdot E\left[\sum_{i=LO(x_{t})}^{LO(x_{t+1})-1}\frac{1}{B-i}\ \big{|}\ A_{t}^{c}\right]\] \[\qquad\leq\frac{n(n-B)}{32e^{2}}\cdot E\left[\frac{LO(x_{t+1})- LO(x_{t})}{B-LO(x_{t+1})+1}\ \big{|}\ A_{t}^{c}\right]\] \[\qquad\leq\frac{n(n-B)}{32e^{2}}\cdot\frac{1}{n}\cdot\sum_{i=1}^ {B-LO(x_{t})-1}\frac{i}{(B-LO(x_{t})-i)\cdot 2^{i-1}}\] \[\qquad=\frac{n-B}{32e^{2}(B-LO(x_{t}))}\cdot\sum_{i=1}^{B-LO(x_{t })-1}\frac{(B-LO(x_{t}))\cdot i}{(B-LO(x_{t})-i)\cdot 2^{i-1}}\] \[\qquad=\frac{n-B}{16e^{2}(B-LO(x_{t}))}\cdot\sum_{i=1}^{B-LO(x_{t })-1}\frac{(B-LO(x_{t})-i+i)\cdot i}{(B-LO(x_{t})-i)\cdot 2^{i}}\] \[\qquad\leq\frac{n-B}{16e^{2}(B-LO(x_{t}))}\left(\sum_{i=1}^{B-LO( x_{t})-1}\frac{i}{2^{i}}+\sum_{i=1}^{B-LO(x_{t})-1}\frac{i^{2}}{2^{i}}\right)\] \[\qquad\leq\frac{n-B}{16e^{2}(B-LO(x_{t}))}(2+6)=\frac{n-B}{2e^{2 }(B-LO(x_{t}))}. \tag{7}\]
Since \(B-LO(x_{t})\geq 1\), we have \(\frac{n-B}{2e^{2}(B-LO(x_{t}))}\leq\frac{n-B}{e^{2}(B-LO(x_{t})+1)}\). From Equations 6 and 7, we have
\[E[\Delta_{t}\ \big{|}\ A_{t}^{c}]\leq 9-\frac{(n-B)}{2e^{2}(B-LO(x_{t})+1)}+\frac{n-B}{2e^{2}(B-LO(x_{t}))}\leq 10.\]
This concludes the second case (when \(|x_{t}|_{1}<B\)). Now we have \(E[\Delta_{t}\ |\ g(x_{t})]\leq 10\). Therefore, by the lower-bounding additive drift theorem [10, Theorem 5],
\[E[T]\geq\frac{E[g(x_{0})]-nB}{10}=\Omega(n(n-B)\log B).\]
Corollary 6 ().: _Let \(n,B\in\mathbb{N}\). Then the expected optimization time of the (1+1) EA on the LeadingOnes with cardinality constraint \(B\) is \(\Theta\left(n^{2}+n(n-B)\log B\right).\)_
Proof.: From Theorem 4 and Theorem 5 we have the required lower bound and we have the upper bound from Theorem 2. Therefore the expected optimization time is \(\Theta\left(n^{2}+n(n-B)\log B\right).\)
## 4. Better Run Times
In this section we discuss two ways to obtain the (optimal) run time of \(O(n^{2})\). First, we state a corollary to the proof of Theorem 2, namely that we can almost reach the bound within \(O(n^{2})\) iterations.
Corollary 7 ().: _Let \(n,B\in\mathbb{N}\) and \(c>0\). Then the (1+1) EA on LeadingOnes with the cardinality constraint \(B\) finds a search point with \(B-c(n-B)\) leading \(1\)s within \(O(n^{2})\) iterations in expectation._
With the next theorem we show that incorporating the number of \(0\)s of a bit string as a secondary objective gives an expected run time of the (1+1) EA of \(\Theta(n^{2})\) to optimize cardinality constrained LeadingOnes.
Theorem 8 ().: _Let \(B\leq n-1\) and for any \(x\in\{0,1\}^{n}\), let_
\[f(x)=\begin{cases}(LO(x),|x|_{0})&|x|_{1}\leq B,\\ -|x|_{1}&\text{otherwise}.\end{cases}\]
_Then the (1+1) EA takes \(\Theta(n^{2})\) iterations in expectation to optimize \(f\) in lexicographic order with the cardinality constraint \(B\)._
Proof.: For any \(x\in\{0,1\}^{n}\), let \(g(x)=3eLO(x)+|x|_{0}\), where \(|x|_{0}\) represents the number of \(0\)s in \(x\). Intuitively, we value both progress in decreasing the number of (unused) \(1\)s, as well as an increase in leading \(1\)s, but we value an increase in leading \(1\)s higher (since this is the ultimate goal, and typically comes at the cost of increasing the number of \(1\)s by a constant). Now we will show that \(g(y)=3eB+n-B\) if and only if \(y\) is the optimum of \(f\). Suppose for some \(y\in\{0,1\}^{n}\), \(g(y)=3eB+n-B\). Then \(3eLO(y)+|y|_{0}=3eB+n-B\), which implies that \(3eLO(y)=3eB+n-B-|y|_{0}\). Since \(LO(y)\leq B\) and \(|y|_{0}\leq n-LO(y)\), we get \(3eLO(y)\geq 3eB-B+LO(y)\), i.e., \((3e-1)LO(y)\geq(3e-1)B\), and hence \(LO(y)=B\). Therefore, \(y\) is optimal. Conversely, the optimum of \(f\) has \(LO(y)=B\) and thus exactly \(B\) \(1\)s, so that \(g(y)=3eB+n-B\).
Let \(T=\min\{t\geq 0\ \big{|}\ g(x_{t})\geq 3eB+n-B\}\). We will examine the drift at two different scenarios, \(|x_{t}|_{1}<B\) and \(|x_{t}|_{1}=B\) and show that in both the cases the drift is at least \(1/n\). Let \(\Delta_{t}=g(x_{t+1})-g(x_{t})\) and \(A_{t}\) be the event that the left-most \(0\) in \(x_{t}\) is flipped. Then
\(E[\Delta_{t}\bigm{|}A_{t}^{c}]\geq 0\), because if the number of LeadingOnes does not increase, then \(|x_{t+1}|_{0}-|x_{t}|_{0}\geq 0\), which in turn implies \(\Delta_{t}\geq 0\). Therefore, for any \(0\leq t<T\),
\[E[\Delta_{t}\bigm{|}|x_{t}|_{1}<B]=E[\Delta_{t}\bigm{|}A_{t},|x_{t }|_{1}<B]\cdot Pr[A_{t}]\] \[\qquad\qquad+E[\Delta_{t}\bigm{|}A_{t}^{c},|x_{t}|_{1}<B]\cdot Pr [A_{t}^{c}]\] \[\geq\frac{1}{n}\cdot E[g(x_{t+1})-g(x_{t})\bigm{|}A_{t},|x_{t}|_{1 }<B]+0\] \[=\frac{1}{n}\cdot 3eE[LO(x_{t+1})-LO(x_{t})\bigm{|}A_{t},|x_{t}|_{1 }<B]\] \[\qquad+\frac{1}{n}\cdot E[|x_{t+1}|_{0}-|x_{t}|_{0}\bigm{|}A_{t},| x_{t}|_{1}<B].\]
Note that \(E[LO(x_{t+1})-LO(x_{t})\bigm{|}A_{t},|x_{t}|_{1}<B]\) is greater than or equal to the probability of not flipping any other bits, since it increases the number of LeadingOnes by at least one. And \(E[|x_{t}|_{0}-|x_{t+1}|_{0}\bigm{|}A_{t},|x_{t}|_{1}<B]\) is upper bounded by the sum \(1+\sum\limits_{i=1}^{|x_{t}|_{0}-1}Pr(\text{flipping the $i^{th}$ 0 bit})\). This is because we lose one \(0\) bit by flipping the left-most \(0\) bit and we flip each other \(0\)-bit independently with probability \(\frac{1}{n}\). And \(\frac{|x_{t}|_{0}-1}{n}\leq 1\), therefore,
\[E[\Delta_{t}\bigm{|}|x_{t}|_{1}<B]\geq\frac{1}{n}\left(3e\left(1-\frac{1}{n} \right)^{n-1}-\left(1+\frac{|x_{t}|_{0}-1}{n}\right)\right)\geq\frac{1}{n}.\]
This concludes the first case. Now, let us consider the case \(|x_{t}|_{1}=B\). Let \(D\) be the event that the mutation operator flips exactly one \(1\) bit which lies after the left-most \(0\) bit and flips no other bits. Since \(|x_{t}|_{1}=B\) and \(LO(x_{t})<B\), there is at least one such \(1\) bit, which implies \(E[|x_{t+1}|_{0}-|x_{t}|_{0}\bigm{|}|x_{t}|_{1}=B,D]\geq 1\). Also note that \(Pr(D)\geq\frac{1}{en}\). If a search point is accepted, then the number of \(1\) bits is at most \(B\) and the LeadingOnes value cannot decrease; thus, \(LO(x_{t+1})\geq LO(x_{t})\) and \(|x_{t+1}|_{0}\geq n-B\). Overall we have \(g(x_{t+1})=3eLO(x_{t+1})+|x_{t+1}|_{0}\geq 3eLO(x_{t})+n-B=g(x_{t})\). Therefore, \(E[\Delta_{t}\bigm{|}|x_{t}|_{1}=B,D^{c}]\geq 0\) and
\[E[\Delta_{t}\bigm{|}|x_{t}|_{1}=B] =E[|x_{t+1}|_{0}-|x_{t}|_{0}\bigm{|}|x_{t}|_{1}=B,D]\cdot Pr(D)\] \[\qquad+E[\Delta_{t}\bigm{|}|x_{t}|_{1}=B,D^{c}]\cdot Pr(D^{c})\] \[\geq\frac{1}{en}.\]
The expected number of \(0\)s in the initially selected uniform random bit string is \(\frac{n}{2}\) and the expected number of LeadingOnes is at least zero, therefore \(E[g(x_{0})]\geq\frac{n}{2}\). We have a drift of at least \(\frac{1}{en}\) in both cases, therefore we get the required upper bound by the additive drift theorem (Friedman and Goyal, 2016, Theorem 5),
\[E[T]\leq en\cdot(3eB+n-B-E[g(x_{0})])\leq 3e^{2}nB+\frac{en^{2}}{2}-enB=O(n^{2}).\]
This proves the upper bound. And the lower bound follows from Theorem 4.
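For concreteness, a minimal Python sketch of the construction in Theorem 8 is given below: it implements \(LO(x)\), the lexicographic fitness \(f\) with the secondary \(|x|_{0}\) objective, and the (1+1) EA with standard bit mutation. Function names and loop details are illustrative assumptions rather than the exact implementation behind our analysis.

```python
import random

rng = random.Random(0)

def leading_ones(x):
    """Number of consecutive 1-bits counted from the left."""
    lo = 0
    for bit in x:
        if bit != 1:
            break
        lo += 1
    return lo

def fitness(x, B):
    """Lexicographic fitness of Theorem 8: (LO(x), |x|_0) if |x|_1 <= B, else -|x|_1.
    Encoded as a tuple so that Python's tuple order realizes the lexicographic order."""
    ones = sum(x)
    if ones <= B:
        return (1, leading_ones(x), len(x) - ones)  # feasible points rank above infeasible ones
    return (0, -ones, 0)

def one_plus_one_ea(n, B, max_iters=200_000):
    """(1+1) EA with standard bit mutation (flip each bit independently with prob 1/n)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x, B)
    for t in range(max_iters):
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        fy = fitness(y, B)
        if fy >= fx:                 # accept on ties, as usual
            x, fx = y, fy
        if leading_ones(x) == B:     # optimum: B leading 1s
            return t + 1
    return max_iters

print(one_plus_one_ea(n=100, B=75))
```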
## 5. Empirical Analysis
We want to extend our theoretical work on the deterministic constraint to the case of stochastic constraint models (as defined in Section 2.2). For the first model we use parameters \(\mu=1\) and \(\sigma=0.1\) and for the second model we use \(\epsilon=\sqrt{3}\). Note that in the second model \(U(B-\sqrt{3},B+\sqrt{3})\) has variance \(1\). For both models we considered two different \(B\) values, \(75\) and \(95\) (also \(B=85\) in the Appendix). As we will see, the (1+1) EA struggles in these settings; in order to show that already a small parent population can remedy this, we also consider the \((10+1)\) EA in our experiments.
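A minimal sketch of how the two stochastic constraint models can be evaluated in simulation is given below; the function names and the sampling details are our illustrative reading of the models in Section 2.2 rather than the exact experimental code.

```python
import numpy as np

rng = np.random.default_rng(1)

def violates_gaussian(x, B, mu=1.0, sigma=0.1):
    """First model: every 1-bit carries an independent N(mu, sigma^2) weight;
    the constraint is violated if the total stochastic weight exceeds B."""
    ones = int(np.sum(x))
    return rng.normal(mu, sigma, size=ones).sum() > B

def violates_uniform(x, B, eps=np.sqrt(3)):
    """Second model: the bound itself is uniform on [B - eps, B + eps]
    (variance 1 for eps = sqrt(3)); violation if |x|_1 exceeds the drawn bound."""
    return np.sum(x) > rng.uniform(B - eps, B + eps)

# sanity check of the behavior discussed below: a string with exactly B ones
# violates the Gaussian constraint roughly half of the time
x = np.zeros(100, dtype=int)
x[:75] = 1
estimate = np.mean([violates_gaussian(x, B=75) for _ in range(10_000)])
print(f"empirical violation probability at |x|_1 = B: {estimate:.3f}")
```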
We use the following lemma for discussing certain probabilities in this section.
**Lemma 9**.: _Let \(k\in\{1,\cdots,B-1\}\), \(x\in\{0,1\}^{n},B\in[n]\), \(W_{\mathbf{x}}=\sum_{i=1}^{n}x_{i}\cdot Y_{i}\) where \(Y_{i}\sim N(1,\sigma^{2})\) and \(x_{i}\) be the \(i-\)th bit of \(x\) and \(|x|_{1}\leq B-k\). Then \(Pr(W_{\mathbf{x}}>B)\leq\frac{1}{\sqrt{\sigma}}\frac{e^{-k^{2}}}{2\pi^{2} \sigma^{2}}\) and \(Pr(W_{\mathbf{x}}>B\mid|x|_{1}=B)=\frac{1}{2}\)._
In Figure 1 we have a single sample run of (1+1) EA on the first model. We observe that if the (1+1) EA finds a bit string with \(B\) number of \(1\)s it violates the constraint with probability \(\frac{1}{2}\) (see Lemma 9) and accepts a bit string with a lower number of LeadingOnes. This process keeps repeating whenever the (1+1) EA encounters an individual with a number of \(1\)s closer to \(B\).
Figures 2 and 3 are about the first model, in which we show the LeadingOnes-values of the best individual (the bit string with the maximum fitness value) in each iteration of the (10+1) EA, the LeadingOnes-values of the second-worst individual (the bit string with the second-smallest fitness value) in each iteration of the (10+1) EA, and the LeadingOnes-values at each iteration of the (1+1) EA. Each curve is the median of thirty independent runs and the shaded area is the area between the \(25\)-th and the \(75\)-th quantile values. For all three \(B\)-values, after the initial iterations, all the individuals except the worst individual in the (10+1) EA population have \(B-2\) leading \(1\)s. This is because, for this model, the probability that an individual with \(B-2\) \(1\)s violates the constraint is at most \(\frac{\epsilon^{-2}}{\sqrt{\pi}}\) (from Lemma 9).

Figure 1. (1+1) EA sample run with \(n=100\), \(B=85\) and \(N(1,0.1)\) chance constraint for \(10000\) iterations.

Figure 2. (10+1) EA and (1+1) EA on LeadingOnes with \(n=100\), \(B=75\) and \(N(1,0.1)\) chance constraint for \(40000\) iterations.
Figures 4 and 5 are about the second model and the curves represent the same things as in the previous figures but with respect to the second model. In these figures we can see that the best and the second worst individuals of the (10+1) EA are not the same because of the changing constraint values.
## 6. Conclusions
Understanding how evolutionary algorithms deal with constrained problems is an important topic of research. We investigated the classical LeadingOnes problem with additional constraints. For the case of a deterministic uniform constraint we have carried out a rigorous run time analysis of the (1+1) EA which gives results on the expected optimization time in dependence of the chosen constraint bound. Afterwards, we examined stochastic constraints and the use of larger populations for dealing with uncertainties. Our results show a clear benefit of using the (10+1) EA instead of the (1+1) EA. We regard the run time analysis of population-based algorithms for our examined settings of stochastic constraints as an important topic for future work.
## 7. Acknowledgements
Frank Neumann has been supported by the Australian Research Council (ARC) through grant FT200100536. Tobias Friedrich and Timo Kotzing were supported by the German Research Foundation (DFG) through grant FR 2988/17-1.
|
2302.13395 | Reentrant semiconducting behavior in polymerized fullerite structures
with increasing sp3-carbon content | The electronic behavior of polymerized fullerite structures, ranging from
one-dimensional to three-dimensional polymers, was studied using density
functional theory. The bandgap across these structures decreases with the rise
of sp3-carbon content until metallic behavior is observed. A further increase
induces a reopening of the bandgap, revealing a reentrant semiconducting
behavior in this class of materials. This behavior is understood in terms of
the new electronic states originated by polymeric bonding and the effect of the
volume reduction on the dispersion of sp2-states. This study highlights the
fullerite polymers as a magnificent platform to tune electronic properties. | Jorge Laranjeira, Leonel Marques, Manuel Melle-Franco, Karol Strutynski | 2023-02-26T19:42:29Z | http://arxiv.org/abs/2302.13395v2 | # Reentrant semiconducting behavior in polymerized fullerites with increasing sp\({}^{3}\) content
###### Abstract
Density functional theory calculations with the hybrid Heyd-Scuseria-Ernzerhof (HSE) functional were used to study the electronic structure of polymerized fullerites, ranging from one-dimensional to three-dimensional polymerized structures. We found that the bandgap across these structures decreases with the rise of the number of sp\({}^{3}\) carbons until metallic behavior is observed. Further increase of the sp\({}^{3}\) carbon content induces a reopening of the bandgap, showing a reentrant semiconducting behavior in this class of materials.
DFT Calculations, Carbon Nanostructures, Bandgap Engineering
## I Introduction
The electronic bandgap is responsible for the optical and electronic properties of materials, making it an important parameter to tune in view of possible applications [1; 2]. Bandgap engineering of materials is, therefore, a powerful tool to obtain new physical and chemical properties with possible high technological impact [3; 4].
Carbon materials present distinct electronic properties, from semi-metallic behavior in graphite [5; 6] to the wide bandgap in diamond [7]. Several other carbon allotropes such as peapods, nanotubes and nanoribbons, present a wide range of electronic bandgaps, depending on their structures [8; 9; 10]. Solid C\({}_{60}\) (fullerite) is a semiconductor and its bandgap decreases under pressure up to 20 GPa, followed by a sudden increase on further compression [11; 12; 13; 14]. Saito and coworkers suggested that the initial bandgap decrease is the result of the increasing interaction between \(\pi\) electrons belonging to neighboring molecules induced by the reduction of the molecular distance [11]. Conversely, the sudden rise of the bandgap at 20 GPa is due to the molecular collapse with the formation of an amorphous carbon phase having a high content of sp\({}^{3}\) hybridized atoms [11; 12; 13]. Similar behavior has been recently observed in m-xylene solvated C\({}_{60}\)[15].
A family of carbon allotropes whose electronic structure has been less studied is that of the fullerene C\({}_{60}\) polymers. Most of these phases have been produced by high-pressure high-temperature (HPHT) treatments of the fullerite monomer. Low-dimensional polymers are formed at pressures below 8 GPa. In particular, a one-dimensional (1D) orthorhombic polymer and two two-dimensional (2D) polymers, tetragonal and rhombohedral, are synthesized, all of them containing 66/66 2+2 cycloaddition bonds, with van der Waals interactions remaining in the non-bonding directions [16; 17]. Above 8 GPa, three-dimensional (3D) polymerized phases are synthesized [17; 18; 19; 20; 21; 22; 23]. Although several 3D phases were reported, very few crystalline structures have been proposed so far. A face-centered cubic (fcc) 3D polymer was synthesized at 9.5 GPa and 550 \({}^{\circ}\)C, where the molecules, being in either one of two C\({}_{60}\) standard orientations, are bonded through 56/56 2+2 cycloaddition, the cubic structure resulting from the frustrated arrangement of these bonds in the fcc lattice [18; 19]. A cuboidal 3D polymerized phase was synthesized by subjecting the 2D tetragonal polymer to 15 GPa and 600 \({}^{\circ}\)C; its proposed orthorhombic structure involves 6/6 3+3 cycloaddition bonds between neighboring molecules belonging to adjacent (a,b) planes and double 66/66 4+4 cycloadditions along the shortest lattice parameter of the (a,b) planes [21]. Under the same HPHT conditions, an fcc 3D polymer was synthesized from the fullerite monomer, but it was proposed that it results from the disordering of rhombohedral domains, in which hexagonal planes of molecules are bonded via 5/5 3+3 cycloadditions and 56/65 2+2 cycloadditions are formed between these hexagonal planes [22].
In addition to these fullerite structures, a computationally hypothesized polymerized fullerite clathrate [24] is also to be mentioned since its lattice parameter matches that of an fcc 3D polymer obtained at 12.5 GPa by Brazhkin and coworkers [25]. This clathrate structure is constructed by bonding each molecule, adopting a standard orientation, to the twelve nearest neighbors in the fcc lattice through double 5/5 2+3 cycloaddition bonds.
Here, we report systematic electronic structure calculations of polymerized fullerite structures, ranging from 1D to 3D polymers, expanding our previous study [26] and performing more accurate electronic structure calculations via the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional [27]. We show that, initially, the bandgap progressively decreases with the increasing number of sp\({}^{3}\) carbons until metallic behavior is observed. Then, a further increase in the number of sp\({}^{3}\) atoms leads to a bandgap reopening in a reentrant behavior, fundamentally similar to the
behavior observed with oxygen-functionalized carbon nanotubes [28].
## II Methods
Polymerized fullerite structures were first optimized without any constraint with the Perdew-Burke-Ernzerhof (PBE) [29; 30] exchange-correlation functional augmented with the classical D3 dispersion term [31] with Becke-Johnson damping and a 6-31G(d,p) atomic Gaussian basis set, PBE-6-31G(d,p)-D3, as implemented in CRYSTAL17 [32]. For this, the Coulomb and exchange infinite lattice series are controlled by five numerical thresholds (Ti), which were set to 12 (T1-T4) and 24 (T5). The convergence threshold for the self-consistent field (scf) cycle was set to be smaller than \(10^{-8}\) Hartree and an energy difference of \(10^{-4}\) Hartree was enforced between consecutive geometry steps. A k-point grid with at least 6\(\times\)6\(\times\)6 points was used for the calculations in bulk systems. In addition, when we computed lower-dimensionality systems, namely 1D or 2D self-standing chains, a minimum of 6 points along each periodic direction was used.
As the PBE functional systematically underestimates experimental bandgaps [33; 34], the electronic density of states (DOS) and band structure calculations were computed at the HSE06-6-31G(d,p) level [27], which produces improved bandgap values at a feasible computational cost [34]. A denser \(24\times 24\times 24\) Monkhorst-Pack [35] grid and a lower scf convergence threshold, \(10^{-10}\) Hartree, were employed for these calculations. Band structure calculations were then performed along paths from AFLOW [36].
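For illustration, the bandgap extraction step, independent of the particular electronic-structure code, amounts to locating the valence-band maximum and conduction-band minimum over the k-point grid; the sketch below uses assumed array names and is not the CRYSTAL17 post-processing itself.

```python
import numpy as np

def band_gap(energies, n_electrons, spin_degeneracy=2):
    """Fundamental (indirect) gap from band energies on a k-point grid.

    energies: (n_kpoints, n_bands) array of eigenvalues in eV, sorted in
              ascending order at every k-point.
    n_electrons: electrons per cell; with double occupation the highest occupied
                 band index is n_electrons // 2 - 1.
    Returns the gap in eV; 0.0 signals band overlap, i.e. metallic behavior.
    """
    homo = n_electrons // spin_degeneracy - 1
    vbm = energies[:, homo].max()          # valence band maximum over all k-points
    cbm = energies[:, homo + 1].min()      # conduction band minimum over all k-points
    return max(0.0, cbm - vbm)

# toy example: two bands on a 4-point grid -> prints 1.0
toy = np.array([[-1.0, 0.5], [-0.8, 0.4], [-0.9, 0.2], [-1.1, 0.3]])
print(band_gap(toy, n_electrons=2))
```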
## III Results and Discussion
Table 1 gives a summary of the optimized structural properties and electronic bandgap values of the polymerized fullerites considered in this study. The optimized polymerized structures are shown in figure S1; their Wyckoff positions are also given in the Supporting Information (SI), section B. The van der Waals fullerite monomer at room conditions is also considered, mainly as a reference, and for simplicity is denoted as 0D [37]. The 3D-AuCu-type and 3D-CuPt-type are ordered structures that have been proposed to be present in the frustrated fcc 3D polymer synthesized at 9.5 GPa and that are described in detail elsewhere [38].
The electronic band structures and densities of states (DOS) for all the polymerized structures being studied are presented in section C of the SI. The bandgaps of the low-dimensional polymers show a strong dependence on the distance between polymeric chains (or layers). We have explicitly investigated the influence of these van der Waals distances on the calculated bandgap for the 1D orthorhombic polymer. Whenever the distance between the chains is reduced, the bandgap is also reduced and, similarly, when the distance between the chains increases the bandgap increases, as illustrated in figure S11. Moreover, the electronic properties also depend on the orientation adopted by the chains, but this was not investigated in this study [17]. To further evaluate the extent to which the van der Waals description influences the bandgaps of low-dimensional polymers, they were computed for the corresponding self-standing polymers, namely: one molecule, one chain, one quadratic polymerized plane and one hexagonal polymerized plane. The bandgap evolution with the number of sp\({}^{3}\) carbons in the self-standing polymers is shown at the top of figure 1. There is an increase in the bandgaps of the self-standing chains with respect to the bulk system, but this difference is reduced when the number of van der Waals interactions decreases. Thus, for the 1D polymerized structure, this has a high impact, with its bandgap, 1.85 eV, now being larger than that of the 2D tetragonal polymerized structure, 1.55 eV. This indicates that the smaller bandgap in the bulk 1D polymer is probably an artifact of the chosen theory-level description of the van der Waals interaction used here and, indeed, the bandgap decreases monotonically with the number of sp\({}^{3}\) carbons in the low-dimensional polymers. Furthermore, it also allows one to conclude that the observed reduction in the electronic bandgaps of the low-dimensional polymers with the rise in the number of sp\({}^{3}\) atoms is primarily a consequence of the reduction of the intermolecular distance induced by the formation of covalent bonds between the monomers. The reduction in the intermolecular distance leads to a stronger interaction of the molecular wave functions of neighboring molecules, leading to an increased dispersion of the bands and the concomitant reduction in the bandgap. In fact, a similar effect has been observed experimentally in compressed monomeric C\({}_{60}\) at room temperature [11].
The binary-alloy-type structures, 3D-CuPt-type and 3D-AuCu-type, which have been proposed as ordered configurations of the frustrated 3D fcc polymer synthesized at 9.5 GPa, both display metallic behavior at the PBE level [38]. Relevantly, the 3D-CuPt-type polymerized structure has the same number of sp\({}^{3}\) carbons as the 2D-rhombohedral polymerized structure but displays a much smaller bandgap. This is because the bonding in this structure is three-dimensional, which yields a higher band dispersion and a lower molecular volume than that of the 2D-rhombohedral structure and, in turn, induces a stronger interaction of the molecular wave functions, as discussed above.
The 3D-AuCu-type structure, where each molecule is bonded to eight nearest neighbors and has sixteen sp\({}^{3}\) carbons, displays metallic behavior. It is noteworthy to compare this electronic behavior with that of monomeric C\({}_{60}\) fullerite compressed at 20 GPa at room temperature, which has about the same volume but is a low-gap semiconductor with a gap of 0.35 eV [14]. This simple comparison indicates that the sp\({}^{3}\) bonding states, absent
in the compressed C\({}_{60}\), are fundamental for closing the bandgap in polymerized structures, thus driving their metallic behavior. This is further corroborated by the results shown in figure S12, where the bandgap of the monomeric fullerite closes at a lattice length of 11.9 Å. However, this gap closure, and the corresponding metallicity, cannot be observed experimentally on compression at room temperature because the cages collapse into an amorphous phase at 20 GPa, for a lattice parameter of about 13 Å [12]. In the frustrated 3D fcc polymer each molecule has on average 7.6 bonded near-neighbors, indicating that the molecules adopted more frequently the 3D-AuCu-type configuration (eight bonded nearest-neighbors) than the 3D-CuPt-type configuration (six bonded nearest-neighbors) [19]. Considering this, a metallic behavior would be expected for this 3D fcc phase prepared at 9.5 GPa and 550 \({}^{\circ}\)C and, indeed, it was found experimentally by Buga and coworkers [39; 40].
Another 3D polymerized fullerite showing metallic behavior is the so-called cuboidal structure [21]. However, the 3D-rhombohedral polymerized structure is semiconducting, despite having the same number of sp\({}^{3}\) carbons [22]. The optimized 3D-cuboidal structure has a smaller volume per molecule, which is an important parameter to induce metallic behavior through a strong interaction of wave functions from neighboring molecules. These two 3D polymerized structures have been proposed for the C\({}_{60}\) polymers synthesized from the 2D tetragonal polymer compressed at 15 GPa and 600 \({}^{\circ}\)C (the 3D-cuboidal) and from the fullerite monomer subjected to the same conditions (the 3D-rhombohedral) [21; 22]. Yet, the lattice parameters from calculations are significantly different from the experimental ones for both cases, as it was thoroughly discussed for the 3D-cuboidal case [41; 42]. Nevertheless, the metallic nature of the 3D-cuboidal
polymerized phase and the semiconducting behavior of the 3D-rhombohedral polymer were experimentally observed [21, 22].

Table 1: Optimized lattice parameters and volume per molecule for the polymerized fullerite structures, at the PBE-6-31G(d,p)-D3 level. The space group of each structure, the number of sp\({}^{3}\) carbons per molecule and the electronic bandgap for each structure calculated at the HSE-6-31G(d,p) theory level are also presented. The hexagonal 3R lattice parameters are used for the rhombohedral structures.

| structure | space group | DFT cell constants (Å) | V (Å\({}^{3}\)/C\({}_{60}\)) | #sp\({}^{3}\) atoms/C\({}_{60}\) | Bandgap (eV) |
| --- | --- | --- | --- | --- | --- |
| 0D | Fm\(\bar{3}\) | a=b=c=14.08; \(\alpha=\beta=\gamma=90\) | 698.26 | 0 | 1.5407 |
| 1D | Immm | a=9.11, b=10.29, c=13.99; \(\alpha=\beta=\gamma=90\) | 655.68 | 4 | 1.2580 |
| 2D-tetragonal | Immm | a=9.05, b=9.15, c=14.91; \(\alpha=\beta=\gamma=90\) | 617.96 | 8 | 1.3771 |
| 2D-rhombohedral | R\(\bar{3}\)m | a=b=9.19, c=24.65; \(\alpha=\beta=90\), \(\gamma=120\) | 601.45 | 12 | 1.1479 |
| 3D-CuPt-type | R\(\bar{3}\)c | a=b=9.45, c=2×22.34; \(\alpha=\beta=90\), \(\gamma=120\) | 576.22 | 12 | 0.0628 |
| 3D-AuCu-type | P4\({}_{2}\)/mmm | a=b=9.27, c=12.87; \(\alpha=\beta=\gamma=90\) | 553.10 | 16 | – |
| 3D-cuboidal | Immm | a=8.50, b=8.62, c=13.16; \(\alpha=\beta=\gamma=90\) | 481.92 | 24 | – |
| 3D-rhombohedral | R\(\bar{3}\) | a=b=8.92, c=22.35; \(\alpha=\beta=90\), \(\gamma=120\) | 513.48 | 24 | 1.3796 |
| 3D-clathrate | Pm\(\bar{3}\) | a=b=c=12.41; \(\alpha=\beta=\gamma=90\) | 478.31 | 48 | 1.2806 |
The last polymerized structure being addressed is the polymerized fullerite clathrate, 3D-clathrate, where most of the atoms are sp\({}^{3}\)-hybridized, forty-eight out of sixty. Thus, the \(\sigma\)-like states likely start to dominate the electronic structure, while the few remaining \(\pi\)-like states are highly localized, contributing to the semiconducting nature of this structure [24]. It is to be noted that other polymerized structures with higher sp\({}^{3}\) content, having fifty-two, fifty-six and sixty sp\({}^{3}\)-hybridized atoms per molecule, proposed by Burgos and coworkers [43], are also semiconductors, thus confirming the reentrant semiconducting behavior in polymerized fullerites. These polymerized structures were not investigated by us because they are derived from the body-centered cubic packing, while the experimental structures are based on the fcc packing.
The overall electronic behavior of the polymerized fullerite structures is given in figure 1, bottom panel, where their bandgaps are plotted against their number of sp\({}^{3}\) carbons on each molecule. Initially, the bandgap decreases with the rise of sp\({}^{3}\)-hybridized atoms until its closure, as has been discussed. However, further increase in the number of sp\({}^{3}\) atoms, and concomitantly on the number of polymeric bonds, drives the electronic structure to become dominated by \(\sigma\)-like states, while the remaining \(\pi\)-like states become highly localized, leading to a reopening of the bandgap and a reentrant semiconducting behavior. Although the evolution of the electronic bandgap is dominated by the number of sp\({}^{3}\) carbons on each molecule, the molecular volume alone is also seen to influence the electronic behavior. Polymerized structures with the same number of sp\({}^{3}\) carbons show a considerable reduction in the bandgap, or even its closure, for structures with smaller volumes.
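The bottom panel of figure 1 can essentially be reproduced from the values collected in Table 1; a minimal plotting sketch using those numbers (with the metallic structures plotted at zero gap) is:

```python
import matplotlib.pyplot as plt

# (#sp3 atoms per C60, HSE bandgap in eV) taken from Table 1; metallic structures -> 0.0
structures = {
    "0D": (0, 1.5407),
    "1D": (4, 1.2580),
    "2D-tetragonal": (8, 1.3771),
    "2D-rhombohedral": (12, 1.1479),
    "3D-CuPt-type": (12, 0.0628),
    "3D-AuCu-type": (16, 0.0),
    "3D-cuboidal": (24, 0.0),
    "3D-rhombohedral": (24, 1.3796),
    "3D-clathrate": (48, 1.2806),
}

sp3 = [v[0] for v in structures.values()]
gap = [v[1] for v in structures.values()]

fig, ax = plt.subplots()
ax.scatter(sp3, gap)
for name, (x, y) in structures.items():
    ax.annotate(name, (x, y), fontsize=7)
ax.set_xlabel(r"# sp$^3$ carbons per C$_{60}$")
ax.set_ylabel("HSE bandgap (eV)")
fig.savefig("bandgap_vs_sp3.png", dpi=300)
```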
## IV Conclusion
Polymerized fullerite structures present a wide range of electronic behavior from semiconducting to metallic. This behavior is clearly dependent on the number of sp\({}^{3}\) hybridized carbons present in each structure.
The initial reduction of the bandgap with the increment in the number of sp\({}^{3}\) carbons should be a consequence of the C\({}_{60}\) intermolecular distance reduction, and concomitantly of the increase in the interaction of the wave functions of neighboring molecules. The bandgap closure is seen to be also determined by the formation of a 3D network of polymeric bonding. Increasing further the number of sp\({}^{3}\) carbons induces a bandgap reopening in a reentrant fashion. This reentrant semiconducting behavior is due to the fact that \(\sigma\)-like states start to dominate the electronic structure, while the remaining \(\pi\)-like states become highly localized.
###### Acknowledgements.
This work was developed within the scope of the project CICECO-Aveiro Institute of Materials, UIDB/50011/2020, UIDP/50011/2020 & LA/P/0006/2020, financed by national funds through the FCT/MCTES (PIDDAC) and IF/00894/2015 financed by FCT. J. Laranjeira acknowledges a PhD grant from FCT (SFRH/BD/139327/2018).
Figure 1: Top panel: electronic bandgap of the polymerized fullerite structures against the number of sp\({}^{3}\) carbons on each molecule for the self-standing low-dimensional polymers. Bottom panel: same plot for the bulk polymeric systems. The blue line serves only as a guide to the eye. |
2305.17154 | On convex decision regions in deep network representations | Current work on human-machine alignment aims at understanding machine-learned
latent spaces and their correspondence to human representations.
G{\"a}rdenfors' conceptual spaces is a prominent framework for understanding
human representations. Convexity of object regions in conceptual spaces is
argued to promote generalizability, few-shot learning, and interpersonal
alignment. Based on these insights, we investigate the notion of convexity of
concept regions in machine-learned latent spaces. We develop a set of tools for
measuring convexity in sampled data and evaluate emergent convexity in layered
representations of state-of-the-art deep networks. We show that convexity is
robust to basic re-parametrization and, hence, meaningful as a quality of
machine-learned latent spaces. We find that approximate convexity is pervasive
in neural representations in multiple application domains, including models of
images, audio, human activity, text, and medical images. Generally, we observe
that fine-tuning increases the convexity of label regions. We find evidence
that pretraining convexity of class label regions predicts subsequent
fine-tuning performance. | Lenka TΔtkovΓ‘, Thea BrΓΌsch, Teresa Karen Scheidt, Fabian Martin Mager, Rasmus Γrtoft Aagaard, Jonathan Foldager, Tommy Sonne AlstrΓΈm, Lars Kai Hansen | 2023-05-26T10:33:03Z | http://arxiv.org/abs/2305.17154v2 | # On convex conceptual regions in deep network representations
###### Abstract
The current study of human-machine alignment aims at understanding the geometry of latent spaces and the correspondence to human representations. Gardenfors' conceptual spaces is a prominent framework for understanding human representations. Convexity of object regions in conceptual spaces is argued to promote generalizability, few-shot learning, and intersubject alignment. Based on these insights, we investigate the notion of convexity of concept regions in machine-learned latent spaces. We develop a set of tools for measuring convexity in sampled data and evaluate emergent convexity in layered representations of state-of-the-art deep networks. We show that convexity is robust to basic re-parametrization, hence, meaningful as a quality of machine-learned latent spaces. We find that approximate convexity is pervasive in neural representations in multiple application domains, including models of images, audio, human activity, text, and brain data. We measure convexity separately for labels (i.e., targets for fine-tuning) and other concepts. Generally, we observe that fine-tuning increases the convexity of label regions, while for more general concepts, it depends on the alignment of the concept with the fine-tuning objective. We find evidence that pre-training convexity of class label regions predicts subsequent fine-tuning performance.
## 1 Introduction
Understanding the barriers to human-machine alignment is as important as ever (see, e.g., [10; 45]). Representational alignment is a first step towards a greater aim of understanding value alignment [17]. For the understanding of alignment, it is fundamental to establish a common language for regularities observed in human and machine representations. Here we motivate and introduce the concept of convexity of object regions in machine learned latent spaces.
Representational spaces in the brain can be described in several ways, for example, geometric psychological spaces informed by similarity judgments or, as based in the neurosciences, derived from the measurement of neural activity [5; 57]. Conceptual spaces proposed by Gardenfors are a mature approach to the former, i.e., human-learned geometrical representations of semantic similarity
[32]. The geometrical approach is rooted in work by Shepard [54], who opens with the important observation: "Because any object or situation experienced by an individual is unlikely to recur in exactly the same form and context, psychology's first general law should, I suggest, be a law of generalization". This leads Shepard to favor geometrical representations in which concepts are represented by extended regions rather than single points, to allow for robust generalization. This is the view that has been comprehensively expanded and quantified by Gardenfors and coworkers [32]. The cognitive science insights are complemented by extant work investigating alignment between learned representations in machine and human conceptual spaces [18; 37; 59], and numerous specific properties of the latent geometrical structure have been studied, such as the emergence of semantic separability in machine latent representations [46]. New insights in the representational geometry are found using the intrinsic dimension measure [59]. The relevant geometries are not necessarily flat Euclidean spaces, but often better described as general manifolds [40; 1]. In fact, Henaff et al. [39] suggest that semantic separability emerges by flattening or straightening trajectories in latent spaces, such as was earlier proposed for machine representations [15]. Similar reasoning was crucial for early methodological developments like ISOMAP [58] and general kernel methods [48].
### Convexity in conceptual spaces
Based on Shephard's idea of objects as extended regions, Gardenfors formulated the hypothesis that _natural_ concepts form convex regions in human geometrical representations [30; 32; 63; 27]. Stroosner [55] elaborated on the notion of natural concepts as a social construct: "[Natural concepts] are often found in the core lexicon of natural languages--meaning that many languages have words that (roughly) correspond to such concepts--and are acquired without much instruction during language acquisition." One way to interpret the naturalness notion is to link it to independent physical mechanisms with macroscopic effects, i.e., effects that will be visible to all, hence, likely to appear in joint vocabularies. Such independent mechanisms play a core role in causality [50]. A more low-level interplay between human and machine conceptual representations was discussed in [7] with a specific focus on grounding shape spaces. The work reports good correspondences between human shape representations as obtained by pairwise similarity judgments and machine representations of the shape obtained from supervised and unsupervised learning, however, without touching the question of the convexity of object regions in machines.
Convexity is closely related to generalization in cognitive systems [31; 33]. The defining property of convexity (see Definition 1) implies that categorization can be extended by interpolation. We also note that simple generalization based on closeness to prototypes leads to convex decision regions (Voronoi tessellation induces convex regions) [34]. Interestingly, convexity is also claimed to support few-shot learning [31]. When basic concepts are learned as convex regions, new labels can be formed by geometrically guided composition, leading to new convex regions (e.g., by conjunction) or by other inductions leading to sets of convex regions. Finally, it is claimed that convexity supports communication and interaction and thus the negotiation of meaning between subjects and the emergence of socially universal concepts, i.e., natural concepts [63].
The geometry-driven cognitive science insights motivate our investigation here: _Are generalizable, grounded concepts implemented as convex regions in machine-learned representations?_
In supervised learning, i.e., self-supervised pretraining followed by downstream fine-tuning, classification labels used as supervision signals clearly play a special role. We will carefully measure convexity for classes and other concepts separately in our experimental investigation.
The convexity of conceptual regions in machine-learned representations has not been addressed before, and we, therefore, first need to develop the required investigative tools. Our contributions include
* the introduction of convexity as a new dimension in human-machine alignment.
* recapitulation of salient properties of convex sets in flat and curved spaces.
* proofs that convexity is stable to relevant latent space re-parametrization.
* an efficient workflow to measure convexity in flat and curved latent spaces.
* evidence of pervasive convexity of conceptual regions in self-supervised models for images, audio, movement, text, and brain images.
* evidence that convexity of a conceptual region in a pretrained model predicts labelling accuracy following fine-tuning, see Figure 1.
### Properties of convex sets
Let us first formalize classical convexity in Euclidean spaces.
**Definition 1** (Euclidean Convexity).: _A subset \(S\subset\mathbb{R}^{D}\) is convex iff \(\forall\mathbf{x},\mathbf{y}\in S\ \forall t\in[0,1]\), \(\mathbf{z}(t)=t\mathbf{x}+(1-t)\mathbf{y}\) is also in \(S\) [14]._
From the definition, it follows that the intersection of two convex sets is also a convex set [14]. Hence, conceptual _conjunction_ ('AND' operation) induces convexity. _Disjunction_ ('OR' operation), however, does not, since the union of convex sets is not necessarily convex (it is trivial to construct such examples) [14]. Euclidean convexity is conserved under affine transformations, hence convexity is robust to re-parametrization in deep networks (see proof in appendix A). Euclidean convexity is closely related to conjunctions of linear classifiers. In fact, a convex set can alternatively be defined as the intersection of linear half-spaces (possibly infinite), e.g., implemented by a set of linear decision functions resulting in a polyhedron [14].
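As a toy numerical illustration of Definition 1 and of the half-space characterization, the following sketch samples points inside a small polyhedron \(\{x:Ax\leq b\}\) and verifies that every sampled convex combination remains inside; the specific matrix \(A\) and vector \(b\) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# a polyhedron {x : Ax <= b} in R^2 (intersection of half-spaces, hence convex):
# the triangle with vertices (0,0), (1,0) and (0,1)
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

def inside(x):
    return bool(np.all(A @ x <= b + 1e-12))

# rejection-sample points inside, then test convex combinations (Definition 1)
points = [p for p in rng.random((10_000, 2)) if inside(p)]
for _ in range(1_000):
    i, j = rng.choice(len(points), size=2, replace=False)
    t = rng.random()
    z = t * points[i] + (1 - t) * points[j]
    assert inside(z)
print("all sampled convex combinations stayed inside the polyhedron")
```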
The relevant geometric structure of deep networks is not necessarily Euclidean, hence we need to generalize to manifolds. In a Riemannian manifold \(M\) with metric tensor \(g\), the length \(L\) of a continuously differentiable curve \(\mathbf{z}:[0,1]\to M\) is defined by \(L(\mathbf{z})=\int_{0}^{1}\sqrt{g_{\mathbf{z}(t)}(\hat{\mathbf{z}}(t),\hat{ \mathbf{z}}(t))}dt\), where \(\hat{\mathbf{z}}(t):=\frac{\partial}{\partial t}\mathbf{z}(t)\). A geodesic is then a curve connecting \(\mathbf{z}(0)=\mathbf{x}\) and \(\mathbf{z}(1)=\mathbf{y}\), minimizing this length, i.e. \(\mathrm{geodesic}(\mathbf{x},\mathbf{y})=\mathrm{argmin}_{\mathbf{z}}L( \mathbf{z})\). While geodesics are unique for Euclidean spaces, they may not be unique in manifolds. We can now generalize to geodesic convexity in manifolds:
**Definition 2** (Geodesic Convexity).: _A region \(S\in M\) is geodesic convex, iff \(\forall\mathbf{x},\mathbf{y}\in S\), there exists at least one geodesic \(\mathbf{z}(t)\) connecting \(\mathbf{x}\) and \(\mathbf{y}\), that is entirely contained in \(S\)._
When modelling latent spaces with sampled data, we must further transform the above definitions to data-driven estimators; such efforts are reported, e.g., in [40, 1]. In this paper, we choose a simple approach based on graph convexity, which applies to both Euclidean and Riemannian spaces: a subset of nodes \(S\) in a graph is convex if, for all pairs of nodes in \(S\), the shortest path connecting them is entirely within \(S\). Graph convexity has, for instance, been used to describe complex networks [47]. For sampled data, we can form a graph from Euclidean nearest neighbors (as manifolds are by definition locally Euclidean); this approach parallels the ISOMAP procedure.
We note two important properties of this estimator: first, the graph-based approximate convexity measure is invariant to isometric transformations and uniform scaling, and second, the sample-based
estimator of convexity is consistent. Both aspects are discussed further in the appendix A. The invariance to isometry and uniform scaling means that the approximate convexity property is robust to certain network re-parametrization [42].

Figure 1: We measure the graph convexity of each class in the pretrained model and evaluate the recall of each class after fine-tuning. \(H_{\theta}\) is the pretrained model, \(H_{\theta^{*}}\) is the fine-tuned model, and \(\sigma\) is the classification head trained during fine-tuning. Fine-tuning involves all layers of the encoder. We present evidence (the inset and Figure 5) that higher convexity of downstream classes in the pretrained model is indicative of higher recall in the fine-tuned model.
As we will measure convexity in concept sub-graphs within larger background graphs, Dijkstra's algorithm is preferred over ISOMAP's Floyd-Warshall algorithm. Dijkstra's algorithm finds the shortest path from a given node to each of the other \(N\) nodes in the graph with \(E\) edges in \(\mathcal{O}(N\log N+E)\)[26; 29], while Floyd-Warshall efficiently finds the shortest distance between all vertices in the graph in \(\mathcal{O}(N^{3})\)[20; 28]. Since we have a sparse graph with \(E\ll N^{2}\), Dijkstra's algorithm will be more efficient. With these approximations, we are in a position to create a graph-based workflow for quantifying convexity in Euclidean and manifold-based structures. Note, for sampled data, we expect a certain level of noise, hence, convexity will be graded.
### Neural networks and convexity
Should we expect convexity of 'categorical' regions in neural networks? Indeed, there are several mechanisms that could contribute to promoting convexity. First, the ubiquitous softmax is essentially a convexity-inducing device, hence, typical classification heads will induce convexity in the immediate representation. This is most easily seen by noting that softmax decision regions (maximum posterior decisions) are identical to the decision regions of a linear model, and linear models implement convex decision regions (see appendix A for formal proof). Secondly, several of the models we investigate in the present study are based on transformer architectures with attention heads. These heads contain softmax functions and are thus inducing convexity in their weighing of attention. Thirdly, typical individual artificial neurons are latent half-space detectors, and half-spaces are convex as noted above. Note that multi-layer perceptrons can approximate any non-convex decision region, including disconnected decision regions [12].
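The first point is easy to verify numerically: the argmax of a softmax over affine scores equals the argmax of the affine scores themselves, so each decision region is an intersection of half-spaces and hence convex. A small sanity check with randomly chosen, purely illustrative weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(5, 10)), rng.normal(size=5)    # 10-dim features, 5 classes

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    # argmax of the softmax equals argmax of the affine scores W @ x + b
    return int(np.argmax(softmax(W @ x + b)))

# if two points share a predicted class, every point on the segment between them does too
for _ in range(1_000):
    x, y = rng.normal(size=10), rng.normal(size=10)
    if predict(x) == predict(y):
        c = predict(x)
        for t in np.linspace(0.0, 1.0, 25):
            assert predict(t * x + (1 - t) * y) == c   # ties have probability zero here
print("softmax decision regions behaved convexly on all sampled segments")
```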
### Concept based explainability
Linear-classifier-based probes are widely used in natural language processing to understand the presence of concepts in latent spaces [9; 41]. A prominent example of probe-based explanation in image classification is the TCAV scheme proposed by Kim et al. [42], in which auxiliary labelled data sets are used to identify concept directions with linear classifiers (concept class versus random images). Regions defined by linear classifiers are convex, but more general convexity is possible, c.f. above, regions defined by the intersection of a number of linear classifiers (possibly infinite). More general regional concept analysis is discussed in [21]. Following [35], three aspects are of particular importance for concept-based explainability: i) _Meaningfulness_ - an explanatory concept is semantically meaningful on its own. Meaningfulness is social, i.e., individuals should associate similar meanings to the concept; ii) _Coherence_ - the examples of a concept should be perceptually similar and different from examples of other concepts; iii) _Importance_ - a concept is "important" for the prediction of a class if its presence is necessary... the object whose presence is being predicted is necessary while the background is not. In our analyses, we will consider these aspects when we discuss the relevance of convex concept regions.
## 2 Methods
### Convexity measurement workflow
We are interested in measuring the approximate convexity of a conceptual region, here, a subset of nodes in a graph. A conceptual region is a set of annotated points. Annotations can be related to _class labels_, i.e., targets for fine-tuning or _concepts_, i.e., entities that can be used for a conceptual explanation but are not directly used as targets in fine-tuning. We note that a concept can be evidence for some class labels and counter-evidence for other classes.
We first create a graph that contains all the data points of interest (comprising a region of interest and a background of data points of other classes/concepts and unannotated data). The points are nodes in the graph and the Euclidean distances between the nodes are the weights of the edges. To handle manifold-based representation, for each node, we create an undirected edge only to the nearest neighbors (\(K=10\)). This procedure creates a sparse undirected weighted graph with positive weights only.
We now sample pairs of points within the given concept/class label and compute the shortest path in the graph between the pairs using Dijkstra's algorithm [26]. For each path, we compute a score between 0 and 1. The path score is defined as the proportion of nodes on the shortest path, excluding the endpoints, that lie inside the concept. If an edge directly connects the pair of nodes, the score is 1. If the points are not connected, the score is 0. We average the scores for all paths and all concepts/classes and get one number for concepts and one for classes per layer. Error bars in the results show the standard error of the mean, where we set \(n\) to the number of points in the given concept/class. This is likely a conservative estimate since the mean is based on many more pairs than there are points. The score depends on the number of concepts and classes, and also on the number of data points per class/concept. It follows that our results are not directly comparable across modalities. To mitigate this problem, we balance the class and concept data and add background points not belonging to any class or concept.
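A compact sketch of this scoring procedure is given below, using a symmetrized \(K=10\) nearest-neighbor graph and scipy's Dijkstra implementation; variable names and the pair-sampling details are simplified relative to our full pipeline.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

def graph_convexity(Z, labels, concept, n_pairs=500, k=10, seed=0):
    """Average path score of one concept region in the latent representations Z.

    Z:       (N, D) array of latent vectors (concept, other concepts, background).
    labels:  length-N integer array of concept ids (e.g. -1 for background points).
    concept: the concept id whose convexity is measured.
    """
    rng = np.random.default_rng(seed)
    # symmetrized k-nearest-neighbor graph with Euclidean edge weights
    G = kneighbors_graph(Z, n_neighbors=k, mode="distance")
    G = G.maximum(G.T)

    idx = np.flatnonzero(labels == concept)
    scores = []
    for _ in range(n_pairs):
        s, t = rng.choice(idx, size=2, replace=False)
        _, pred = dijkstra(G, directed=False, indices=s, return_predecessors=True)
        if pred[t] < 0:                        # the pair is not connected
            scores.append(0.0)
            continue
        path, node = [], t
        while node != s:                       # walk back along the shortest path
            path.append(node)
            node = pred[node]
        interior = path[1:]                    # drop endpoint t; s was never appended
        if not interior:                       # a direct edge connects the pair
            scores.append(1.0)
        else:
            scores.append(float(np.mean(labels[np.array(interior)] == concept)))
    return float(np.mean(scores))
```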
Measurements based on neighbors in high dimensional data can be sensitive to the so-called hubness problem [51]. We evaluate the hubness of the latent representations in terms of k-skewness and the Robinhood score. Results are deferred to the appendix C.1-C.5, since only mild hubness issues were detected for most domains. We decided to analyse convexity without adjustment for hubness, to avoid possible biases introduced by normalization schemes [51].
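For reference, the k-skewness diagnostic can be computed from the k-occurrence distribution of the same nearest-neighbor structure; a minimal sketch (the Robinhood score is omitted) is:

```python
import numpy as np
from scipy.stats import skew
from sklearn.neighbors import NearestNeighbors

def k_skewness(Z, k=10):
    """Skewness of the k-occurrence distribution: how often each point appears in
    other points' k-nearest-neighbor lists. Large positive values indicate hubs."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    _, idx = nn.kneighbors(Z)
    neighbors = idx[:, 1:]                          # drop each point itself
    occurrences = np.bincount(neighbors.ravel(), minlength=len(Z))
    return float(skew(occurrences))
```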
### Domains and data
**Image domain**. We used the ImageNet-1k images and class labels [53; 24] in our experiments. The validation set contains 50 images per class. We used additional _concepts_ of two types: texture and colors. Texture data comes from Describable Textures Dataset (DTD) [19]. It contains 47 textures with 120 images per texture. Color data is a combination of images from [2] and Digikala Products Color Classification.1. We have 9 colors and for each color, we used all images from the Color classification dataset and supplied them with randomly sampled images of the corresponding color from Digikala. For this experiment, we balanced the number of concepts and classes and the number of image examples in each. We randomly chose 56 ImageNet classes and 50 images from each concept. In total, we have 2800 "concept" images and 2800 "class" images. To simulate random background images, we added 5600 images from the validation set of Places365 [44].
Footnote 1: [https://www.kaggle.com/datasets/masouduut94/digikala-color-classification](https://www.kaggle.com/datasets/masouduut94/digikala-color-classification)
The network model is data2vec-base [3]. For details on architecture and training, see appendix B.1. We extracted the input embedding together with 12 layers of dimension 768 for geometric analysis.
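Layer-wise features of this kind can be obtained from published checkpoints via the HuggingFace transformers interface; the sketch below is illustrative, and both the checkpoint name and the token-mean pooling are our assumptions rather than the exact extraction code (see appendix B.1).

```python
import torch
from transformers import AutoImageProcessor, AutoModel

# checkpoint name is an assumption for illustration; see [3] and appendix B.1
CHECKPOINT = "facebook/data2vec-vision-base"
processor = AutoImageProcessor.from_pretrained(CHECKPOINT)
model = AutoModel.from_pretrained(CHECKPOINT, output_hidden_states=True)
model.eval()

def layerwise_features(image):
    """Return one pooled 768-dim vector per layer (embedding + 12 transformer blocks)."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of (1, n_tokens, 768) tensors, one per layer;
    # mean pooling over tokens is an illustrative choice, not necessarily the one used here
    return [h.mean(dim=1).squeeze(0) for h in outputs.hidden_states]
```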
**Human activity domain**. We used a pretrained human activity model from [67] to extract the latent representations. The model is pretrained in a self-supervised manner on a large unlabelled dataset from the UK Biobank. The model follows the architecture of ResNet-V2 with 1D convolutions and a total of 21 convolutional layers. The resulting feature vector after the final pretrained layer is of dimension 1024. For additional information on the network and the data see appendix B.2. For testing the convexity of the model, we use the Capture24 dataset [65]. In [62], the original set of 213 labels was divided into 4 coarse class labels, namely; sleeping, sedentary behavior, light physical activity behaviors, and moderate-to-vigorous physical activity behaviors based on the metabolic equivalent of task (MET) scores. These 4 labels are used as classes when fine-tuning the model. In [65] the MET score is similarly used to divide the labels into 11 categories. One of the categories overlaps with the'sleep' label from the classes and is thus excluded. The remaining 10 categories are used as human activity _concepts_. Since all data points are associated with both a label and a concept, the convexity analysis is done separately for classes and concepts. For each concept, we sample \(1000\) points (or the maximum number available), we then sample \(1000\) points (or maximum available) from each of the remaining concepts (as well as the class label'sleep') to use as background. This yields a total of \(N=9139\) points. We sample \(N\) points from the WISDM human activity dataset [64] to add as background points. We then analyze the convexity of each concept. We subsequently map all points to their corresponding class and analyze the convexity of each class on the same graph.
**Audio domain**. We use the pretrained wav2vec2.0 model [4], pretrained on the Librispeech corpus (960h of unlabeled data) [49]. The model consists of a CNN-based feature encoder, a transformer-based context network, and a quantization module. We are especially interested in the latent space representation in the initial embedding layer and the 12 transformer layers. After each layer, we extract the feature vector of dimension 768.
We fine-tuned the model to perform digit classification based on the AudioMNIST dataset [8], which consists of 30000 audio recordings of spoken digits (0-9) in English of 60 different speakers (with 50 repetitions per digit). For additional information see the appendix B.3. For the convexity analysis, we compare the latent representations of different classes and concepts. For the AudioMNIST dataset, the classes are the 10 digits. As concepts we use gender and speaker id, which are provided as metadata of the dataset. Additionally, we transcribe the phonemes of each audio file using "WebMAUS Basic" [56]. The phonemes of each audio file are cut into individual audio files and the latent representations are extracted for each phoneme. To evaluate the robustness of the results, relevant background points are added to the dataset, that do not overlap with the classes. In the audio case, the Speech Commands dataset [8] was used to provide these background points. Ten words ('yes', 'no', 'up', 'down', 'left', 'right', 'on', 'off','stop', 'go') with 600 repetitions each (matching the data amount of the test set) were randomly chosen from the dataset.
**Text domain**. Our NLP case study is the base version of RoBERTa [43], which is pretrained to perform Masked Language Modelling [25] in order to reconstruct masked pieces of text. The model consists of an embedding layer followed by 12 transformer encoder layers. The pretraining of RoBERTa was performed on 160GB of uncompressed English-language text in order to learn latent representations of text which are expressed as 768-dimensional vectors.
Fine-tuning class labels were the simple positive or negative sentiment annotations of the TweetEval dataset [6; 52]. The GoEmotions dataset [23] is used to provide related concepts. The dataset is a filtered collection of texts from reddit.com which are labelled in accordance with the emotions they express. There is a total of 27 different categories of emotions which are labelled in the dataset. Texts can be annotated with multiple labels and are labelled multiple times by different annotators. In order to have a simple text-label relationship we only consider a single random instance of an annotation of each text and a single label that will define the text's concept. In order to balance and minimize the dataset, we only consider a subset of 12 emotions with an even split between emotions that are associated with positive and negative sentiment. Of this subset, we sample 300 texts from each of the 12 emotions. Additionally, we also include 300 "neutral" texts, which are also labelled in the GoEmotions dataset, to act as background data points for some experiments. All data were obtained from HuggingFace. For additional details see appendix B.4.
**Medical Imaging domain**. Anatomically (T1) weighted 3T Magnetic Resonance Imaging (MRI) data with gender labels are obtained from the Human Connectome Project S1200 Subjects Release [60]. We use the structurally preprocessed data as described in [36] of 1113 subjects. We train a self-supervised model using a modified version of the Masked-Auto-Encoder (MAE) [38], for details see appendix B.5. We use anatomical brain regions as concepts and gender as class labels (for fine-tuning). To define brain regions, the original segmentation mask consisting of multiple cortical and sub-cortical regions is downsampled to 13 distinct major regions: White Matter, Ventricles, Cerebellum, Thalamus, Caudate, Putamen, Pallidum, Brain-Stem, Hippocampus, Amygdala, Accumbens, Corpus Callosum, and Cortex. To obtain the feature vector \(\mathbf{v}\) for a particular concept \(c\) in layer \(l\), we calculate the weighted sum over all embedding vectors \(\mathbf{u}_{j}\) over the subvolumes \(j\), such that \(\mathbf{v}_{l,c}=\sum_{j=1}^{J}w_{c,j}\mathbf{u}_{l,j}\), where \(w_{c,j}\) is the ratio of voxels labelled with concept \(c\) in subvolume \(j\) to the total number of voxels in \(j\), and \(\sum_{j=1}^{J}w_{c,j}=1\). To retrieve the feature vector \(\mathbf{v}_{l}\) for the class label gender in layer \(l\), we calculate the mean over all subvolumes, i.e., \(w_{c,j}=\frac{1}{J}\). For a total of \(B\) subjects, this results in \(B\cdot(13+2)\) feature vectors for 13 concepts and 2 classes. For the convexity analysis, we sample 200 subjects for each class and concept as well as 200 subjects used as background points, none of which were used in the training set for pretraining or fine-tuning.
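As an illustration of the weighted sum \(\mathbf{v}_{l,c}=\sum_{j}w_{c,j}\mathbf{u}_{l,j}\) defined above, a minimal numpy sketch is given below; the array layout and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def concept_feature(embeddings, concept_mask, subvolume_ids):
    """
    embeddings    : (J, D) array, one embedding u_{l,j} per subvolume j for a given layer l
    concept_mask  : (V,) boolean array, True where a voxel carries the concept c
    subvolume_ids : (V,) int array mapping each voxel to its subvolume index 0..J-1
    """
    J = embeddings.shape[0]
    # w_{c,j}: fraction of voxels in subvolume j that carry concept c
    w = np.array([
        concept_mask[subvolume_ids == j].mean() if np.any(subvolume_ids == j) else 0.0
        for j in range(J)
    ])
    w = w / max(w.sum(), 1e-12)                 # normalise so the weights sum to one
    return (w[:, None] * embeddings).sum(axis=0)   # v_{l,c}
```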
## 3 Results
### Structure of latent representations of the image domain before and after fine-tuning
To gain intuition on the effect of fine-tuning, we first inspect t-SNE plots for a subset of classes and concepts (Figure 2) of the image domain. Interestingly, while the pretrained model never 'saw' the labels, it clusters both the classes and concepts towards the end of the network. The _background_ points are well mixed with both concepts and classes, presenting evidence that they are in scope for the network although they originate from a completely different source. The observed structure in
these low-dimensional representations adds motivation to our investigation of the convexity of both class labels and concept regions in the high dimensional latent spaces.
After fine-tuning, we find, as expected, that the model representation clearly separates the classes in the last layer. Figure 2 shows that color concepts are mostly clustered in the input layer, whereas texture concepts get more clustered towards the end of the model.
The left panel in Figure 3 shows the convexity scores across all layers for the image domain and the right panel for the audio domain. For images, we notice that the convexity of classes increases throughout the network. The convexity of concepts initially increases but decreases in the second half of the model. Similar results are obtained when the analysis does not include background points (details in appendix C.1). The scores for both concepts and classes increase after fine-tuning for the image domain. For the audio data, the figure shows that the concepts phonemes and speaker ID react differently to fine-tuning. Phonemes, being relevant for recognition, increase their convexity upon fine-tuning, while the irrelevant speaker ID decreases in convexity, as expected.
Figure 2: Images: t-SNE plots for a subset of classes (top) and concepts (bottom), with 50 images per concept or class and 1000 background points, both before and after fine-tuning.
### Cross domain analysis
We measure convexity for classes and concepts within the pretrained networks and after fine-tuning with class labels as targets. Figure 4 shows the results for all modalities. The convexity is pervasive both before and after fine-tuning. Generally, class convexity is increased by fine-tuning; for the concepts, however, the picture is more mixed. This is expected, as concepts can be differentially associated with labels or a subset of labels, as we saw exemplified in the audio domain.
For detailed results of the analysis of individual data modalities, see appendix C.1-C.5.
To test the cognitive science motivated hypothesis that convexity facilitates categorization, we plot the post-fine-tuning accuracy (recall) as a function of pretraining convexity in Figure 5. The link is non-linear but there is a pronounced relation so that high convexity in the pretrained model predicts high accuracy after fine-tuning. The link is stronger the more data enters the fine-tuning.
Ghorbani et al. [35] listed core dimensions for a concept-based understanding of machine-learned representations. We find that these dimensions are well aligned both with the relevant cognitive science foundation of our investigation and with the new results. _Meaningfulness_,
Figure 4: Graph convexity scores for all modalities. Convexity is pervasive in all networks. The number of layers differs across models, but the left-most layer is the first layer that we observe and the right-most layer is the last layer in each model. For audio, the concept phonemes is displayed. Error bars are omitted in this plot for clarity (see the figures of individual modalities in the appendices C.1-C.5). Note that the results are not directly comparable across modalities (see Section 2.1).
Figure 3: Graph convexity scores for images (left) and audio (right). The light color bands show the standard error of the mean as discussed in Section 2.1. The image results were computed on 56 classes + 56 concepts (50 images per concept/class) + 5600 background points. The audio results were computed on 10 classes, 12 speaker ids and 20 phonemes + 6000 background points.
i.e., an explanatory concept is semantically meaningful on its own. We note that such construction of meaning is social, i.e., individuals should associate similar meanings to the concept. This is related to the notion of _naturalness_ of concepts in cognitive science. Convexity of category regions in human and machine representations is consistent with the concept of _coherence_, i.e., examples of a concept should be perceptually similar, and different from examples of other concepts. Finally, the _importance_ of a category label can be linked to objectives such as downstream fine-tuning accuracy. We found that pretraining label region convexity predicts subsequent generalization following fine-tuning, indicating that _convexity could be an important signal for categorization_.
## 4 Conclusion
Understanding machine and human representational spaces is important for alignment and trust. Such investigations can be furthered by aligning the vocabularies of investigations into human and machine representations. Inspired by the conceptual spaces research of Gärdenfors and coworkers, we introduce the idea of convexity as a new dimension in human-machine alignment. Convexity is closely related to generalization in cognitive systems. Machine representations can be best described by curved spaces; hence, we recapitulated salient properties of convex sets in flat and curved spaces. We found that convexity is stable to relevant latent space re-parametrizations for deep networks. We developed a workflow for the estimation of approximate convexity based on graph methods which can be used to measure convexity in flat and curved latent spaces. We carried out extensive experiments in multiple domains including visual object recognition, human activity data, audio, text, and medical imaging. Our experiments included networks trained by self-supervised learning and next fine-tuned on domain-specific labels. We found evidence of pervasive convexity of conceptual regions in pretrained models. On fine-tuning, we found that class region convexity generally increases, while for other (untrained) concepts, convexity may increase or decrease, reflecting such concepts' relevance to the labels. Importantly, we found evidence across the data domains that label region convexity in the pretrained networks predicts accuracy following fine-tuning. Recall that the pretrained networks have never 'seen' the labels. The latter observation is evidence that the convexity signal is an indicator of how well pretrained networks are prepared for future generalizations by the self-supervised learning procedure.
## 5 Limitations and broader impact
**Limitations:** Our graph convexity workflow is quite general. However, it is limited by uncertainty related to the number of sampled points, the number of concepts and classes, etc. Although we
Figure 5: Convexity of a subset of classes in the pretrained models vs. recall of these classes in the fine-tuned models for all data domains. The standard errors of the mean for the graph convexity scores are on average (for images, human activity, audio, text, and brain, respectively) 0.85, 0.33, 0.11, 3.06, 2.36, and the standard errors of the mean for the accuracy are on average 4.55, 0.82, 0.31, 1.37, 3.36 (see appendix C for the plot with included error bars). The correlation coefficient is 0.52.
evaluate convexity in several modalities, there are fundamental limitations when directly comparing results across modalities because of the differences in data, label structures, and models available.
**Broader impact:** Human-machine alignment is important for human-centered AI. The development of common vocabularies and representational analyses can support the understanding of alignment. The field of explainable AI has developed a plethora of methods to better understand black-box models and avoid undesirable biases. It is an aim of the present work to contribute towards this goal. We acknowledge that such understanding can also be used by adversaries and increase the risk of misuse.
## Acknowledgments and Disclosure of Funding
This work was supported by the DIREC Bridge project Deep Learning and Automation of Imaging-Based Quality of Seeds and Grains, Innovation Fund Denmark grant number 9142-00001B. This work was supported by the Pioneer Centre for AI, DNRF grant number P1 and the Novo Nordisk Foundation grant NNF22OC0076907 "Cognitive spaces - Next generation explainability". This work was supported by the Danish Data Science Academy, which is funded by the Novo Nordisk Foundation (NNF21SA0069429) and VILLUM FONDEN (40516). This work was partially supported by DeiC National HPC (g.a. DeiC-DTU-N5-20230028) and by the "Alignment of human and machine representations" project (g.a. DeiC-DTU-N5-20230033) and by the "self-supervised brain model" project (g.a. DeiC-DTU-S5-202300105). We acknowledge Danish e-infrastructure Cooperation (DeiC), Denmark, for awarding this project access to the LUMI supercomputer, owned by the EuroHPC Joint Undertaking, hosted by CSC (Finland) and the LUMI consortium through Danish e-infrastructure Cooperation (DeiC), Denmark, "Alignment of human and machine representations", DeiC-DTU-N5-20230028.
|
2308.09396 | Unveiling Causalities in SAR ATR: A Causal Interventional Approach for
Limited Data | Synthetic aperture radar automatic target recognition (SAR ATR) methods fall
short with limited training data. In this letter, we propose a causal
interventional ATR method (CIATR) to formulate the problem of limited SAR data
which helps us uncover the ever-elusive causalities among the key factors in
ATR, and thus pursue the desired causal effect without changing the imaging
conditions. A structural causal model (SCM) is comprised using causal inference
to help understand how imaging conditions acts as a confounder introducing
spurious correlation when SAR data is limited. This spurious correlation among
SAR images and the predicted classes can be fundamentally tackled with the
conventional backdoor adjustments. An effective implement of backdoor
adjustments is proposed by firstly using data augmentation with
spatial-frequency domain hybrid transformation to estimate the potential effect
of varying imaging conditions on SAR images. Then, a feature discrimination
approach with hybrid similarity measurement is introduced to measure and
mitigate the structural and vector angle impacts of varying imaging conditions
on the extracted features from SAR images. Thus, our CIATR can pursue the true
causality between SAR images and the corresponding classes even with limited
SAR data. Experiments and comparisons conducted on the moving and stationary
target acquisition and recognition (MSTAR) and OpenSARship datasets have shown
the effectiveness of our method with limited SAR data. | Chenwei Wang, Xin Chen, You Qin, Siyi Luo, Yulin Huang, Jifang Pei, Jianyu Yang | 2023-08-18T08:54:02Z | http://arxiv.org/abs/2308.09396v1 | # Unveiling Causalities in SAR ATR: A Causal Interventional Approach for Limited Data
###### Abstract
Synthetic aperture radar automatic target recognition (SAR ATR) methods fall short with limited training data. In this letter, we propose a causal interventional ATR method (CIATR) to formulate the problem of limited SAR data, which helps us uncover the ever-elusive causalities among the key factors in ATR and thus pursue the desired causal effect without changing the imaging conditions. A structural causal model (SCM) is constructed using causal inference to help understand how imaging conditions act as a confounder introducing spurious correlation when SAR data is limited. This spurious correlation among SAR images and the predicted classes can be fundamentally tackled with the conventional backdoor adjustments. An effective implementation of backdoor adjustments is proposed by first using data augmentation with spatial-frequency domain hybrid transformation to estimate the potential effect of varying imaging conditions on SAR images. Then, a feature discrimination approach with hybrid similarity measurement is introduced to measure and mitigate the structural and vector angle impacts of varying imaging conditions on the extracted features from SAR images. Thus, our CIATR can pursue the true causality between SAR images and the corresponding classes even with limited SAR data. Experiments and comparisons conducted on the moving and stationary target acquisition and recognition (MSTAR) and OpenSARShip datasets have shown the effectiveness of our method with limited SAR data.
SAR, ATR, Causal Graph, Interventional Training
## I Introduction
Synthetic aperture radar (SAR) is a flexible remote sensing technology, useful in multiple civil and military contexts, offering high-resolution imaging regardless of time or weather [1, 2, 3]. Its key application, automatic target recognition (ATR), has evolved over the past fifty years [4, 5, 6]. Notably, the last decade has seen considerable improvements in ATR's performance, largely driven by advancements in deep learning technology [7, 8, 9, 10, 11, 12, 13, 14].
While current deep learning-based SAR ATR methods exhibit encouraging performance, they inherently rely on extensive information drawn from a large collection of SAR images. However, the process of generating a substantial volume of SAR images, coupled with the requirement for accurate labelling, is both resource-intensive and time-consuming [15, 16]. As a result, there is often a shortage of information available for supervised training, which ultimately impacts the effectiveness of these methods. This issue underscores a critical disconnect between the theoretical design of ATR methods and their practical applications, with existing methods often falling short when deployed in real-world contexts [17, 18, 19]. This problem, termed as SAR ATR with limited training data, has recently become a focus in research [20, 21, 22, 23].
The primary challenge in ATR with limited training data is the weak performance caused by the sensitivity of SAR images to imaging conditions, as shown in the causal graph in Fig. 1 (a). The causal graph is constructed based on the assumption of the causalities among the SAR images \(I\), the imaging conditions \(IC\), the extracted feature \(X\), and the classification \(Y\). When the imaging conditions \(IC\) are changing, SAR images within the same class show obvious variance in their scattering characteristics. Besides, it is inevitable that the features \(X\) contain some features \(IF\) affected by \(IC\). As shown in Fig. 1 (b), even only a change in the azimuth angle causes inner-class SAR images to display distinct scattering characteristics. In the process of recognition, the ATR method aims to model \(P(Y|X)\), but the imaging conditions \(IC\) act as a confounder that is a common cause of the features via \(IC\to X\) and of the classification \(Y\) via \(IC\to IF\to Y\). As a result, the imaging conditions introduce spurious correlation in the process of modeling \(P(Y|X)\), thus leading to the weak performance of ATR methods with limited SAR data.
Therefore, in this letter, we propose a causal interventional ATR method (CIATR) with limited SAR data that not only fundamentally analyzes the role of imaging conditions in the recognition, but also provides a principled solution to improve the recognition performance. Specifically, our contributions are
Fig. 1: Causal graph for ATR with limited SAR data. (a) \(I\) is SAR images, \(IC\) is the imaging conditions, \(X\) is the features affected by \(IC\), and \(Y\) is the predicted classes. (b) Even only the azimuth angle is changing, the scattering characteristics of SAR images varies. (c) is our CIATR as a solution of the decoupling training for ATR with limited SAR data.
summarized as follows.
1) Section 2.1 introduces a Structural Causal Model (SCM) that elucidates why imaging conditions, while negligible with ample SAR data, act as a confounder introducing spurious correlation into the ATR model when data is limited.
2) Section 2.2 then outlines specific effective implementations using backdoor adjustment [24], which mainly consist of two steps: data augmentation with spatial-frequency domain hybrid transformation, and feature discrimination with hybrid similarity measurement.
3) Thanks to the causal intervention, the CIATR achieves state-of-the-art recognition performances on MSTAR and OpenSARship data set with different numbers of training samples. The ablation experiments have validated the effectiveness of the CIATR.
The rest of this letter is organized as follows. Section II presents the causal graph and solution for ATR with limited SAR data. Section III verifies the effectiveness of the proposed method with experiments, and Section IV gives our conclusion.
## II Causal Interventional ATR Method
This section starts with the introduction of a Structural Causal Model (SCM), detailing the cause-and-effect relationships among variables in SAR ATR. A core solution is proposed to mitigate the false correlations arising due to imaging conditions, thereby enhancing recognition performance even when SAR data is scarce.
### _Structural Causal Model_
To systematically analyze ATR with limited SAR data, we construct an SCM, a directed acyclic graph that elucidates the influence of the variables of interest \(\mathbf{I}\), \(\mathbf{X}_{v}\), and \(\mathrm{IC}\) on the ATR model's recognition result \(Y\). Each arrow represents a causal relationship between two nodes. The fundamental reasoning behind the SCM is detailed below.
\(\mathbf{I}\to Y\). The ATR model aims to achieve precise recognition \(Y\) conditioned on \(\mathbf{I}\), i.e., \(P(Y|\mathbf{I})\). There are two paths through which \(\mathbf{I}\) determines \(Y\): 1) \(\mathbf{I}\to Y\) is the direct path, which means \(\mathbf{I}\) has a direct effect on \(Y\). 2) \(\mathbf{I}\rightarrow\mathbf{X}_{v}\to Y\) is the mediation path, which means that the features \(\mathbf{X}_{v}\) extracted from \(\mathbf{I}\) play the mediator role in the recognition process.
\(\mathrm{IC}\rightarrow\mathbf{I}\to Y\). The sensitivity of SAR images \(\mathbf{I}\) to the imaging conditions \(\mathrm{IC}\) leads to the scattering characteristic varying when \(\mathrm{IC}\) is changing. The imaging conditions contain many aspects, for example, the azimuth angle and the parameters of the imaging platform. Even if one factor of imaging conditions changes, i.e., the azimuth angle, the scattering characteristic of SAR image varies obviously. As a result, the imaging conditions \(\mathrm{IC}\) affect the scattering characteristics of the entire SAR image \(\mathbf{I}\), thereby influencing the classification \(Y\).
\(\mathrm{IC}\rightarrow\mathbf{X}_{v}\leftarrow\mathbf{I}\). The features \(\mathbf{X}_{v}\) denote the low-dimensional representations of the SAR image \(\mathbf{I}\). 1) \(\mathrm{IC}\rightarrow\mathbf{X}_{v}\). The features \(\mathbf{X}_{v}\) contain features from both the targets and the background, and the imaging conditions \(\mathrm{IC}\) have an obvious effect on the background, such as the clutter and shadow regions. Thus, both the imaging conditions \(\mathrm{IC}\) and the SAR image \(\mathbf{I}\) have a causal effect on the features \(\mathbf{X}_{v}\). When modeling the recognition \(P(Y|\mathbf{I})\), the information of \(\mathrm{IC}\) affects not only the SAR images but also the extracted features \(\mathbf{X}_{v}\).
An ideal ATR model with limited SAR data should rely on the true causality between \(\mathbf{I}\) and \(Y\) to achieve precise recognition. However, as mentioned above, the conventional modeling \(P(Y|\mathbf{I})\) fails to be ideal, because the likelihood of \(Y\) given \(\mathbf{I}\) is not only due to \(\mathbf{I}\to Y\) and \(\mathbf{I}\rightarrow\mathbf{X}_{v}\to Y\), but also to the spurious correlation introduced by \(\mathrm{IC}\) via \(\mathrm{IC}\rightarrow\mathbf{I}\) and \(\mathrm{IC}\rightarrow\mathbf{X}_{v}\to Y\).
Therefore, to obtain an ideal ATR model with limited SAR data, it is necessary to pursue the true causality between the \(I\) and \(Y\) without the spurious correlation introduced by \(\mathrm{IC}\). Fortunately, the backdoor adjustment can be used to implement the causal intervention \(P(Y|do(\mathbf{I}))\) to mitigate the spurious correlation introduced by \(IC\):
\[P(Y|do(\mathbf{I}))=\sum_{\mathbf{X}_{v}}\sum_{\mathrm{IC}}P(Y|\mathbf{I},\mathbf{X}_{v},\mathrm{IC})P(\mathbf{X}_{v}|\mathbf{I},\mathrm{IC})P(\mathrm{IC}) \tag{1}\] \[=\sum_{\mathbf{X}_{v}}\sum_{\mathrm{IC}}P(Y|\mathbf{I},\mathbf{X}_{v})P(\mathbf{X}_{v}|\mathbf{I},\mathrm{IC})P(\mathrm{IC}) \tag{2}\] \[=\sum_{\mathrm{IC}}P(Y|\mathbf{I},\mathrm{IC},\mathbf{X}_{v}=g(\mathbf{I},\mathrm{IC}))P(\mathrm{IC}) \tag{3}\]
Due to rule 1 of do-calculus [25], \(\mathrm{IC}\) does not affect \(Y\) directly, so \(P(Y|\mathbf{I},\mathbf{X}_{v},\mathrm{IC})\) in Eq. (1) can be replaced by
Fig. 2: Specific implement of backdoor adjustments in our CIATR.
\(P(Y|\mathbf{I},\mathbf{X}_{v})\), yielding Eq. (2). Eq. (3) holds because, in our SCM, \(\mathbf{X}_{v}\) takes a deterministic value given by the function \(g(\mathbf{I},\mathrm{IC})\). The equations above mean that if the imaging conditions are observable, it is possible to employ a physical intervention by stratifying \(\mathrm{IC}\) to mitigate the spurious correlation it introduces. Stratifying \(\mathrm{IC}\) means producing an integrated set of \(\mathbf{X}_{v}\) using every value of \(\mathrm{IC}\) for any given SAR image \(\mathbf{I}\). Therefore, in the process of modeling \(P(Y|do(\mathbf{I}))\), eliminating the influence of \(\mathrm{IC}\) and \(\mathbf{X}_{v}\) in this way achieves Eq. (2).
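To make the backdoor adjustment of Eq. (3) concrete, the following toy numerical sketch (with made-up probability tables, not taken from the paper) shows how the interventional distribution is obtained by averaging condition-specific predictions with the prior \(P(\mathrm{IC})\) instead of the data-dependent \(P(\mathrm{IC}|\mathbf{I})\).

```python
import numpy as np

p_ic = np.array([0.7, 0.3])                  # P(IC): two hypothetical imaging conditions
# P(Y=1 | I=i, IC=k) for two images under the two imaging conditions (made-up values)
p_y_given_i_ic = np.array([[0.9, 0.4],       # image 0
                           [0.2, 0.8]])      # image 1

def p_y_do_i(i):
    # backdoor adjustment: sum_k P(Y=1 | I=i, IC=k) * P(IC=k)
    return float(np.dot(p_y_given_i_ic[i], p_ic))

print(p_y_do_i(0), p_y_do_i(1))              # interventional class probabilities
```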
In the following sections, based on the solution above, we present the specific effective implementation to improve recognition performance with limited SAR data.
### _Interventional Augmentation and Discrimination_
Fig. 2 illustrates our approach. Initially, we augment data using transformations in image and frequency domains, simulating varied imaging conditions, a process akin to \(\mathrm{IC}\) accumulation in Eq. (2) [25]. Next, we apply a hybrid similarity measure for feature discrimination, calculating the effect of different conditions on a SAR image. Using the invariant risk minimization (IRM) concept, we provide a loss \(L_{d}\) to enable \(\mathbf{X}_{v}\) accumulation in Eq. (2). By minimizing \(L_{d}\), our CIATR models achieve precise recognition with limited SAR data. The details of the data augmentation with spatial-frequency domain hybrid transformation and the feature discrimination with hybrid similarity measurement are as follows.
The limited SAR training set is given as \(\mathbf{D}^{tr}=\{\mathbf{D}_{1}^{tr},\mathbf{D}_{2}^{tr},\ldots,\mathbf{D}_{C}^{tr}\}\), where \(\mathbf{D}_{i}^{tr}=\{\mathbf{x}_{i1},\ldots,\mathbf{x}_{in}\}\) represents the training samples of the \(i\)th class, \(\mathbf{x}_{ij}\in\mathbb{R}^{h\times w}\) is the \(j\)th sample in \(\mathbf{D}_{i}^{tr}\), and \(n\) is the number of samples per class.
As illustrated in Fig. 2, the spatial-frequency transformation augmentation comprises two aspects: random frequency mask and spatial transformation. Firstly, each image in \(\mathbf{D}^{tr}\), such as \(\mathbf{x}_{ij}\), undergoes a fast Fourier transform (FFT) to derive its spectrum. We then apply a random mask to maximize potential imaging condition estimations. The random resolution, \(rm_{re}\), denotes the smallest resolution unit in the augmentation, implying \(\mathbf{x}_{ij}\) is split into \(h\times w/(rm_{re})^{2}\) patches. The mask ratio, \(rm_{ra}\), indicates the number of zeroed patches, while the location, \(rm_{l}\), represents the index of these patches. Thus, the process of random frequency mask can be presented as \(\mathbf{x}_{ij}^{f}=RFM(fft(\mathbf{x}_{ij}),rm_{re},rm_{ra},rm_{l})\), where \(\mathbf{x}_{ij}^{f}\) is the masked version of \(\mathbf{x}_{ij}\), \(RFM(\cdot)\) is the operation of random frequency mask, and \(fft(\cdot)\) is the fast Fourier transform.
Then the inverse FFT is employed to convert \(\mathbf{x}_{ij}^{f}\) back into the spatial domain, and a random spatial transformation is applied using the complete transformation set \(TF=\{tf_{1},\ldots,tf_{Q}\}\), where \(tf_{i}\) is the \(i\)th transformation and \(Q\) is the number of available transformations. There are also two random variables: the random transformation combination \(q\), obtained by randomly sampling from \(TF\), and \(M_{q}\), a random value following a normal distribution. Thus, the process of random spatial transformation can be presented as \(\mathbf{x}_{ij}^{s}=RST(ifft(\mathbf{x}_{ij}^{f}),q,M_{q})\), where \(\mathbf{x}_{ij}^{s}\) is the final version of \(\mathbf{x}_{ij}\) after the augmentation, \(RST(\cdot)\) is the operation of random spatial transformation, and \(ifft(\cdot)\) is the inverse FFT.
Through the process outlined above, each sample in \(\mathbf{D}^{tr}\) is augmented with an additional random version to estimate the potential impact of various imaging conditions on SAR samples. It is worth noting that we introduce multiple randomness factors to estimate as many imaging conditions as possible, ensuring the comprehensiveness of stratifying \(\mathrm{IC}\).
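A minimal numpy sketch of this two-stage augmentation is given below; the patch handling, the choice of spatial transformations, and the parameter ranges are illustrative assumptions rather than the exact settings used in the paper.

```python
import numpy as np

def random_frequency_mask(img, patch=8, mask_ratio=0.1, rng=None):
    rng = rng or np.random.default_rng()
    spec = np.fft.fft2(img)                         # spectrum of the SAR image
    h, w = img.shape
    ph, pw = h // patch, w // patch                 # number of patches per axis
    idx = rng.choice(ph * pw, size=int(mask_ratio * ph * pw), replace=False)
    for k in idx:                                   # zero the randomly selected patches
        r, c = divmod(int(k), pw)
        spec[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return np.real(np.fft.ifft2(spec))              # back to the spatial domain

def random_spatial_transform(img, rng=None):
    rng = rng or np.random.default_rng()
    transforms = [np.fliplr, np.flipud,
                  lambda x: np.rot90(x, k=int(rng.integers(1, 4)))]
    return transforms[int(rng.integers(len(transforms)))](img)

def augment(img, rng=None):
    rng = rng or np.random.default_rng()
    return random_spatial_transform(random_frequency_mask(img, rng=rng), rng=rng)
```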
The augmented SAR training set can then be presented as \(\mathbf{D}_{a}^{tr}=\{\mathbf{D}_{1}^{tr},\mathbf{D}_{2}^{tr},\ldots,\mathbf{D}_{C}^{tr}\}\), with \(\mathbf{D}_{i}^{tr}=\{\mathbf{x}_{i1},\mathbf{x}_{i1}^{s},\ldots,\mathbf{x}_{in},\mathbf{x}_{in}^{s}\}\). Then, a feature discrimination method employs a structural measurement to enable the ATR model to capture effective local features. Concurrently, it uses a vector angle measurement to enhance the discriminability of the extracted features. The hybrid measurement aims to fulfill the summation over \(\mathbf{X}_{v}\) in Eq. (2). The process of feature discrimination consists of two parts: hybrid measurement and loss calculation, as shown in Fig. 2. For every pair of samples in \(\mathbf{D}_{a}^{tr}\), we calculate their hybrid measurement:
\[hm(\mathbf{x}_{ij},\mathbf{x}_{nm})=stm(\mathbf{x}_{ij},\mathbf{x}_{nm})+vam( \mathbf{x}_{ij},\mathbf{x}_{nm}) \tag{4}\]
where \(hm(x,y)\) is the hybrid measurement between \(x\) and \(y\), \(stm(x,y)\) is the structural measurement between \(x\) and \(y\), and \(vam(x,y)\) is the vector angle measurement. The structural similarity index measure (SSIM) is employed as \(stm(\cdot,\cdot)\), and the cosine similarity is employed as \(vam(\cdot,\cdot)\).
Then, if \(\mathbf{x}_{ij}\) and \(\mathbf{x}_{nm}\) belong to the same class, i.e., \(i=n\), \(hm(\mathbf{x}_{ij},\mathbf{x}_{nm})\) should be as large as possible. Conversely, if \(\mathbf{x}_{ij}\) and \(\mathbf{x}_{nm}\) do not belong to the same class, i.e., \(i\neq n\), \(hm(\mathbf{x}_{ij},\mathbf{x}_{nm})\) should be as small as possible. Thus, the discrimination loss \(L_{d}\) can be calculated based on the triplet loss. Besides, the cross-entropy loss \(L_{ce}\) is employed as the basic recognition loss. The final loss is the summation of \(L_{ce}\) and \(L_{d}\).
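A minimal sketch of the hybrid measurement in Eq. (4) and of a triplet-style discrimination loss built on it is shown below; the margin value and the use of raw image arrays rather than learned feature maps are illustrative assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def hybrid_measure(a, b):
    # structural measurement (SSIM) + vector angle measurement (cosine similarity)
    stm = ssim(a, b, data_range=float(max(a.max(), b.max()) - min(a.min(), b.min()) + 1e-12))
    fa, fb = a.ravel(), b.ravel()
    vam = float(np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12))
    return stm + vam

def discrimination_loss(anchor, positive, negative, margin=0.5):
    # push hm(anchor, positive) above hm(anchor, negative) by at least `margin`
    return max(0.0, margin - hybrid_measure(anchor, positive) + hybrid_measure(anchor, negative))
```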
Our CIATR first proposes an SCM to analyze the reason for the weak performance of ATR with limited SAR data and then provides a causal solution to pursue the true causality between \(\mathbf{I}\) and \(Y\) without the spurious correlation introduced by \(\mathrm{IC}\).
## III Experiments
In this section, we assess the effectiveness of our method using benchmark SAR image datasets, OpenSARship and MSTAR.
### _Dataset_
The OpenSARship dataset, gathered from 41 diverse Sentinel-1 images, facilitates the development of sophisticated ship detection and classification algorithms for challenging environments. This dataset comprises 11346 slices from 17 SAR ship types, integrated with reliable AIS information. Experiments utilize the GRD data, featuring a \(2.0\mathrm{m}\times 1.5\mathrm{m}\) resolution and a \(10\mathrm{m}\times 10\mathrm{m}\) pixel size in Sentinel-1 IW mode. Ship dimensions span from 92m to 399m in length and 6m to 65m in width.
The MSTAR dataset, a SAR ATR performance assessment standard, was launched by the Defense Advanced Research Project Agency and the Air Force Research Laboratory. Acquired via Sandia National Laboratory's STARLOS sensor, it includes X-band SAR images with 1-ft resolution across a 0\({}^{\circ}\) to 360\({}^{\circ}\) range.
### _Recognition Performances and Comparisons under OpenSARship and MSTAR_
In this section, the experiments on the OpenSARship and MSTAR datasets are presented.
#### Iv-B1 Recognition Performances and Comparisons of 3 and 6 classes under OpenSARship
The OpenSARship dataset comprises several ship categories that cover the most common and significant ship types, representing 90% of the international shipping market [22]. As per [26], experiments were conducted considering different numbers of classes: 3 and an extended set of 6. The 3-class experiment incorporates bulk carriers, container ships, and tankers, while the extended 6-class set also includes cargo ships, fishing vessels, and general cargo (Table I).
Tables II and III illustrate our method's superiority in 3-class and 6-class SAR ship image recognition tasks with training samples per class between 20 and 100. The 3-class recognition rate rises from 70.06% with 20 samples to 77.62% with 50, demonstrating effective use of additional samples. In 6-class recognition, performance ascends from 52.55% with 20 samples to 63.91% with 100.
In comparison with other methods (Table IV), our method excels. It yields rates of 70.06% and 74.81% when trained with 20 and 40 samples respectively, outperforming the Supervised method's 58.24% with 20 samples. Furthermore, our method surpasses Supervised, CNN, and CNN+Matrix recognition rates with 80 samples. Thus, within the 1-50 samples band, our method outmatches state-of-the-art techniques.
In conclusion, the results on the OpenSARship dataset illustrate the resilience and effectiveness of our method with limited samples, proving its capability to perform SAR ATR tasks under resource-constrained conditions.
#### Iv-B2 Recognition Performances and Comparisons under MSTAR
In this subsection, we discuss the recognition performance of our CIATR and compare it with other algorithms on the MSTAR dataset. In the K-shot setting, both our CIATR and the other methods randomly select K images from each class of the MSTAR dataset for training, as listed in Table V.
Table VI shows the recognition results when the number of training samples ranges from 5 to 100. Examining the recognition rate relative to the growth in sample size, we find that the rate increases significantly from 75.05% to 86.47% as samples double from 5 to 10. The rate continues to rise, reaching 96.70% with 25 samples and 97.32% with 40 samples, demonstrating the model's learning capability with more data. However, rate improvement slows down beyond 40 samples, only reaching 98.68% with 80 samples, indicating a learning saturation. Still, with 100 samples, the rate peaks at 98.89%, showing steady improvement. This analysis confirms the robustness of our method, even with limited samples, for SAR ATR.
Table VII presents the comparison with other state-of-the-art methods for SAR ATR with limited data. Comparatively, the recognition performance significantly decreases when the per-class image number reduces from all data to 20 samples. MGAN-CNN mildly improves performance under 20 and 40 samples, while Semisupervised enhances it under 20, 40, and 80 samples, utilizing self-consistent augmentation and training resources. Quantitatively, our CIATR outperforms others under any number of training images per class, particularly under limited SAR training samples.
From the recognition performances and comparisons above, the superiority and effectiveness of our method have been validated.
## IV Conclusion
In conclusion, we have proposed the causal interventional ATR method (CIATR) to address the challenges of SAR ATR with limited training data. Our approach leverages causal inference and backdoor adjustments to mitigate the effects of varying imaging conditions. The structural causal model (SCM) helps us understand the role of imaging conditions as confounders in introducing spurious correlations in ATR models when data is limited. By using data augmentation with spatial-frequency domain hybrid transformation and feature discrimination with hybrid similarity measurement, we effectively estimate and mitigate the impacts of imaging conditions on SAR image features. The CIATR method enables us to establish causal relationships between SAR images and their corresponding classes, even with limited data. Experimental results on the MSTAR and OpenSARShip datasets validate the effectiveness of our approach in improving SAR ATR performance under limited training conditions.
|
2305.19179 | Adaptive Quasi-Newton and Anderson Acceleration Framework with Explicit
Global (Accelerated) Convergence Rates | Despite the impressive numerical performance of the quasi-Newton and
Anderson/nonlinear acceleration methods, their global convergence rates have
remained elusive for over 50 years. This study addresses this long-standing
issue by introducing a framework that derives novel, adaptive quasi-Newton and
nonlinear/Anderson acceleration schemes. Under mild assumptions, the proposed
iterative methods exhibit explicit, non-asymptotic convergence rates that blend
those of the gradient descent and Cubic Regularized Newton's methods. The
proposed approach also includes an accelerated version for convex functions.
Notably, these rates are achieved adaptively without prior knowledge of the
function's parameters. The framework presented in this study is generic, and
its special cases includes algorithms such as Newton's method with random
subspaces, finite-differences, or lazy Hessian. Numerical experiments
demonstrated the efficiency of the proposed framework, even compared to the
l-BFGS algorithm with Wolfe line-search. | Damien Scieur | 2023-05-30T16:22:17Z | http://arxiv.org/abs/2305.19179v2 | Adaptive Quasi-Newton and Anderson Acceleration Framework with Explicit Global (Accelerated) Convergence Rates
###### Abstract
Despite the impressive numerical performance of quasi-Newton and Anderson/nonlinear acceleration methods, their global convergence rates have remained elusive for over 50 years. This paper addresses this long-standing question by introducing a framework that derives novel and adaptive quasi-Newton or nonlinear/Anderson acceleration schemes. Under mild assumptions, the proposed iterative methods exhibit explicit, non-asymptotic convergence rates that blend those of gradient descent and Cubic Regularized Newton's method. Notably, these rates are achieved adaptively, as the method autonomously determines the optimal step size using a simple backtracking strategy. The proposed approach also includes an accelerated version that improves the convergence rate on convex functions. Numerical experiments demonstrate the efficiency of the proposed framework, even compared to a fine-tuned BFGS algorithm with line search.
## 1 Introduction
Consider the problem of finding the minimizer \(x^{\star}\) of the unconstrained minimization problem
\[f(x^{\star})=f^{\star}=\min_{x\in\mathbb{R}^{d}}f(x),\]
where \(d\) is the problem's dimension, and the function \(f\) has a Lipschitz continuous Hessian.
**Assumption 1**.: _The function \(f(x)\) has a Lipschitz continuous Hessian with a constant \(L\),_
\[\forall\;\;y,\,z\in\mathbb{R}^{d},\quad\|\nabla^{2}f(z)-\nabla^{2}f(y)\|\leq L \|z-y\|. \tag{1}\]
In this paper, \(\|.\|\) stands for the maximal singular value of a matrix and for the \(\ell_{2}\) norm for a vector. Many twice-differentiable problems like logistic or least-squares regression satisfy Assumption 1.
The Lipschitz continuity of the Hessian is crucial when analyzing second-order algorithms, as it extends the concept of smoothness to the second order. The groundbreaking work by Nesterov et al. [46] has sparked a renewed interest in second-order methods, revealing the remarkable convergence rate improvement of Newton's method on problems satisfying Assumption 1 when augmented with cubic regularization. For instance, if the problem is also convex, accelerated gradient descent typically achieves \(O(\frac{1}{t^{2}})\), while accelerated second-order methods achieve \(O(\frac{1}{t^{3}})\). Recent advancements have further pushed the boundaries, achieving even faster convergence rates of up to \(\mathcal{O}(\frac{1}{t^{7/2}})\) through the utilization of hybrid methods [43; 14] or direct acceleration of second-order methods [44; 27; 40].
Unfortunately, second-order methods may not always be feasible, particularly in high-dimensional problems common in machine learning. The limitation is that exact second-order methods require
solving a linear system that involves the Hessian of the function \(f\). This main limitation motivated alternative approaches that balance the efficiency of second-order methods and the scalability of first-order methods, such as _inexact/subspace/stochastic techniques_, _nonlinear/Anderson acceleration_, and _quasi-Newton_ methods.
### Contributions
Despite the impressive numerical performance of quasi-Newton methods and nonlinear acceleration schemes, there is currently no knowledge about their global explicit convergence rates. In fact, global convergence cannot be guaranteed without using either exact or Wolfe-line search techniques. This raises the following long-standing question **that has remained unanswered for over 50 years**:
_What are the non-asymptotic global convergence rates of quasi-Newton_
_and Anderson/nonlinear acceleration methods?_
This paper provides a partial answer by introducing generic updates (see algorithms 1 to 3) that can be viewed as cubic-regularized quasi-Newton methods or regularized nonlinear acceleration schemes.
Under mild assumptions, the iterative methods constructed within the proposed framework (see algorithms 3 and 6) exhibit _explicit, global and non-asymptotic_ convergence rates that interpolate the one of first order and second order methods (more details in appendix A):
* Convergence rate on non-convex problems (Theorem 4): \(\min_{i}\|\nabla f(x_{i})\|\leq O(t^{-\frac{2}{3}}+t^{-\frac{1}{3}})\),
* Convergence rate on (star-)convex problems (Theorems 5 and 6): \(f(x_{t})-f^{\star}\leq O(t^{-2}+t^{-1})\),
* Accelerated rate on convex problems (Theorem 7): \(f(x_{t})-f^{\star}\leq O(t^{-3}+t^{-2})\).
### Related work
**Inexact, subspace, and stochastic methods.** Instead of explicitly computing the Hessian matrix and Newton's step, these methods compute an approximation using sampling [2], inexact Hessian computation [29, 19], or random subspaces [20, 31, 35]. By adopting a low-rank approximation for the Hessian, these approaches substantially reduce per-iteration costs without significantly compromising the convergence rate. The convergence speed in such cases often represents an interpolation between the rates observed in gradient descent methods and (cubic) Newton's method.
**Nonlinear/Anderson acceleration.** Nonlinear acceleration techniques, including Anderson acceleration [1], have a long-standing history [3, 4, 28]. Driven by their promising empirical performance, their convergence analysis has recently attracted growing interest [64, 26, 63, 38, 69, 67, 72, 71, 56, 65, 66, 6, 60, 8, 57]. In essence, Anderson acceleration is an optimization technique that enhances convergence by extrapolating a sequence of iterates using a combination of previous gradients and corresponding iterates. Comprehensive reviews and analyses of these techniques can be found in notable sources such as [38, 7, 37, 36, 5, 17]. However, these methods do not generalize well outside quadratic minimization and their convergence rate can only be guaranteed asymptotically when using a line-search or regularization techniques [62, 68, 56].
**Quasi-Newton methods.** Quasi-Newton schemes are renowned for their exceptional efficiency in continuous optimization. These methods replace the exact Hessian matrix (or its inverse) in Newton's step with an approximation that is updated iteratively during the method's execution. The most widely used algorithms in this category include DFP [18, 25] and BFGS [61, 30, 24, 10, 9]. Most of the existing convergence results predominantly focus on the asymptotic super-linear rate of convergence [70, 32, 12, 11, 15, 22, 75, 73, 74]. However, recent research on quasi-Newton updates has unveiled explicit and non-asymptotic rates of convergence [50, 52, 51, 41, 42]. Nonetheless, these analyses suffer from several significant drawbacks, such as assuming an infinite memory size and/or requiring access to the Hessian matrix. These limitations fundamentally undermine the essence of quasi-Newton methods, which are typically designed to be Hessian-free and maintain low per-iteration cost through their low-memory requirement and low-rank structure.
Recently, Kamzolov et al. [39] introduced an adaptive regularization technique combined with cubic regularization, with global, explicit (accelerated) convergence rates for any quasi-Newton method.
The method incorporates a backtracking line search on the secant inexactness inequality that introduces a quadratic regularization. However, this algorithm relies on prior knowledge of the Lipschitz constant specified in Assumption 1. Unfortunately, the paper does not provide an adaptive method to jointly find the Lipschitz constant as well, as it is _a priori_ too costly to know which parameter to update. This aspect makes the method impractical in real-world scenarios.
**Paper Organization.** **Section 2** introduces the proposed novel generic updates and some essential theoretical results. **Section 3** presents the convergence analysis of the iterative algorithm, which uses one of the proposed updates. **Section 4** is dedicated to the accelerated version of the proposed framework. **Section 5** presents examples of methods generated by the proposed framework.
## 2 Type-I and Type-II Step
This section first examines a remarkable property shared by quasi-Newton and Anderson acceleration: the sequence of iterates of these methods can be expressed as a combination of _directions_ formed by previous iterates and the current gradient. Building upon this observation, section 2.1 investigates how to obtain second-order information without directly computing the Hessian of the function \(f\) by _approximating_ the Hessian within the subspace formed by these directions. Subsequently, section 2.2 demonstrates how to utilize this approximation to establish an _upper bound_ for the function \(f\) and its gradient norm \(\|\nabla f(x)\|\). Minimizing these upper bounds, respectively, leads to a type-I and type-II method.
**Motivation: what do quasi-Newton and nonlinear acceleration schemes actually do?** The BFGS update is a widely used quasi-Newton method for unconstrained optimization. It approximates the inverse Hessian matrix using updates based on previous gradients and iterates. The update reads
\[x_{t+1}=x_{t}-h_{t}H_{t}\nabla f(x_{t}),\quad H_{t}=H_{t-1}\left(I-\frac{g_{t}d_{t}^{T}}{g_{t}^{T}d_{t}}\right)+d_{t}\left(d_{t}^{T}\frac{d_{t}^{T}g_{t}+g_{t}^{T}H_{t-1}d_{t}}{(g_{t}^{T}d_{t})^{2}}-\frac{g_{t}^{T}H_{t-1}}{g_{t}^{T}d_{t}}\right)\]
where \(H_{t}\) is the approximation of the inverse Hessian at iteration \(t\), \(h_{t}\) is the step size, \(d_{t}=x_{t}-x_{t-1}\) is the step direction, \(g_{t}=\nabla f(x_{t})-\nabla f(x_{t-1})\) is the gradient difference. After unfolding the equation, the BFGS update can be seen as a combination of the \(d_{i}\)'s and \(\nabla f(x_{t})\),
\[x_{t+1}-x_{t}=H_{0}P_{0}\ldots P_{t}\nabla f(x_{t})+\sum_{i=1}^{t}\alpha_{i}d _{i}, \tag{2}\]
where \(P_{i}\) are projection matrices in \(\mathbb{R}^{d\times d}\) and \(\alpha_{i}\) are coefficients. Similar reasoning can be applied to other quasi-Newton formulas (see appendix B for more details).
This observation aligns with the principles of Anderson acceleration methods. Considering the same vectors \(d_{t}\) and \(g_{t}\), Anderson acceleration updates \(x_{t+1}\) as:
\[\alpha^{\star}=\operatorname*{arg\,min}_{\alpha}\|\nabla f(x_{t})+\sum_{i=0}^{t-1}\alpha_{i}g_{i}\|,\quad x_{t+1}-x_{t}=\sum_{i=0}^{t-1}\alpha_{i}^{\star}\left(d_{i}-h_{t}g_{i}\right),\]
where \(h_{t}\) is the relaxation parameter, which can be seen as the step size of the method. As all \(x_{i}\)'s belong to the span of previous gradients, the update is similar to (2), see appendix B for more details. This is not surprising, as it has been shown that Anderson acceleration can be viewed as a quasi-Newton method [23]. Some studies have explored the relationship between these two classes of optimization techniques and established strong connections in terms of their algorithmic behavior [23; 76; 59; 13].
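For concreteness, the following numpy sketch implements one Anderson acceleration step in its standard mixing form (equivalent to the compact difference form above up to the bookkeeping of the coefficients); it is an illustration under our own notation, not the implementation analyzed in this paper.

```python
import numpy as np

def anderson_step(xs, grads, h_t=1.0):
    """xs, grads : lists of previous iterates x_i and gradients grad f(x_i)."""
    X = np.column_stack(xs)                     # d x m matrix of iterates
    F = np.column_stack(grads)                  # d x m matrix of gradients
    m = X.shape[1]
    # minimize || F c ||  subject to  sum(c) = 1, solved through its KKT system
    A = np.block([[F.T @ F + 1e-10 * np.eye(m), np.ones((m, 1))],
                  [np.ones((1, m)),             np.zeros((1, 1))]])
    rhs = np.concatenate([np.zeros(m), [1.0]])
    c = np.linalg.solve(A, rhs)[:m]
    # mix previous iterates and gradients with the relaxation parameter h_t
    return (X - h_t * F) @ c                    # x_{t+1}
```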
Hence, quasi-Newton algorithms and nonlinear/Anderson acceleration methods utilize previous directions \(d_{i}\) and the current gradient \(\nabla f(x_{t})\) in subsequent iterations. However, their convergence is guaranteed only if a line search is used, and their convergence speed is heavily dependent on \(H_{0}\) (quasi-Newton) or \(h_{t}\) (Anderson acceleration) [49].
### Error Bounds on the Hessian-Vector Product Approximation by a Difference of Gradients
Consider the following \(d\times N\) matrices that represent the _algorithm's memory_,
\[Y=[y_{1},\ldots,y_{N}],\quad Z=[z_{1},\ldots,z_{N}],\quad D=Y-Z,\quad G=[ \ldots,\nabla f(y_{i})-\nabla f(z_{i}),\ldots]. \tag{3}\]
For example, to mimic quasi-Newton techniques, the matrices \(Y\) and \(Z\) can be defined such that,
\[D=[\ldots,x_{t-i+1}-x_{t-i},\ldots],\quad G=[\ldots,\nabla f(x_{t-i+1})-\nabla f( x_{t-i}),\ldots],\ \ i=1\ldots N.\]
Motivated by (2), this paper studies the following update, defined as a linear combination of the previous directions \(d_{i}\),
\[x_{+}-x=D\alpha\quad\text{where}\quad\alpha\in\mathbb{R}^{N}. \tag{4}\]
The objective is to determine the optimal coefficients \(\alpha\) based on the information contained in the matrices defined in (3). Notably, the absence of the gradient in the update (4) distinguishes this approach from (2), allowing for the development of an adaptive method that eliminates the need for an initial matrix \(H_{0}\) (quasi-Newton methods) or a mixing parameter \(h_{t}\) (Anderson acceleration).
Under assumption (1), the following bounds hold for all \(x,y,z,x_{+}\in\mathbb{R}^{d}\)[46],
\[\|\nabla f(y)-\nabla f(z)-\nabla^{2}f(z)(y-z)\|\leq\tfrac{L}{2}\|y-z\|^{2}, \tag{5}\] \[\left|f(x_{+})-f(x)-\nabla f(x)^{T}(x_{+}-x)-\tfrac{1}{2}(x_{+}-x)^{T}\nabla^{2}f(x)(x_{+}-x)\right|\leq\tfrac{L}{6}\|x_{+}-x\|^{3}. \tag{6}\]
The accuracy of the estimation of the matrix \(\nabla^{2}f(x)\), depends on the _error vector_\(\varepsilon\),
\[\varepsilon\stackrel{{\text{def}}}{{=}}[\varepsilon_{1},\ldots, \varepsilon_{N}],\quad\text{and}\quad\varepsilon_{i}\stackrel{{ \text{def}}}{{=}}\|d_{i}\|\left(\|d_{i}\|+2\|z_{i}-x\|\right). \tag{7}\]
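The following numpy sketch illustrates how the memory matrices \(D\), \(G\) of (3) and the error vector \(\varepsilon\) of (7) could be assembled when the pairs \((y_{i},z_{i})\) are consecutive iterates, as in the quasi-Newton-style example above; the ordering of the columns is an illustrative choice, not the paper's convention.

```python
import numpy as np

def build_memory(iterates, gradients, x):
    """iterates, gradients : lists [x_0, ..., x_t] and [grad f(x_0), ..., grad f(x_t)]."""
    # columns d_i = y_i - z_i with (y_i, z_i) = (x_{i+1}, x_i)
    D = np.column_stack([iterates[i + 1] - iterates[i] for i in range(len(iterates) - 1)])
    G = np.column_stack([gradients[i + 1] - gradients[i] for i in range(len(gradients) - 1)])
    zs = iterates[:-1]
    # error vector of (7): eps_i = ||d_i|| (||d_i|| + 2 ||z_i - x||)
    eps = np.array([np.linalg.norm(D[:, i]) *
                    (np.linalg.norm(D[:, i]) + 2.0 * np.linalg.norm(zs[i] - x))
                    for i in range(D.shape[1])])
    return D, G, eps
```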
The following Theorem 1 explicitly bounds the error of approximating \(\nabla^{2}f(x)D\) by \(G\).
**Theorem 1**.: _Let the function \(f\) satisfy Assumption 1. Let \(x_{+}\) be defined as in (4) and the matrices \(D,\,G\) be defined as in (3) and vector \(\varepsilon\) as in (7). Then, for all \(w\in\mathbb{R}^{d}\) and \(\alpha\in\mathbb{R}^{N}\),_
\[-\tfrac{L\|w\|}{2}\sum_{i=1}^{N}|\alpha_{i}|\varepsilon_{i} \leq w^{T}(\nabla^{2}f(x)D-G)\alpha\leq\tfrac{L\|w\|}{2}\sum_{i=1}^{N}| \alpha_{i}|\varepsilon_{i}, \tag{8}\] \[\|w^{T}(\nabla^{2}f(x)D-G)\|\leq\tfrac{L\|w\|}{2}\|\varepsilon\|. \tag{9}\]
**Proof sketch and interpretation.** The theorem states that the Hessian-vector product \(\nabla^{2}f(x)(y-z)\) can be approximated by the difference of gradients \(\nabla f(y)-\nabla f(z)\), providing a cost-effective approach to estimate \(\nabla^{2}f\) without computing it. This property is the basis of quasi-Newton methods. The detailed proof can be found in appendix F. The main idea of the proof is as follows. From (5) with \(y=y_{i}\) and \(z=z_{i}\), writing \(d_{i}=y_{i}-z_{i}\), and Assumption 1,
\[\|\nabla f(y_{i})-\nabla f(z_{i})-\nabla^{2}f(x)(y_{i}-z_{i})\|\leq\frac{L}{2}\|d_{i}\|^{2}+\|\nabla^{2}f(x)-\nabla^{2}f(z_{i})\|\|d_{i}\|\leq\frac{L}{2}\varepsilon_{i}.\]
The _first_ term in \(\varepsilon_{i}\) bounds the error of (5), while the _second_ comes from the distance between (5) and the current point \(x\) where the Hessian is estimated. Then, it suffices to combine the inequalities with coefficients \(\alpha\) to obtain Theorem 1.
### Type I and Type II Inequalities and Methods
In the literature, Type-I methods often refer to algorithms that aim to minimize the function value \(f(x)\), while type-II methods minimize the gradient norm \(\|\nabla f(x)\|\)[23; 76; 13]. Applying the bounds (6) and (5) to the update in (4) yields the following Type-I and Type-II upper bounds, respectively.
**Theorem 2**.: _Let the function \(f\) satisfy Assumption 1. Let \(x_{+}\) be defined as in (4), the matrices \(D,\,G\) be defined as in (3) and \(\varepsilon\) be defined as in (7). Then, for all \(\alpha\in\mathbb{R}^{N}\),_
\[f(x_{+})\leq f(x)+\nabla f(x)^{T}D\alpha+\tfrac{\alpha^{T}H\alpha}{2}+\tfrac{L\|D\alpha\|^{3}}{6},\quad H\stackrel{{\text{def}}}{{=}}\tfrac{G^{T}D+D^{T}G+\mathbf{I}_{N}L\|D\|\|\varepsilon\|}{2} \tag{10}\] \[\|\nabla f(x_{+})\|\leq\|\nabla f(x)+G\alpha\|+\tfrac{L}{2}\Big{(}\sum_{i=1}^{N}|\alpha_{i}|\varepsilon_{i}+\|D\alpha\|^{2}\Big{)}, \tag{11}\]
The proof can be found in appendix F. Minimizing eqs. (10) and (11) leads to algorithms 1 and 2, respectively, whose constant \(L\) is replaced by a parameter \(M\), found by backtracking line-search. A study of the (strong) link between these proposed algorithms and nonlinear/Anderson acceleration and quasi-Newton methods can be found in appendix B.
**Solving the sub-problems.** In algorithms 1 and 2, the coefficients \(\alpha\) are computed by solving a minimization sub-problem in \(O(N^{3}+Nd)\) (see appendix C for more details). Usually, \(N\) is rather small (e.g. between \(5\) and \(100\)); hence solving the subproblem is negligible compared to computing a new gradient \(\nabla f(x)\). Here is the summary:
* **In algorithm 1**, the subproblem can be solved easily by a convex problem in two variables, which involves an eigenvalue decomposition of the matrix \(H\in\mathbb{R}^{N\times N}\)[46].
* **In algorithm 2**, the subproblem can be cast into a linear-quadratic problem of \(O(N)\) variables and constraints that can be solved efficiently with SDP solvers (e.g., SDPT3).
```
1:First-order oracle for \(f\), matrices \(G,\ D\), vector \(\varepsilon\), iterate \(x\), initial smoothness \(M_{0}\).
2:Initialize \(M\leftarrow\frac{M_{0}}{2}\)
3:do
4:\(M\gets 2M\) and \(H\leftarrow\frac{G^{T}D+D^{T}G}{2}+\text{I}_{N}\frac{M\|D\|\|\varepsilon\|}{2}\)
5:\(\alpha^{\star}\leftarrow\arg\min_{\alpha}f(x)+\nabla f(x)^{T}D\alpha+\frac{1} {2}\alpha^{T}H\alpha+\frac{M\|D\alpha\|^{3}}{6}\)
6:\(x_{+}\gets x+D\alpha^{\star}\)
7:while \(f(x_{+})\geq f(x)+\nabla f(x)^{T}D\alpha^{\star}+\frac{1}{2}[\alpha^{\star}]^{T}H\alpha^{\star}+\frac{M\|D\alpha^{\star}\|^{3}}{6}\)
8:return \(x_{+}\), \(M\)
```
**Algorithm 2** Type-II Subroutine with Backtracking Line-search
## 3 Iterative Type-I Method: Framework and Rates of Convergences
The rest of the paper analyzes the convergence rate of methods that use algorithm 1 as a subroutine; see algorithm 3. The analysis of methods that use algorithm 2 is left for future work.
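For illustration, a minimal Python sketch of the Type-I subroutine (algorithm 1) is given below. The inner cubic-regularized subproblem in \(\alpha\) is solved with a generic numerical optimizer instead of the eigenvalue-based procedure the paper refers to; this simplification is ours, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def type1_subroutine(f, grad_f, x, D, G, eps, M0):
    """Backtracking on M: double M until the cubic model upper-bounds f at the new point."""
    g = grad_f(x)
    M = M0 / 2.0
    while True:
        M *= 2.0
        H = (G.T @ D + D.T @ G) / 2.0 \
            + np.eye(D.shape[1]) * (M * np.linalg.norm(D, 2) * np.linalg.norm(eps) / 2.0)

        def model(alpha):
            step = D @ alpha
            return f(x) + g @ step + 0.5 * alpha @ H @ alpha + M * np.linalg.norm(step) ** 3 / 6.0

        alpha = minimize(model, np.zeros(D.shape[1]), method="Nelder-Mead").x
        x_plus = x + D @ alpha
        if f(x_plus) < model(alpha):            # backtracking test of line 7
            return x_plus, M
```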
### Main Assumptions and Design Requirements
This section lists the important assumptions on the function \(f\). Some subsequent results require an upper bound on the radius of the sub-level set of \(f\) at \(f(x_{0})\).
**Assumption 2**.: _The radius of the sub-level set \(\{x:f(x)\leq f(x_{0})\}\) is bounded by \(\mathrm{R}<\infty\)._
To ensure the convergence toward \(f(x^{\star})\), some results require \(f\) to be star-convex or convex.
**Assumption 3**.: _The function \(f\) is star convex if, for all \(x\in\mathbb{R}^{d}\) and \(\forall\tau\in[0,1]\),_
\[f((1-\tau)x+\tau x^{\star})\leq(1-\tau)f(x)+\tau f(x^{\star}).\]
**Assumption 4**.: _The function \(f\) is convex if, for all \(y,\ z\in\mathbb{R}^{d}\), \(f(y)\geq f(z)+\nabla f(z)(y-z)\)._
The matrices \(Y,\ Z,\ D\) must meet some conditions listed below as "requirements" (see section 5 for details). All convergence results rely on _one_ of these conditions on the projector onto \(\mathbf{span}(D)\),
\[P_{t}\stackrel{{\text{\tiny def}}}{{=}}D_{t}(D_{t}^{T}D_{t})^{-1}D _{t}^{T}. \tag{12}\]
**Requirement 1a**.: _For all \(t\), the projector \(P_{t}\) of the stochastic matrix \(D_{t}\) satisfies \(\mathbb{E}[P_{t}]=\frac{N}{d}\textbf{I}\)._
**Requirement 1b**.: _For all \(t\), the projector \(P_{t}\) satisfies \(P_{t}\nabla f(x_{t})=\nabla f(x_{t})\)._
The first condition guarantees that, in expectation, the matrix \(D_{t}\) spans partially the gradient \(\nabla f(x_{t})\), since \(\mathbb{E}[P_{t}\nabla f(x_{t})]=\frac{N}{d}\nabla f(x_{t})\). The second condition simply requires the possibility to move towards the current gradient when taking the step \(x+D\alpha\). This condition resonates with the idea presented in (2), where the step \(x_{+}-x\) combines previous directions and the current gradient \(\nabla f(x_{t})\).
In addition, it is required that the norm of \(\|\varepsilon\|\) does not grow too quickly, hence the next requirement.
**Require:** First-order oracle \(f\), initial iterate and smoothness \(x_{0},\ M_{0}\), number of iterations \(T\).
```
for\(t=0,\ \ldots,\ T-1\)do Update \(G_{t}\), \(D_{t}\), \(\varepsilon_{t}\) (see section 5). \(x_{t+1},M_{t+1}\leftarrow\) [algorithm 1]\((f,G_{t},D_{t},\varepsilon_{t},x_{t},(M_{t}/2))\) endfor return\(x_{T}\)
```
**Algorithm 3** Generic Iterative Type-I Methods
**Requireer 2**.: _For all \(t\), the relative error \(\frac{\|\varepsilon_{t}\|}{\|D_{t}\|}\) is bounded by \(\delta\)._
Requirement 2 is also non-restrictive, as it simply prevents the secant equations from being taken at points that are too far apart, i.e., with \(y_{i}-z_{i}\) and \(z_{i}-x_{i}\) too large. Most of the time, \(\delta\) satisfies \(\delta\leq O(R)\).
Finally, the condition number of the matrix \(D\) also has to be bounded.
**Requirement 3**.: _For all \(t\), the matrix \(D_{t}\) is full-column rank, which implies that \(D_{t}^{T}D_{t}\) is invertible. In addition, its condition number \(\kappa_{D_{t}}\stackrel{{\text{\tiny def}}}{{=}}\sqrt{\|D_{t}^{T }D_{t}\|\|(D_{t}^{T}D_{t})^{-1}\|}\) is bounded by \(\kappa\)._
The condition on the rank of \(D\) is not overly restrictive and is typically satisfied in practice. The bound on the condition number, however, might be harder to meet; section 5 studies strategies that prevent \(\kappa_{D}\) from exploding by taking orthogonal directions or pruning \(D\), as sketched below.
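A minimal sketch of the pruning option (names and the column ordering are our assumptions; the online strategies of section 5 store the most recent direction first, so the oldest column is dropped last):

```python
import numpy as np


def prune_for_conditioning(D, G, kappa_max=1e9):
    """Drop the oldest stored direction (and its matching gradient difference)
    until the condition number of D is below kappa_max, one way to enforce
    Requirement 3."""
    while D.shape[1] > 1 and np.linalg.cond(D) > kappa_max:
        D, G = D[:, :-1], G[:, :-1]   # last column assumed to be the oldest
    return D, G
```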
### Rates of Convergence
When \(f\) satisfies Assumption 1, algorithm 3 ensures a minimal function decrease at each step.
**Theorem 3**.: _Let \(f\) satisfy Assumption 1. Then, at each iteration \(t\geq 0\), algorithm 3 achieves_
\[f(x_{t+1})\leq f(x_{t})-\tfrac{M_{t+1}}{12}\|x_{t+1}-x_{t}\|^{3},\quad M_{t+1} <\max\left\{2L\ ;\ \tfrac{M_{0}}{2^{p}}\right\}. \tag{13}\]
Under some mild assumptions, algorithm 3 converges to a critical point for non-convex functions.
**Theorem 4**.: _Let \(f\) satisfy Assumption 1, and assume that \(f\) is bounded below by \(f^{*}\). Let Requirements 1b to 3 hold, and \(M_{t}\geq M_{\min}\). Then, algorithm 3 starting at \(x_{0}\) with \(M_{0}\) achieves_
\[\min_{i=1,\ \ldots,\ t}\|\nabla f(x_{i})\|\leq\max\left\{\frac{3L}{ t^{2/3}}\left(12\frac{f(x_{0})-f^{\star}}{M_{\min}}\right)^{2/3}\ ;\ \left(\frac{C_{1}}{t^{1/3}}\right)\left(12\frac{f(x_{0})-f^{\star}}{M_{\min}} \right)^{1/3}\right\},\] \[\text{where}\ \ C_{1}=\delta L\left(\tfrac{\kappa+2\kappa^{2}}{2} \right)+\max_{i\in[0,t]}\|(I-P_{i})\nabla^{2}f(x_{i})P_{i}\|.\]
Going further, algorithm 3 converges to an optimum when the function is star-convex.
**Theorem 5**.: _Assume \(f\) satisfy Assumptions 1 to 3. Let Requirements 1b to 3 hold. Then, algorithm 3 starting at \(x_{0}\) with \(M_{0}\) achieves, for \(t\geq 1\),_
\[(f(x_{t})-f^{\star})\leq 6\frac{f(x_{0})-f^{\star}}{t(t+1)(t+2)}+\frac{1}{(t+1)(t+2)}\frac{L(3R)^{3}}{2}+\frac{1}{t+2}\frac{C_{2}(3R)^{2}}{4},\] \[\text{where}\quad C_{2}\stackrel{{\text{\tiny def}}}{{=}}\delta L\tfrac{\kappa+2\kappa^{2}}{2}+\max_{i\in[0,t]}\|\nabla^{2}f(x_{i})-P_{i}\nabla^{2}f(x_{i})P_{i}\|.\]
Finally, the next theorem shows that when algorithm 3 uses a stochastic \(D\) that satisfies Requirement 1a, then \(f(x_{t})\) also converges in expectation to \(f(x^{\star})\) when \(f\) is convex.
**Theorem 6**.: _Assume \(f\) satisfy Assumptions 1, 2 and 4. Let Requirements 1a, 2 and 3 hold. Then, in expectation over the matrices \(D_{i}\), algorithm 3 starting at \(x_{0}\) with \(M_{0}\) achieves, for \(t\geq 1\),_
\[\mathbb{E}_{D_{t}}[f(x_{t})-f^{\star}]\leq\frac{1}{1+\frac{1}{4} \left[\tfrac{N}{d}t\right]^{3}}(f(x_{0})-f^{\star})+\frac{1}{\left[\tfrac{N}{ d}t\right]^{2}}\frac{L(3R)^{3}}{2}+\frac{1}{\left[\tfrac{N}{d}t\right]}\frac{C_{3}(3R)^{2}}{2},\] \[\text{where}\quad C_{3}\stackrel{{\text{\tiny def}}} {{=}}\delta L\tfrac{\kappa+2\kappa^{2}}{2}+\tfrac{(d-N)}{d}\max_{i\in[0,t]} \|\nabla^{2}f(x_{i})\|.\]
**Interpretation.** The rates presented in Theorems 4 to 6 combine the ones of cubic regularized Newton's method and gradient descent (or coordinate descent, as in Theorem 6) for functions with Lipschitz-continuous Hessian. As \(C_{1},C_{2}\), and \(C_{3}\) decrease, the rates approach those of cubic Newton.
The constants \(C_{1}\), \(C_{2}\), and \(C_{3}\) decompose the error of approximating \(D^{T}\nabla^{2}f(x)D\) by \(H\) in (10) into two terms. The first represents the error made by approximating \(\nabla^{2}f(x)D\) by \(G\), while the second describes the low-rank approximation of \(\nabla^{2}f(x)\) in the subspace spanned by the columns of \(D\). The approximation is more explicit in \(C_{3}\), where increasing \(N\) reduces the constant up to \(N=d\).
To retrieve the convergence rate of Newton's method with cubic regularization, the approximation needs to satisfy three properties: **1)** the points contained in \(Y_{t}\) and \(Z_{t}\) must be close to each other, and to \(x_{t}\) to reduce \(\delta\) and \(\|\varepsilon\|\); **2)** the condition number of \(D\) should be close to 1 to reduce \(\kappa\); **3)**\(D\) should span a maximum dimension in \(\mathbb{R}^{d}\) to improve the approximation of \(\nabla^{2}f(x)\) by \(P\nabla^{2}f(x)P\).
For example, \(Z_{t}=x_{t}\mathbf{1}_{N}^{T}\), \(D_{t}=h\mathbf{I}_{N}\) with \(h\) small, and \(Y_{t}=Z_{t}+D_{t}\) achieve these conditions. This (naive) strategy estimates all directional second derivatives with a finite difference for all coordinates and is equivalent to performing a Newton's step in terms of complexity; a small sketch of this strategy follows.
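The sketch below (with illustrative names) builds \(D\) and \(G\) for this naive choice; with \(N=d\) it amounts to a full finite-difference Hessian estimate, which is exactly why its per-iteration cost matches that of a Newton step.

```python
import numpy as np


def naive_secant_matrices(grad_f, x, h=1e-9):
    """Naive choice: Z = [x, ..., x], D = h*I, Y = Z + D.  Each column of G is a
    finite-difference estimate of one Hessian-vector product (one coordinate
    direction), costing d extra gradient evaluations."""
    d = x.size
    D = h * np.eye(d)
    g0 = grad_f(x)
    G = np.column_stack([grad_f(x + D[:, i]) - g0 for i in range(d)])
    return D, G
```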
```
Require: First-order oracle \(f\), matrices \(G\), \(D\), vector \(\varepsilon\), iterate \(x\), smoothness \(M_{0}\), minimal norm \(\Delta\)
Initialize \(M\leftarrow\frac{M_{0}}{2}\), \(\gamma\leftarrow\frac{1}{4}\frac{\|\varepsilon\|}{\|D\|}\left(1+\kappa_{D}^{2}\right)\), \(\mathtt{ExitFlag}\leftarrow\mathtt{False}\)
while \(\mathtt{ExitFlag}\) is False do
    Update \(M\) and \(H\leftarrow\frac{G^{T}D+D^{T}G}{2}+\mathbf{I}_{N}\frac{M\|D\|\|\varepsilon\|}{2}\)
    \(\alpha^{\star}\leftarrow\arg\min_{\alpha}f(x)+\nabla f(x)^{T}D\alpha+\frac{1}{2}\alpha^{T}H\alpha+\frac{M\|D\alpha\|^{3}}{6}\)
    \(x_{+}\gets x+D\alpha\)
    if \(-\nabla f(x_{+})^{T}D\alpha\geq\frac{\|\nabla f(x_{+})\|^{3/2}}{\sqrt{\frac{3}{4}}}\) and \(\|D\alpha\|\geq\Delta\) then \(\mathtt{ExitFlag}\leftarrow\mathtt{LargeStep}\)
    if \(-\nabla f(x_{+})^{T}D\alpha\geq\frac{\|\nabla f(x_{+})\|^{2}}{M\left(\gamma+\frac{1}{2}\alpha^{T}\right)}\) then \(\mathtt{ExitFlag}\leftarrow\mathtt{SmallStep}\)
endwhile
return \(x_{+}\), \(\alpha\), \(M\), \(\gamma\), \(\mathtt{ExitFlag}\)
```
**Algorithm 4** Type-I subroutine with backtracking for the accelerated method
## 4 Accelerated Algorithm for Convex Functions
This section introduces algorithm 5, an accelerated variant of algorithm 3 for convex functions, designed using the estimate sequence technique from [44]. It consists in iteratively building a function
\(\Phi_{t}(x)\), a regularized lower bound on \(f\), that reads
\[\Phi_{t}(x)=\tfrac{1}{\sum_{i=0}^{t}b_{i}}\left(\sum_{i=0}^{t}b_{i}\left(f(x_{i}) +\nabla f(x_{i})(x-x_{i})\right)+\lambda_{t}^{(1)}\tfrac{\|x-x_{0}\|^{2}}{2}+ \lambda_{t}^{(2)}\tfrac{\|x-x_{0}\|^{3}}{6}\right),\]
where \(\lambda_{t}^{(1,2)}\) are non-decreasing. The key aspects of acceleration are as follows (see section 4 for more details): **1)** The accelerated algorithm takes a step at a linear combination of \(v_{t}\), the optimum of \(\Phi_{t}\), and the previous iterate \(x_{t}\). **2)** It uses a modified version of algorithm 1, see algorithm 4. **3)** Under some conditions, the step size can be considered as "large", i.e., similar to a cubic-Newton step. The parameter \(\Delta>0\) guarantees that steps are sufficiently large for the theoretical convergence, but setting \(\Delta=0\) does not seem to impact the numerical convergence. The presence of both small and large steps is crucial to obtain the theoretical rate of convergence.
**Theorem 7**.: _Assume \(f\) satisfy Assumptions 1, 2 and 4. Let Requirements 1b to 3 hold. Then, algorithm 5 starting at \(x_{0}\) with \(M_{0}\) achieves, for all \(\Delta>0\) and for \(t\geq 1\),_
\[f(x_{t})-f^{\star}\leq\frac{(M_{0})_{\max}^{2}}{L}\left(\frac{3R}{t+3}\right)^{2}+\frac{4(M_{0})_{\max}}{3\sqrt{3}}\max\left\{1\;;\;\tfrac{2}{\Delta}\right\}\left(\frac{3R}{t+3}\right)^{3}+\frac{\frac{\tilde{\lambda}^{(1)}R^{2}}{2}+\frac{\tilde{\lambda}^{(2)}R^{3}}{6}}{(t+1)^{3}}.\] \[\text{where }\;\tilde{\lambda}^{(1)}=0.5\cdot\delta\left(L\kappa+M_{1}\kappa^{2}\right)+\|\nabla^{2}f(x_{0})-P_{0}\nabla^{2}f(x_{0})P_{0}\|,\qquad\tilde{\lambda}^{(2)}=M_{1}+L,\] \[(M_{0})_{\max}=\tfrac{L}{2}(2\Delta+(2\kappa^{2}+\kappa)\delta)+(2\sqrt{3}-1)\max_{0\leq i\leq t}\|(I-P_{i})\nabla^{2}f(x_{i})P_{i}\|.\]
**Interpretation.** The interpretation is similar to the one from Section 3. Ignoring \(\tilde{\lambda}^{(1,2)}\), the rate of Theorem 7 combines the one of accelerated gradient and accelerated cubic Newton [45, 44]. The constant \((M_{0})_{\max}\) blends the Lipschitz constant of the Hessian \(L\) with its approximation errors \((2\kappa^{2}+\kappa)\delta\) and \(\|(I-P)\nabla^{2}f(x)\|\). The better the Hessian is approximated, the smaller the constant.
## 5 Some update strategies for matrices \(Y,\,Z,\,D,\,G\)
The framework presented in this paper is characterized by its generality, requiring only minimal assumptions on the matrix \(D\) and vector \(\varepsilon\). This section explores different strategies for updating the matrices from (3), which can be classified into two categories: _online_ and _batch techniques_.
**Recommended method.** Among all the methods presented in this section, the most promising technique seems to be the _Orthogonal Forward Estimates Only_, as it ensures that the condition number \(\kappa_{D}=1\) and the norm of the error vector \(\|\varepsilon\|\) is small.
### Online Techniques
The online technique updates the matrix \(D\) while algorithms 3 and 5 are running. To achieve Requirement 1b, the method employs either a steepest or orthogonal forward estimate, defined as
\[x_{t+\frac{1}{2}}=x_{t}-h\nabla f(x_{t})\quad\text{(steepest)}\quad\text{or} \quad x_{t+\frac{1}{2}}=x_{t}-h(I-P_{t-1})\frac{\nabla f(x_{t})}{\|\nabla f(x _{t})\|}\quad\text{(orthogonal)}.\]
Then, it includes \(x_{t+\frac{1}{2}}-x_{t}\) in the matrix \(D_{t}\). The projector \(P_{t-1}\) is defined in (12), and parameter \(h\) can be a fixed small value (e.g., \(h=10^{-9}\)). This section investigates three different strategies for storing past information: _Iterates only_, _Forward Estimates Only_, and _Greedy_, listed below.
\[Y_{t} =[x_{t+\frac{1}{2}},x_{t},x_{t-1},\dots,x_{t-N+1}],\quad Z_{t}=[x_ {t},x_{t-1},\dots,x_{t-N}]\] (Iterates only) \[Y_{t} =[x_{t+\frac{1}{2}},x_{t-\frac{1}{2}},\dots,x_{t-N+\frac{1}{2}}], \quad Z_{t}=[x_{t},x_{t-1},\dots,x_{t-N}]\] (Forward Estimates Only) \[Y_{t} =[x_{t+\frac{1}{2}},x_{t},x_{t-\frac{1}{2}},\dots,x_{t-\frac{N+1} {2}}],\quad Z_{t}=[x_{t},x_{t-\frac{1}{2}},\dots,x_{t-\frac{N}{2}}]\] (Greedy)
**Iterates only:** In the case of quasi-Newton updates and Nonlinear/Anderson acceleration, the iterates are constructed using the equation \(x_{t+1}-x_{t}\in\nabla f(x_{t})+\textbf{span}\{x_{t-i+1}-x_{t-i}\}_{i=1\dots N}\). The update draws inspiration from this observation. However, it does not provide control over the condition number of \(D_{t}\) or the norm \(\|\varepsilon\|\). To address this, one can either accept a potentially high condition number or remove the oldest points in \(D\) and \(G\) until the condition number is bounded (e.g., \(\kappa=10^{9}\)).
**Forward Estimates Only:** This method provides more control over the iterates added to \(Y\) and \(Z\). Using the _orthogonal_ technique to compute \(x_{i+\frac{1}{2}}\) reduces the constants in Theorems 4, 5 and 7: the condition number of \(D\) is equal to 1 as \(D^{T}D=h^{2}I\), and the norm of \(\varepsilon\) is small (\(\|\varepsilon\|\leq O(h)\)); see the sketch after the _Greedy_ strategy below.
**Greedy:** The greedy approach involves storing both the iterates and the forward approximations. It shares the same drawback as the _Iterates only_ strategy but retains at least the most recent information about the Hessian-vector product approximation, thereby reducing the \(\|z_{i}-x_{i}\|\) term in \(\varepsilon\) (7).
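A minimal sketch of the forward-estimate step used by these strategies, in its orthogonal variant (helper names are ours; the returned direction is the candidate new column of \(D\)):

```python
import numpy as np


def orthogonal_forward_estimate(grad_f, x, D_prev, h=1e-9):
    """Forward estimate x_{t+1/2}: a step of size h along the part of the
    normalized gradient orthogonal to span(D_prev)."""
    g = grad_f(x)
    g = g / np.linalg.norm(g)
    if D_prev is not None and D_prev.size > 0:
        P = D_prev @ np.linalg.solve(D_prev.T @ D_prev, D_prev.T)
        g = g - P @ g
    x_half = x - h * g
    return x_half, x_half - x
```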
### Batch Techniques
Instead of making individual updates, an alternative approach is to compute them collectively, centered on \(x_{t}\). This technique generates a matrix \(D_{t}\) consisting of \(N\) orthogonal directions \(d_{1},\cdots,d_{N}\) of norm \(h\). The corresponding \(Y_{t},Z_{t},G_{t}\) matrices are then defined as follows:
\[Y_{t}=[x_{t}+d_{1},\ldots,x_{t}+d_{N}],\quad Z_{t}=[x_{t},\ldots,x_{t}],\quad G_{t}=[\ldots,\nabla f(x_{t}+d_{i})-\nabla f(x_{t}),\ldots].\]
This section explores two batch techniques that generate orthogonal directions: _Orthogonalization_ and _Random Subspace_. Both lead to \(\delta=3h\) and \(\kappa=1\) in Requirements 2 and 3. However, they require \(N\) additional gradient computations at each iteration (instead of one for the online techniques). For clarity, in the experiments, only the Greedy version is considered.
**Orthogonalization:** This technique involves using any online technique discussed in the previous section and storing the directions in a matrix \(\tilde{D}_{t}\). Then, it constructs the matrices \(D_{t}\) by performing an orthogonalization procedure on \(\tilde{D}_{t}\), such as the QR algorithm. This approach provides Hessian estimates in relevant directions, which can be more beneficial than random ones.
**Random Subspace:** Inspired by [35], this technique randomly generates \(D_{t}\) at each iteration by either taking \(D_{t}\) to be \(N\) random (rescaled) canonical vectors or by using the \(Q\) matrix from the QR decomposition of a random \(N\times D\) matrix. This ensures that \(D_{t}\) satisfies Requirement 1a. For clarity, in the experiments, only the QR version is considered.
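As an illustration of the QR variant, here is a short sketch (names are ours; it assumes \(N\leq d\) and costs one extra gradient evaluation per direction):

```python
import numpy as np


def random_subspace_batch(grad_f, x, N, h=1e-9, rng=None):
    """Random-subspace batch update: N orthogonal directions of norm h from the
    QR factorization of a random Gaussian matrix, plus the matching gradient
    differences G."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    Q, _ = np.linalg.qr(rng.standard_normal((d, N)))
    D = h * Q                      # orthogonal columns, each of norm h
    g0 = grad_f(x)
    G = np.column_stack([grad_f(x + D[:, i]) - g0 for i in range(N)])
    return D, G
```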
## 6 Numerical Experiments
This section compares the methods generated by this paper's framework to the fine-tuned \(\ell\)-BFGS algorithm from minFunc[53]. More experiments are conducted in appendix E. The tested methods are the Type-I iterative algorithms (algorithm 3 with the techniques from section 5). The step size of the forward estimation was set to \(h=10^{-9}\), and the condition number \(\kappa_{D_{t}}\) is maintained below \(\kappa=10^{9}\) with the iterates only and Greedy techniques. The accelerated algorithm 6 is used only with the _Forward Estimates Only_ technique. The compared methods are evaluated on a logistic regression problem with no regularization on the Madelon UCI dataset [33]. The results are shown in fig. 1.
Regarding the number of iterations, the greedy orthogonalized version outperforms the others due to the orthogonality of directions (resulting in a condition number of one) and the meaningfulness of previous gradients/iterates. However, in terms of gradient oracle calls, the recommended method, _orthogonal forward estimates only_, achieves the best performance by striking a balance between the cost per iteration (only two gradients per iteration) and efficiency (small and orthogonal directions, reducing theoretical constants). Surprisingly, the accelerated method's performance is suboptimal, possibly because it tightens the theoretical analysis, diminishing its inherent adaptivity.
## 7 Conclusion, Limitation, and Future work
This paper introduces a generic framework for developing novel quasi-Newton and Anderson/Nonlinear acceleration schemes, offering a global convergence rate in various scenarios, including accelerated convergence on convex functions, with minimal assumptions and design requirements.
One limitation of the current approach is requiring an additional gradient step for the _forward estimate_, as discussed in Section 5. However, this forward estimate is crucial in enabling the algorithm's adaptivity, eliminating the need to initialize a matrix \(H_{0}\) (quasi-Newton) or employ a mixing parameter \(h_{0}\) (Anderson acceleration).
A direction for future research is to show that, although unsuitable for large-scale problems, the methods presented in this paper can achieve super-linear convergence rates: with infinite memory, they would be as fast as cubic Newton methods. Utilizing the average-case analysis framework from existing literature, such as [48, 58, 21, 16, 47], could also improve the constants in Theorems 4 and 5 to match those in Theorem 6. Furthermore, exploring convergence rates for type-II methods, which are believed to be effective for variational inequalities, is a worthwhile direction.
Ultimately, the results presented in this paper open new avenues for research. It may also provide a potential foundation for investigating additional properties of existing quasi-Newton methods and may even lead to the discovery of convergence rates for an adaptive, cubic-regularized BFGS variant.
Figure 1: Comparison between the type-1 methods proposed in this paper and the optimized implementation of \(\ell\)-BFGS from minFunc[53] with default parameters, except for the memory size. All methods use a memory size of \(N=25\). |
2308.08088 | Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme
Detection | Hateful meme detection is a challenging multimodal task that requires
comprehension of both vision and language, as well as cross-modal interactions.
Recent studies have tried to fine-tune pre-trained vision-language models
(PVLMs) for this task. However, with increasing model sizes, it becomes
important to leverage powerful PVLMs more efficiently, rather than simply
fine-tuning them. Recently, researchers have attempted to convert meme images
into textual captions and prompt language models for predictions. This approach
has shown good performance but suffers from non-informative image captions.
Considering the two factors mentioned above, we propose a probing-based
captioning approach to leverage PVLMs in a zero-shot visual question answering
(VQA) manner. Specifically, we prompt a frozen PVLM by asking hateful
content-related questions and use the answers as image captions (which we call
Pro-Cap), so that the captions contain information critical for hateful content
detection. The good performance of models with Pro-Cap on three benchmarks
validates the effectiveness and generalization of the proposed method. | Rui Cao, Ming Shan Hee, Adriel Kuek, Wen-Haw Chong, Roy Ka-Wei Lee, Jing Jiang | 2023-08-16T01:38:49Z | http://arxiv.org/abs/2308.08088v1 | # Pro-Cap: Leveraging a Frozen Vision-Language Model
###### Abstract.
Hateful meme detection is a challenging multimodal task that requires comprehension of both vision and language, as well as cross-modal interactions. Recent studies have tried to fine-tune pre-trained vision-language models (PVLMs) for this task. However, with increasing model sizes, it becomes important to leverage powerful PVLMs more efficiently, rather than simply fine-tuning them. Recently, researchers have attempted to convert meme images into textual captions and prompt language models for predictions. This approach has shown good performance but suffers from non-informative image captions. Considering the two factors mentioned above, we propose a probing-based captioning approach to leverage PVLMs in a zero-shot visual question answering (VQA) manner. Specifically, we prompt a frozen PVLM by asking hateful content-related questions and use the answers as image captions (which we call Pro-Cap), so that the captions contain information critical for hateful content detection. The good performance of models with Pro-Cap on three benchmarks validates the effectiveness and generalization of the proposed method.1
memes, multimodal, semantic extraction
data available from these datasets. With the development of Pre-trained Vision-Language Models (PVLMs) such as VisualBERT [18] and ViLBERT [23], recent work leverage these powerful PVLMs to facilitate the hateful meme detection task. A common approach is to fine-tune PVLMs with task-specific data [9; 20; 26; 34; 37]. However, it is less feasible to fine-tune the larger models such as BLIP-2 [15] and Flamingo [1] on meme detection because there are billions of trainable parameters. Therefore, computationally feasible solutions other than direct fine-tuning are needed to leverage large PVLMs in facilitating hateful meme detection.
Different from the approach above using PVLMs, PromptHate[2] is a recently proposed model that converts the multimodal meme detection task into a unimodal masked language modeling task. It first generates meme image captions with an off-the-shelf image caption generator, ClipCap[25]. By converting all input information into text, it can prompt a pre-trained language model along with two demonstrative examples to predict whether or not the input is hateful by leveraging the rich background knowledge in the language model. Although PromptHate achieves state-of-the-art performance, it is significantly affected by the quality of image captions, as shown in Table 1. Image captions that are merely generic descriptions of images may omit crucial details [14; 37], such as the race and gender of people, which are essential for hateful content detection. But with additional image tags, such as entities found in the images and demographic information about the people in the images, the same model can be significantly improved, as shown in Table 1. However, generating these additional image tags is laborious and costly. For instance, entity extraction is usually conducted with the Google Vision Web Entity Detection API 2, which is a paid service. Ideally, we would like to find a more affordable way to obtain entity and demographic information from the images that is critical for hateful content detection.
Footnote 2: [https://cloud.google.com/vision/docs/detecting-web](https://cloud.google.com/vision/docs/detecting-web)
Both above-mentioned approaches (i.e., one using PVLMs and the other converting the task to a unimodal task) have their pros and cons. In this paper, we combine the ideas from these two approaches and design a hateful meme detection method that leverages the power of a frozen PVLM to complement the unimodal approach of PromptHate. Specifically, we use a set of "probing" questions to query a PVLM (BLIP-2 [15] in our experiments) for information related to common vulnerable targets in hateful content. The answers obtained from the probing questions will be treated as image captions (denoted as **Pro-Cap**) and used as input to a trainable hateful meme detection model. Figure 1 illustrates the overall workflow of the method. We refer to the step of using probing questions to generate the captions as _probing-based captioning_.
Our proposed method fills existing research gaps by: 1) leveraging a PVLM without any adaptation or fine-tuning, thereby reducing computational cost; 2) utilizing the frozen PVLM to generate captions that contain information useful for hateful meme detection, instead of explicitly obtaining additional image tags with costly APIs. To the best of our knowledge, this is the first work to leverage PVLMs in a zero-shot manner through question answering to assist in the hateful meme detection task. To further validate our method, we test the effect of the generated Pro-Cap on both PromptHate[2] and a BERT-based[4] hateful meme detection model.
Based on the experimental results, we observe that PromptHate with Pro-Cap (denoted as Pro-CapPromptHate) significantly surpasses the original PromptHate without additional image tags (i.e., about 4, 6, and 3 percentage points of absolute performance improvement on FHM [12], MAMI [5], and HarM [28] respectively). Pro-CapPromptHate also achieves comparable results with PromptHate with additional image tags, indicating that probing-based captioning can be a more affordable way of obtaining image entities or demographic information. Case studies further show that Pro-Cap offers essential image details for hateful content detection, enhancing the explainability of models to some extent. Meanwhile, Pro-Capbert clearly surpasses multimodal BERT-based models of similar sizes (i.e., about 7 percentage points of absolute improvement with VisualBERT on FHM [12]), proving the generalization of the probing-based captioning method.
## 2. Related Work
_Memes_, typically intended to be humorous or sarcastic, are increasingly being exploited for the proliferation of hateful content, leading to the challenging task of online hateful meme detection [5; 12; 27]. To combat the spread of hateful memes, one line of work regards the hateful meme detection as a multimodal classification task. Researchers have applied pre-trained vision-language models (PVLMs) and fine-tune them based on meme detection data [20; 26; 34; 37]. To improve performance, some have tried model ensembling [20; 26; 34]. Another line of work considers combining pre-trained models (e.g., BERT [4] and CLIP [29]) with task-specific model architectures and tunes them end-to-end [13; 14; 28]. Recently, authors in [2] have tried converting all meme information into text and prompting language models to better leverage the contextual background knowledge present in language models. This approach achieves the state-of-the-art results on two hateful meme detection benchmarks. However, it adopts a generic method for describing the image through image captioning, often ignoring important factors necessary for hateful meme detection. In this work, we seek to address this issue through probe-based captioning by prompting pre-trained vision-language models with hateful content-centric questions in a zero-shot VQA manner.
| **Model** | **AUC** | **Acc.** |
| --- | --- | --- |
| PromptHate (w/o) | 76.76 | 67.28 |
| PromptHate | 81.45 | 72.98 |
| VisualBERT (w/o) | 68.71 | 61.48 |
| VisualBERT | 72.56 | 68.24 |
| ViLBERT (w/o) | 73.05 | 64.70 |
| ViLBERT | 75.72 | 68.24 |

Table 1. Impact on detection performance on the FHM dataset [12] from image captions. (w/o) denotes models without additional entity and demographic information.
## 3. Preliminary
We formally define our task and briefly review the use of pre-trained vision-language models (PVLMs) for zero-shot visual question answering (VQA). At the end of the section, we provide a brief introduction to the specific PVLM utilized in our work.
Given a meme image \(\mathcal{I}\) and a piece of accompanying meme text \(\mathcal{T}\), the model predicts whether the meme is hateful or not. Specifically, the model predicts scores \(\mathbf{s}\in\mathbb{R}^{2}\) over the label space, where \(s_{0}\) is a score indicating how likely the meme is _non-hateful_, whereas \(s_{1}\) is a score for the meme being _hateful_. If \(s_{0}>s_{1}\), the model classifies the meme as non-hateful; otherwise, the meme is classified as hateful. Our proposed method (to be presented in detail in Section 4) uses zero-shot VQA to generate relevant captions to assist with hateful meme detection. To perform zero-shot VQA, we assume that there is a PVLM capable of processing an image and a textual prompt formatted as _Question: [QUESTION] Answer:_, where [QUESTION] is a placeholder for the question. The PVLM then generates a sequence of tokens as the answer to the question. For example, given an image showing an Asian woman and the prompt _Question: What is the race of the person in the image? Answer:_, the PVLM may generate the answer _Asian_.
In this work, we use the recently released BLIP-2 model (Luo et al., 2019) as the PVLM, as it has demonstrated good performance in zero-shot VQA. The BLIP-2 model is composed of a frozen pre-trained image encoder, a frozen pre-trained language model, and a lightweight Querying Transformer, which is responsible for bridging the modality gap. It is worth noting that the BLIP-2 model can be replaced with any other PVLM that is capable of zero-shot VQA.
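For concreteness, below is a minimal sketch of this zero-shot VQA prompting with a HuggingFace BLIP-2 checkpoint. The checkpoint identifier, generation settings, and helper name are our assumptions (the paper only specifies the FlanT5-XL version of BLIP-2); the prompt string follows the format described above.

```python
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

MODEL_ID = "Salesforce/blip2-flan-t5-xl"   # assumed checkpoint for the FlanT5-XL version
processor = Blip2Processor.from_pretrained(MODEL_ID)
model = Blip2ForConditionalGeneration.from_pretrained(MODEL_ID)


def zero_shot_vqa(image: Image.Image, question: str) -> str:
    """Prompt the frozen PVLM with 'Question: [QUESTION] Answer:' and decode the answer."""
    prompt = f"Question: {question} Answer:"
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
```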
## 4. Proposed Method
### Overview
Recall that the key idea of our method is to elicit image details that are critical for hateful content detection, such as the gender and race of the people in the image. Because these details are not always included in automatically generated image captions, we propose relying on VQA to obtain such critical information, where the questions are carefully curated to elicit demographic and other relevant information. We opt to use zero-shot VQA because (1) for the intended type of questions, we do not have any VQA training data to train our own model, and (2) recent work has demonstrated promising performance of zero-shot VQA.
Specifically, we prompt the PVLM with \(K\)_probing questions_ and regard the set of \(K\) answers from the PVLM as image captions, which we refer to as **Pro-Cap**. We then combine the original text \(\mathcal{T}\) with Pro-Cap as input to a hateful meme detection model. We experiment with two alternative hateful meme detection models: one based on BERT encoding, and the other based on PromptHate, a recently proposed prompting-based hateful meme detection model.
In the rest of this section, we first present the details of how we design our VQA questions to elicit the most critical details of an image for hateful meme detection. We then explain how the generated Pro-Cap is used by two alternative hateful meme detection models.
### Design of VQA Questions
We leverage PVLMs for zero-shot VQA to generate Pro-Cap as image captions. We want Pro-Cap to provide not only a general description of the image but also details critical for hateful meme detection. To obtain a general caption of the image, we design the first probing question to inquire about the generic content of the image, as shown in Table 2. However, such generic captions may be insufficient for hateful meme detection as hateful content usually targets persons or groups with specific characteristics, such as race, gender, or religion (Bartaglia et al., 2016; Luo et al., 2019). Additionally, previous studies have shown that augmenting image representations with entities found in the image or demographic information of people in the image significantly aids hateful meme detection (Song et al., 2019; Wang et al., 2019). Such details may be missing in generic image captions. Therefore, we design additional questions that aim to bring out information central to hateful content. This aligns the generated image captions more closely with the goal of hateful meme detection. Specifically, the high-level idea is to ask questions about common vulnerable targets of hateful content. Inspired by (Song et al., 2019), which categorizes the targets of hateful memes into _Religion, Race, Gender, Nationality_, and Disability_, we ask questions about these five types of targets. For example, to generate image captions that indicate the race of the people in an image, we can ask the following question: _what is the race of persons in the image?_ We list the five questions designed for these five types of targets in Table 2. Additionally, we observe that some animals, such as pigs, are often depicted in hateful memes, frequently as a means to annoy Muslims. With this consideration, we also design a question asking about the presence of animals in the image.
In (Bartaglia et al., 2016), the author claimed that PVLMs may hallucinate non-existent objects. For example, even when there is nobody in an image, PVLMs may generate an answer about race in response to the question _what is the race of the person in the image?_. To prevent such misleading information, we use two validation questions. Specifically, we inquire about the existence of persons and animals. Only when the PVLM responds that a person or an animal exists will we include in the Pro-Cap the answers to those person-related or animal-related questions. For instance, if the answer to the question validating the existence of people indicates that nobody is present, we will ignore all answers from questions asking about _religion, race, gender, nationality_, and _disability_.
We use \(\mathcal{C}\) to represent the concatenation of the answers to the probing questions that are finally included as part of the Pro-Cap based on the validation results. We will then concatenate \(\mathcal{T}\) and \(\mathcal{C}\) together as input to a purely text-based hateful meme classification model, as shown at the bottom of Figure 1.
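A sketch of how the validated answers could be assembled into \(\mathcal{C}\) is given below. The question wordings are illustrative paraphrases rather than the exact entries of Table 2, the joining format is an assumption, and `vqa` stands for a zero-shot VQA helper such as the one sketched at the end of Section 3.

```python
def build_pro_cap(image, vqa):
    """Assemble Pro-Cap from probing-question answers; person- and animal-related
    answers are kept only if the corresponding validation question says they exist."""
    captions = [vqa(image, "What is shown in the image?")]   # generic content caption

    if "yes" in vqa(image, "Are there people in the image?").lower():
        for q in ("What is the race of the person in the image?",
                  "What is the gender of the person in the image?",
                  "What country is the person in the image from?",
                  "What is the religion of the person in the image?",
                  "Is the person in the image disabled?"):
            captions.append(vqa(image, q))

    if "yes" in vqa(image, "Are there animals in the image?").lower():
        captions.append(vqa(image, "What animal is in the image?"))

    return " . ".join(captions)
```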
### BERT-based Detection Model
We now introduce the first of the two alternative hateful meme classification models, which is based on BERT (Bartaglia et al., 2016). We first feed the concatenation of the meme text \(\mathcal{T}\) and the Pro-Cap \(\mathcal{C}\) into the BERT model to generate a vector \(\mathbf{r}\in\mathbb{R}^{d}\):
\[\mathbf{r}=\text{BERT}([\mathcal{T},\mathcal{C}]), \tag{1}\]
where \([\cdot,\cdot]\) represents concatenation. Next, we feed the sentence representation \(\mathbf{r}\) into a linear layer for hateful meme classification:
\[\mathbf{s}=\text{Sigmoid}(\mathbf{W}^{\text{T}}\mathbf{r}+\mathbf{b}), \tag{2}\]
where \(\mathbf{W}\in\mathbb{R}^{d\times 2}\) and \(\mathbf{b}\in\mathbb{R}^{2}\) are learnable parameters.
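A minimal PyTorch sketch of Eqs. (1)-(2) follows. The checkpoint name, the use of the pooled output as the sentence representation, and the class name are our assumptions; the dropout of 0.4 on the classification layer matches the implementation details in Section 5.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class ProCapBert(nn.Module):
    """Encode [meme text ; Pro-Cap] with BERT (Eq. 1) and map the sentence
    representation r to two scores s with a linear layer (Eq. 2)."""

    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(name)
        self.bert = BertModel.from_pretrained(name)
        self.classifier = nn.Sequential(
            nn.Dropout(0.4),                                  # dropout from Section 5
            nn.Linear(self.bert.config.hidden_size, 2))

    def forward(self, meme_texts, pro_caps):
        enc = self.tokenizer(meme_texts, pro_caps, padding=True,
                             truncation=True, return_tensors="pt")
        r = self.bert(**enc).pooler_output                    # sentence representation r
        return torch.sigmoid(self.classifier(r))              # s = Sigmoid(W^T r + b)
```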
### PromptHate for Hateful Meme Detection
Next, we introduce the second hateful meme classification model, PromptHate (Beng et al., 2017), which employs a prompt-based method to classify memes. PromptHate was developed to better leverage contextual background knowledge by prompting language models. Given a test meme, PromptHate first uses an image captioning model to obtain generic image captions. It then concatenates the meme text, the image captions, and a prompt template into \(\mathcal{S}\): _It was_ [MASK]., to prompt a language model (LM) to predict whether the meme is hateful. Specifically, it compares the probability of the language model predicting [MASK] to be a positive word (e.g., _good_) given the context, versus the probability of predicting a negative word (e.g., _bad_). The approach also includes one positive and one negative example in the context, and [MASK] will be replaced by their respective label words. An overview of PromptHate is shown in Figure 2. For further details, please refer to (Beng et al., 2017).
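A simplified sketch of this prompting step is shown below, using the label words _good_ and _bad_ mentioned above. The RoBERTa checkpoint is an assumption on our part, and the demonstration examples and exact input layout of PromptHate are omitted, so this is only an illustration of the masked-label-word comparison.

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-large")       # assumed checkpoint
mlm = RobertaForMaskedLM.from_pretrained("roberta-large")


def prompt_predict(meme_text: str, caption: str) -> bool:
    """Fill 'It was <mask>.' and compare the probabilities of the negative vs.
    positive label word; returns True if the meme is predicted hateful."""
    text = f"{meme_text} {caption} It was {tok.mask_token}."
    enc = tok(text, return_tensors="pt", truncation=True)
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        probs = mlm(**enc).logits[0, mask_pos].softmax(-1)
    p_bad = probs[tok.convert_tokens_to_ids(tok.tokenize(" bad"))[0]]
    p_good = probs[tok.convert_tokens_to_ids(tok.tokenize(" good"))[0]]
    return bool(p_bad > p_good)
```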
In (Beng et al., 2017), PromptHate utilizes ClipCap (Zhu et al., 2018) to generate image captions. In this work, we replace this with Pro-Cap \(\mathcal{C}\). We then represent every meme \(\mathcal{O}\) as \(\mathcal{O}=[\mathcal{T},\mathcal{C},\mathcal{S}]\). With these inputs, the language models (LMs), for instance, RoBERTa (R
memes related to COVID-19, which are classified into three categories: _harmless_, _partially harmful_, and _very harmful_. We merge _partially harmful_ and _very harmful_ into one category. Because hateful content is always regarded as harmful, we use this dataset to test the generalization capability of our proposed method from hateful meme detection to harmful meme detection.
**Evaluation Metrics.** Hateful meme detection is a binary classification task. In addition to detection accuracy, we also compute the Area Under the Receiver Operating Characteristics curve (AUCROC) used in prior work (Beng et al., 2017; Chen et al., 2017; Li et al., 2018; Wang et al., 2018). We conduct experiments with **ten** random seeds and report the average performance and standard deviation. All models use the same set of random seeds.
**Implementation Details.** Given a meme image, we first detect the meme text with the open-source Easy-OCR tool 3 and then in-paint over the detected texts. To generate the answers to VQA questions, we prompt BLIP-2 (Li et al., 2018), specifically the FlanT5XL version. We then insert the generated image captions into two text-based hateful meme detection models, i.e., the BERT-based model and the PromptHate model. For the BERT-based model, to avoid overfitting, we add a dropout rate of \(0.4\) to the classification layer. We use a learning rate of \(2e-5\) and a batch size of \(64\). For PromptHate, we train the model with a batch size of \(16\) and empirically set the learning rate to \(1.3e-5\) on FHM and \(1e-5\) on the other two datasets (Chen et al., 2017). We optimize both models with the AdamW optimizer (Kingmae and Ba, 2014) and implement them in PyTorch. Due to space limit, we provide more details (i.e., computation costs and model sizes) in Appendix A.
Footnote 3: [https://github.com/JaidedAI/EasyOCR](https://github.com/JaidedAI/EasyOCR)
### Baselines
We compare our method against both unimodal and multimodal models to demonstrate the effectiveness of the proposed method, where we regard models receiving information from one modality (i.e., the meme text or the meme image only) as unimodal models. Note that because Pro-Cap already contains image information, even if Pro-Cap is input into a unimodal BERT, the model is not considered to be unimodal.
For the unimodal models, we consider a text-only and an image-only model. For the text-only model, we fine-tune a pre-trained BERT model (Chen et al., 2017) based on the meme text only for meme classification, which we represent as **Text-BERT**. For the image-only model, we first extract object-level image features with an off-the-shelf feature extractor, Faster-RCNN (Wang et al., 2018), which is trained for object detection. We then perform average pooling over object features and feed the resulting vector into a classification layer. We use **Image-Region** to denote the image-only model.
For multimodal models, we categorize them into two groups: 1) fine-tuning generic multimodal models that are proposed to conduct different multimodal tasks; 2) models specifically designed for hateful meme detection. For the first type of multimodal models, we firstly consider the **MMBT-Region** model (Chen et al., 2017), which is a widely used multimodal baseline in hateful meme detection (Beng et al., 2017; Li et al., 2018; Wang et al., 2018) and the model has not been pre-trained with multimodal data. Secondly, we consider several multimodal pre-trained models, such as VisualBERT (Wang et al., 2018) pre-trained on MS-COCO (Wang et al., 2018) (**VisualBERT COCO**) and ViLBERT pre-trained on Conceptual Captions (Wang et al., 2018) (**ViBERT CC**). Some recently released powerful pre-trained models are also included such as the _Align before Fusion_ model (Chen et al., 2017) (**ALBEF**) and the _Bootstrapping Language-Image Pre-training_ model (Li et al., 2018) (**BLIP**). For the second category of baselines which are designed for the meme detection task, we consider the models listed below. The **CLIP-BERT** model (Wang et al., 2018) leverages the CLIP model (Wang et al., 2018) to deal with noisy meme images, uses pre-trained BERT (Chen et al., 2017) for representing meme text, and fuses them with concatenation. The **MOMETA** model (Wang et al., 2018) designed both local and global multimodal fusion mechanisms to exploit multimodal interactions for hateful meme detection. Note that the MOMENTA model is designed to leverage augmented image tags (the detected image entities). **DisMultiHate**(Li et al., 2018) disentangles target information from memes as targets are essential for identifying hateful content. The **PromptHate** model (Beng et al., 2017) is what we discussed in Section 4.4.
### Experiment Results
As discussed earlier, previous work has shown that additional image tags can enhance hateful meme detection. We therefore consider two settings for comparison: 1) without any augmented image tags; 2) with augmented image tags. We display the performance of models **without** augmented image tags in Table 4 and **with** augmented image tags in Table 5. The standard deviations (\(\pm\)) of ten random seed runs are also reported, and the best results are highlighted in bold.
**Without augmented image tags:** We first compare Pro-Capbert with unimodal and multimodal models that also utilize BERT as the text encoder (i.e., VisualBERT, ViLBERT, and MMBT-Region). Evidently, Text BERT, which utilizes only meme text, is substantially outperformed by Pro-CapBERT. This suggests that 1) visual signals are vital for hateful meme detection, and 2) the image captions obtained from the probing questions are informative.
Experiment results from multimodal pre-trained BERT-based models are presented in the second block of Table 4. Interestingly, Pro-Capbert still has better performances in all three datasets, surpassing the most powerful multimodal pre-trained BERT-base model, ViLBERT, by over 4% on FHM and surpassing MMBT-Region by about 3% on HarM. This is despite the fact that BERT has less model parameters compared with these multimodal models (e.g. ViLBERT has 252.1M parameters while BERT only has about 110M parameters). Pro-CapBERT is still competitive against models specifically designed for hateful meme detection (i.e., models in the third block of Table 4). We provide experimental results of recently published multimodal pre-trained models (i.e., BLIP and ALBEF) in the fourth block. By comparing the simple Pro-Capbert with these models, we observe that Pro-Capbert gives comparable results.
| **Dataset** | **Train: #Hate.** | **Train: #Non-hate.** | **Test: #Hate.** | **Test: #Non-hate.** |
| --- | --- | --- | --- | --- |
| FHM | 3,050 | 5,450 | 250 | 250 |
| HarM | 1,064 | 1,949 | 124 | 230 |
| MAMI | 5,000 | 5,000 | 500 | 500 |

Table 3. Statistical distributions of the datasets used for evaluation.
While Pro-CapBERT does not outperform ALBEF and BLIP all the time, its performance is reasonably good given that, in terms of trainable parameters, Pro-CapBERT is three times smaller than these two pre-trained models. Meanwhile, Pro-CapBERT shows even better results than the two models on HarM. Notably, HarM is a real-world dataset which is much noisier than FHM. HarM also focuses on a relatively new topic (COVID-19), which may not have been observed much by the two pre-trained models.
When comparing BLIP and ALBEF with PromptHate, which has a similar model size, PromptHate with Pro-Cap demonstrates significant advantages over the two models on the three benchmarks, especially on the noisy HarM dataset. We conjecture that a possible reason is that multimodal pre-trained models leverage pre-training data that is relatively cleaner, on a smaller scale, and primarily comprises non-memes. This leads to some difficulties when confronted with noisy real-world memes. In contrast, pure language models are pre-trained on larger and noisier data, which may lead to some intrinsic robustness. If visual signals are reasonably converted to text, pure textual models can be competitive for multimodal tasks such as hateful meme detection.
Reinforcing the point of proper visual signal conversion, the enhanced performance of Pro-CapPromptHate over PromptHate highlights the importance of our probing-based captioning method, which provides essential cues for hateful content detection. With probe-based captioning, Pro-CapPromptHate is able to conduct deep multimodal reasoning that requires background knowledge (given the good performance on FHM), is robust to noisy real-world meme data (according to the performance on HarM), and generalizes well in meme detection (according to the good performance on all three benchmarks).
**With augmented image tags:** For a fair comparison with recent state-of-the-art models, we consider testing our proposed probe-captioning method with the same set of augmented image tags from baselines. To utilize the augmented image tags, we simply pad these tags at the end of each textual meme representation in a similar manner to [2]. With additional image information such
| **Ans. Length** | **FHM** | **MAMI** | **HarM** |
| --- | --- | --- | --- |
| No Centric | 70.08±1.57 | 72.78±0.63 | 80.11±1.14 |
| Penalty = 1 | 71.94±0.97 | 73.06±0.82 | 82.09±1.21 |
| Penalty = 2 | 72.28±0.90 | 72.91±1.16 | 82.85±1.51 |
| Penalty = 3 | 71.40±1.06 | 72.47±0.74 | 83.25±1.00 |
| Pro-CapPromptHate | **72.28±0.90** | **73.06±0.82** | **83.25±1.00** |

Table 6: Ablation study about the impact of the length of VQA answers.
| **Model** | **FHM** AUC. | **FHM** Acc. | **MAMI** AUC. | **MAMI** Acc. | **HarM** AUC. | **HarM** Acc. |
| --- | --- | --- | --- | --- | --- | --- |
| Text BERT | 66.10±0.55 | 57.12±0.49 | 74.48±0.60 | 67.37±0.57 | 81.39±0.91 | 75.68±1.59 |
| Image-Region | 56.69±1.05 | 52.34±1.39 | 70.20±0.63 | 64.18±0.81 | 76.46±0.47 | 73.05±1.80 |
| VisualBERT COCO | 68.71±1.02 | 61.48±1.19 | 78.71±0.59 | 71.06±0.94 | 80.46±1.04 | 75.31±1.44 |
| ViLBERT CC | 73.05±0.62 | 64.70±1.12 | 77.71±1.20 | 69.48±1.00 | 84.11±0.88 | 78.70±1.17 |
| MMBT-Region | 72.86±0.64 | 65.06±1.76 | 79.17±0.91 | 70.46±0.76 | 85.48±0.75 | 79.83±2.00 |
| CLIP-BERT | 66.97±0.34 | 58.28±0.63 | 77.66±0.64 | 68.44±1.07 | 82.63±3.83 | 80.48±1.95 |
| DisMultiHate | 69.11±0.84 | 62.42±0.72 | 78.21±0.61 | 70.58±1.13 | 83.69±1.33 | 78.05±0.73 |
| PromptHate | 76.76±0.95 | 67.82±1.23 | 76.21±1.05 | 68.08±0.58 | 87.51±0.74 | 79.38±1.72 |
| BLIP | 76.80±2.37 | 69.20±1.84 | 80.59±0.87 | 71.84±1.11 | 87.09±1.46 | 81.81±1.74 |
| ALBEF | 79.40±0.53 | 70.58±0.50 | 83.24±0.93 | 72.77±1.00 | 85.49±1.23 | 80.99±0.80 |
| Pro-CapBERT | 77.50±0.58 | 68.14±0.64 | 79.62±0.91 | 71.06±0.88 | 89.04±1.00 | 82.06±1.92 |
| Pro-CapPromptHate | **80.87±0.66** | 72.28±0.90 | 82.53±0.49 | **73.06±0.82** | **90.25±0.54** | **83.25±1.00** |

Table 4: Model comparison without any augmented image tags.
| **Model** | **FHM** AUC. | **FHM** Acc. | **MAMI** AUC. | **MAMI** Acc. | **HarM** AUC. | **HarM** Acc. |
| --- | --- | --- | --- | --- | --- | --- |
| VisualBERT COCO | 72.56±0.80 | 64.28±1.27 | 80.84±0.67 | 72.86±0.71 | 82.96±0.98 | 78.81±0.80 |
| ViLBERT CC | 75.72±0.91 | 68.24±0.44 | 80.33±1.01 | 71.75±1.14 | 84.79±1.23 | 81.39±1.62 |
| MOMENTA | 69.17±1.74 | 61.34±1.89 | 81.68±2.80 | 72.10±2.90 | 86.32±3.83 | 80.48±1.95 |
| DisMultiHate | 79.89±1.71 | 71.26±1.66 | 80.08±0.55 | 71.87±0.47 | 86.39±1.17 | 81.24±1.04 |
| PromptHate | 81.45±0.74 | 72.98±1.09 | 79.95±0.66 | 70.31±0.64 | 90.96±0.62 | 84.47±1… |

Table 5: Model comparison with augmented image tags.
as entities and demographic information, most models show some improvement. An interesting observation is that neither BLIP nor ALBEF benefits much from additional image tags. This is because the additional tags are usually single words or short phrases, which may be noisy or redundant, while BLIP and ALBEF may be less capable of dealing with noisy inputs. Similar to the results in Table 4, when augmenting image information: 1) the simple Pro-CapBERT still clearly surpasses multimodal pre-trained BERT-based models such as VisualBERT or ViLBERT; 2) Pro-CapBERT performs better than models of similar sizes that are specifically designed for hateful meme detection (i.e., MOMENTA or DisMultiHate) in most cases; 3) Pro-CapBERT achieves results comparable to more powerful multimodal pre-trained models, which are about three times larger, and surpasses them on the HarM dataset, which is real-world and noisy; 4) Pro-CapPromptHate surpasses the original PromptHate and achieves the best performance on the three benchmarks as well. Interestingly, Pro-CapPromptHate without any augmented tags achieves performance comparable to the original PromptHate with augmented image information on FHM and HarM, and even surpasses it on MAMI. However, extracting the additional image information is expensive and laborious, and according to the experimental results it can be replaced by probing-based captioning. The equally good performance on the three benchmarks highlights the stability and generalization of our proposed approach.
### Ablation Study
In this section, we conduct ablation studies to better understand our Pro-Cap method. Specifically, we consider the impact of asking different questions and the impact of the length of answers to the probing questions. To eliminate other factors, we consider Pro-CapPromptHate without any augmented image tags. For brevity, we only show accuracy in this section. We present the full results in Appendix B.
**The impact of asking hateful-content centric questions:** We first conduct an ablation study on the effect of prompting PVLMs with questions facilitating hateful meme detection. According to Table 2, the first question asks about the image content while all questions in the second block are for common vulnerable targets of hateful content. To better understand the impact of including image captions generated by these target-specific questions, we experiment with a setting where captions from the target-specific questions are removed and only the generic caption about image content is used. The results are shown in the first block of Table 6. Compared with the last block of the table, we observe that with captions generated by target-specific probing questions, the model's performance improves on all three datasets, by over 2% on FHM and over 3% on HarM. However, we notice only minor improvement on MAMI. We believe that this is because MAMI memes are all related to women, and generic captions about meme images may already cover the gender of persons in the image. However, the other two datasets involve memes with more complexities and
therefore asking a wide range of target-specific probing questions is more helpful. This also implies that probing-based captioning would be helpful in real-world hateful meme detection.
**The length of answers to probing questions:** We apply BLIP-2 as a zero-shot VQA model. Different from existing VQA benchmarks (Krizhevsky et al., 2017; Krizhevsky et al., 2017), where answers are often single words or short phrases, we may want the answers used as image captions to be longer and thus more informative. Given this, we experiment with answers of different lengths. To conduct the analysis, we set the length penalty in BLIP-2's text decoder for answer generation to different values (i.e., 1, 2, and 3). With an increased length penalty, longer answers are encouraged. We show model performance with different answer lengths in Table 6. The results show that detection performance is robust and does not vary much with different answer lengths. This indicates the stability of the Pro-Cap method. On the other hand, to a very small extent, different datasets do favor answers of different lengths. For instance, the HarM dataset prefers longer answers while the MAMI dataset prefers shorter answers.
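As an illustration, longer answers can be encouraged at decoding time by raising the length penalty. The sketch below reuses the `processor` and `model` from the BLIP-2 example in Section 3 and assumes beam search, since the length penalty only takes effect with beam-based decoding; the helper name and beam count are our choices.

```python
def zero_shot_vqa_longer(image, question, length_penalty=2.0):
    """Variant of zero_shot_vqa with beam search and a length penalty
    (penalty values 1-3 in the ablation encourage longer answers)."""
    prompt = f"Question: {question} Answer:"
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40,
                                num_beams=5, length_penalty=length_penalty)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
```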
### Case Study
In this section, we conduct case studies to better understand the strengths and limitations of our proposed method. We first compare Pro-CapPromptHate against PromptHate with image captions and show examples in Table 7. From the three examples, we observe that in most cases, generic captions about the image content do not provide the key information for hateful meme detection, while asking questions about common vulnerable targets helps. For instance, in the first example, the answer from asking questions about race, country and religion all provide some key words such as _islamic_ or _muslim_; in the second example, answers to questions about country and religion are important image captions and the answer to the race-related question is the most important for hateful meme detection. In contrast, we observe that the basic captions in the original PromptHate miss these crucial facts about the meme images.
Next, we conduct an error analysis of our proposed probe-captioning in Table 8. In the first example, all probe-captions provide sufficient image captions for hateful meme detection, yet the model still fails at prediction. This may be because current language models perform poorly at the more complex reasoning required. We also note that the small scale of hateful meme datasets may be inadequate for training a model to perform complex reasoning. Recent studies about large language models pre-trained with trillions of words (Wang et al., 2018) may facilitate hateful meme detection to some extent. Besides, we observe minor errors in predicted answers from the zero-shot VQA model (e.g., the wrong prediction of "a woman kissing a man" when asking about gender). This highlights that, with the development of better zero-shot VQA models, our strategy could potentially be even more beneficial for the two text-based hateful meme detection models. The second example highlights a limitation of most hateful content detection models: they may be biased. During the training stage, there may be hateful content towards Muslims, so that once models see Muslims they tend to predict the meme as hateful. To alleviate the issue, debiasing techniques may be needed. Due to space limitations, we omit visualization examples in the main pages and refer the reader to examples in Appendix C.
## 6. Conclusion
In this study, we attempt to leverage pre-trained vision-language models (PVLMs) in a low-computation-cost manner to aid the task of hateful meme detection. Specifically, without any fine-tuning of PVLMs, we probe them in a zero-shot VQA manner to generate hateful content-centric image captions. With the distilled knowledge from large PVLMs, we observe that a simple language model, BERT, can surpass all multimodal pre-trained BERT models of a similar scale. PromptHate with probe-captioning outperforms previous results significantly and achieves the new state-of-the-art on three benchmarks.
**Limitations:** We would like to point out a few limitations of the proposed method, suggesting potential future directions. Firstly, we heuristically use answers to all probing questions as Pro-Cap, even though some questions may be irrelevant to the meme target. We report the performance of PromptHate with the answer from one probing question in Appendix D, highlighting that using all questions may not be the optimal solution. A future direction could involve training a model to dynamically select probing questions that are most relevant for meme detection. Secondly, although we demonstrate the effectiveness of Pro-Cap through performance and a case study in this paper, more thorough analysis is needed. For instance, in the future, we could use a gradient-based interpretation approach (Wang et al., 2018) to examine how different probing questions influence the final results, thereby enhancing the interpretation of the models.
\begin{table}
\begin{tabular}{|p{42.7pt}|p{113.8pt}|p{113.8pt}|} \hline
**Meme** & **Example 1** & **Example 2** \\ \hline
**GT** & Hateful (gender) & Non-hateful \\ \hline
**Pred** & Non-hateful & Hateful \\ \hline
**Meme text** & scientist are working hard to cure them all & islam is a religion of peace stop criticizing my religion \\ \hline
**Pro-Cap** & (Content:) two women in wedding dresses kissing each other. (Race:) a white woman kissing a brunette woman in a wedding dress. (Gender:) a woman is kissing a man in a wedding dress. (Country:) the person in the image comes from a country in the philipines. (Religion:) the person in the image is a christian. & (Content:) a man with a beard... \\ \hline
\end{tabular}
\end{table}
2307.00251 | Local Eviction Moratoria and the Spread of COVID-19 | At various stages during the initial onset of the COVID-19 pandemic, various
US states and local municipalities enacted eviction moratoria. One of the main
aims of these moratoria was to slow the spread of COVID-19 infections. We
deploy a semiparametric difference-in-differences approach with an event study
specification to test whether the lifting of these local moratoria led to an
increase in COVID-19 cases and deaths. Our main findings, across a range of
specifications, are inconclusive regarding the impact of the moratoria -
especially after accounting for the number of actual evictions and conducting
the analysis at the county level. We argue that recently developed augmented
synthetic control (ASCM) methods are more appropriate in this setting. Our ASCM
results also suggest that the lifting of eviction moratoria had little to no
impact on COVID-19 cases and deaths. Thus, it seems that eviction moratoria had
little to no robust effect on reducing the spread of COVID-19 throwing into
question its use as a non-pharmaceutical intervention. | Julia Hatamyar, Christopher F. Parmeter | 2023-07-01T07:03:19Z | http://arxiv.org/abs/2307.00251v1 | # Local Eviction Moratoria and the Spread of COVID-19
###### Abstract.
At various stages during the initial onset of the COVID-19 pandemic, various US states and local municipalities enacted eviction moratoria. One of the main aims of these moratoria was to slow the spread of COVID-19 infections. We deploy a semiparametric difference-in-differences approach with an event study specification to test whether the lifting of these local moratoria led to an increase in COVID-19 cases and deaths. Our main findings, across a range of specifications, are inconclusive regarding the impact of the moratoria - especially after accounting for the number of actual evictions and conducting the analysis at the county level. We argue that recently developed augmented synthetic control (ASCM) methods are more appropriate in this setting. Our ASCM results also suggest that the lifting of eviction moratoria had little to no impact on COVID-19 cases and deaths. Thus, it seems that eviction moratoria had little to no robust effect on reducing the spread of COVID-19 throwing into question its use as a non-pharmaceutical intervention.
Julia Hatamyar, Centre for Health Economics, University of York. Christopher F. Parmeter, Department of Economics, University of Miami, Coral Gables, FL 33146; Corresponding Author e-mail: [email protected] All R and Stata code used in this paper is available upon request.
We thank participants at the University of York Applied Microeconomics Cluster Seminar and the University of Miami for their invaluable feedback. The usual disclaimer applies.
\({}^{1}\)[https://www.vox.com/21569601/eviction-moratorium-cdc-covid-19-congress-rental-assistance-rent-crisis](https://www.vox.com/21569601/eviction-moratorium-cdc-covid-19-congress-rental-assistance-rent-crisis).
On the surface, eviction moratoria seem a prudent policy measure. However, given a raft of other COVID-19 policies that were already in place across most US states, the efficacy of such a policy with respect to preventing the spread of COVID-19 is not obvious.2 This suggests that identification of such an impact is likely to prove difficult. This is succinctly characterized by Goodman-Bacon & Marcus (2020, pg. 154): "Good control groups will have to match treatment groups on many dimensions. Smart research designs will try to focus on situations where treatment and control groups differ only by the introduction of a single COVID policy (or, at least, only few policies)."
Footnote 2: In addition to slowing/mitigating the spread of COVID-19 due to evictions, the moratorium kept tenants in their homes at a time when unemployment was high due to economy-wide impacts from the pandemic.
To date the findings in the literature on the ability of eviction moratoria to slow the spread of COVID-19 are mixed (as presaged by Goodman-Bacon & Marcus 2020). The first attempt to study the impact of eviction moratoria on the spread of COVID-19 is Leifheit et al. (2021), who use data from the 44 states that ever instituted an eviction moratorium over the period March 13 to September 3, 2020. Leifheit et al. (2021) deploy a difference-in-difference (DiD) approach with a two-way fixed effects event-study specification and find that both COVID-19 incidence and mortality increased steadily in states **after** the moratoria expired. They find that a spike in deaths due to evictions occurring after expiration of the moratoria _preceded_ a spike in cases, which occurred almost 10 weeks later. In related work, Nande et al. (2021) use a simulated model of viral transmission and predict that evictions increase COVID-19 infection risk. They then apply their simulated model to Philadelphia using locally-specific parameters, and conclude that eviction moratoria are an effective and important policy measure.
Using a panel of individuals who were diagnosed with COVID-19 and a Cox DiD regression, Sandoval-Olascoaga et al. (2021) find an increased likelihood of a COVID diagnosis after state-level moratoria were lifted. Jowers et al. (2021) study the impact of "housing precarity policies" at the county level, which include both eviction and utility disconnection moratoria, on added COVID-19 cases and deaths, using a traditional panel fixed effects regression. Although the authors find that eviction moratoria reduce infections and deaths by a significant amount, their econometric model raises causal identification concerns - and does not control for any other local policies in place. In contrast to the above studies, Pan et al. (2020) examine a variety of non-pharmaceutical interventions (including eviction moratoria)
using a negative binomial specification, and do _not_ find any statistically significant impact of eviction policies on COVID-19 spread.3
Footnote 3: The authors find that only shelter-in-place, stay at home measures, mask mandates, and travel restrictions achieved a significant effect.
Our work here critically examines the impact of local eviction moratoria on COVID-19 incidence and mortality. Although the work of Leifheit et al. (2021) and Sandoval-Olascoaga et al. (2021) are crucially important for understanding the potential causal effects of the state level eviction moratoria on limiting the spread of the COVID-19 virus, we nonetheless demonstrate that their results are not robust when replicated using alternative econometric techniques. This paper also differs from previous work in that we include actual eviction numbers as a control, perform analysis at the county level, and focus mainly on large metropolitan centers (where population density is increased).
We preview our results here. First, we construct a dataset mimicking that of Leifheit et al. (2021). We also buttress this exercise with several other extensions which we believe lend credence to the estimation of a causal effect, and fail to find that expiring eviction moratoria had quantitatively meaningful impacts on either cases or deaths.4 Next, we construct a new dataset at the county level, for a variety of metropolitan areas. We use Princeton Eviction Lab (Hepburn, Louis & Desmond, 2020) data on the actual number of evictions in each of these counties by week, which allows us to control for this important confounding variable. Lastly, we repeat the analysis using three different estimators (each of which has merits beyond the simple two-way fixed effects DiD approach), and again fail to find significant evidence that expiring moratoria had any causal impact on either cases of, or deaths from, COVID-19.
Footnote 4: Replication details and results can be found in the appendix.
One reason that we believe the main finding of Leifheit et al. (2021) dissipates is that the timing differences of expiring eviction moratoria suggest that an alternative weighting scheme be used (Goodman-Bacon & Marcus, 2020, Sun & Abraham, 2021, De Chaisemartin & d'Haultfoeuille, 2020, Borusyak et al., 2021, Baker et al., 2022). This scheme weights the treatment effects based on cohorts of time since the expiration of the moratoria, which has meaningful consequences not only for the estimates, but also for the standard errors.5 When using more recent statistical models to account for this requirement, the Leifheit et al. (2021) analysis fails at the state level. However, even if the results did hold, the county
level is arguably the more relevant geographic area of analysis due to significant differences between state and county-level policy implementation (for example, Austin's local moratorium in contrast to the lack of a binding Texas order). Finally, although Leifheit et al. (2021) do control for various policies and population size in their specifications, they do not control for political or eviction-related potential confounders. These variables are likely to impact both the implementation of eviction laws and the number of COVID-19 cases and deaths.
Lastly, even with the cohort specific weighting, we argue that the most appropriate method to study the potential causal impact of eviction moratoria on the transmission of COVID-19 is augmented synthetic control (ASC) with staggered adoption (Ben-Michael, Feller & Rothstein 2022). This method constructs synthetic control observations that can be compared to the treated group while accounting for the staggered adoption that is prevalent in many event study applications. It is an ideal tool since even taking out county-specific averages, as done in a DiD, is unlikely to be credible given the substantial heterogeneity that is likely to be present in differences between counties, both in trends and in levels. As Imbens (2022, pg. 2561) notes "The basic synthetic control method...has in a short time found many applications in a wide range of fields, including...the effects of country- or state-level COVID-19 policies." Again, using ASC with staggered adoption, our findings remain consistent. Once the moratoria expires, there is no statistically significant effect on COVID-19 cases or deaths.
Overall, our main finding is that while eviction moratoria certainly helped to keep people in their homes during a time of significant economic upheaval, the moratoria themselves had no statistically significant effect on COVID transmission. The fact that our findings differ from most previous work is likely due to the inability of studies at the state level to pinpoint specific transmission patterns that are likely to vary at a local scale, other policy devices already in place prior to any moratoria expiring, individuals being aware of the transmission of COVID and taking necessary steps to avoid infection, and eviction moratoria not being truly complete bans on evictions. All of these issues combined make it plausible that an eviction moratorium, as a policy instrument for public health, is rather imperfect.6 Targeted policies such as mask wearing, social distancing and stay-at-home orders are likely to be much more effective, as shown in Pan et al. (2020).
## 2. Background
Understanding the economic, social, and health impacts of COVID-19, as well as the effects of various policies implemented to address the pandemic, is a crucial topic of research across multiple disciplines. However, a large scale multidisciplinary review of 102 articles attempting to estimate the impact of various COVID-19 policies on COVID-19 outcomes found that only _one_ of them met criteria and design checks for estimating causal impacts (Haber, Clarke-Deedler, Feller, Smith, Salomon, MacCormack-Gelles, Stone, Bolster-Foucault, Daw, Hatfield et al., 2021). We therefore outline relevant background on the policy studied in this paper, eviction moratoria, to highlight the importance of carefully considering the methodological framework used for causal inference.
### **Eviction Moratoria and the Pandemic.**
In the United States, one way that both federal and some state governments interceded to combat the spread of COVID-19 was by placing a moratorium on evictions. The justification for these moratoria was that evictions could lead to shelter overcrowding and homelessness as those forced to leave their homes searched for alternative housing. Thus, preventing landlords from evicting tenants would allow for better self-isolation, potentially limiting community spread. According to a CDC spokesperson, "it's hard to follow social distancing orders if you have to double-up at a friend's or family member's house, and it's impossible if you're homeless and are forced to turn to shelters7 as a last resort."8 Figure 1 depicts the total number of per-county COVID-19 cases by population for our main sample, given the county's current weekly moratorium status. In total, there appears to be a much higher number of COVID-19 cases in counties without a current moratorium; however, this does not control for the crucially important presence of other COVID-19 mitigating policies.
Footnote 7: Limited evidence indicates a wide degree of heterogeneity in the incidence of COVID-19 infections in homeless shelters during the initial weeks of the pandemic (Mosites, Parker, Clarke, Gaeta, Baggett, Imbert, Sankaran, Scarborough, Huster, Hanson et al., 2020).
Footnote 8: [https://www.vox.com/21569601/eviction-moratorium-cdc-covid-19-congress-rental-assistance-rent-crisis](https://www.vox.com/21569601/eviction-moratorium-cdc-covid-19-congress-rental-assistance-rent-crisis)
At the federal level, the CDC eviction moratorium went into effect on September 4th, 2020. Until January 1, 2021, landlords were no longer able to "force tenants out of their homes due to a failure to pay rent, as long as the tenants **legally declare** they qualify for protection9 under the order." Landlords could still evict tenants for other reasons - like
"engaging in criminal activity" or "threatening the health and safety of other residents." These requirements for obtaining protection under the national moratorium may explain why a substantial number of evictions still occurred even after September 4th. Alternatively, certain states or counties may have simply decided not to enforce the CDC ruling. Figure 2 shows the average number of eviction filings in the Eviction Lab database by week in 2020 - with no obvious effect of the September 4th ruling (depicted by the vertical line) for those counties in our sample.
### **Eviction Law in the United States.**
In addition to heterogeneity in COVID response policies across states, there exists substantial heterogeneity in (pre-pandemic) state eviction statutes.10 In most states, landlords must present tenants with a written complaint (notice of intended eviction) for non-payment of rent a few days to a few weeks prior to the intended eviction date. Most, but not all, states then require court orders or judicial rulings in order for the physical eviction to proceed. If a tenant has the right to appeal the eviction, there is large variation across states in terms of the minimum number of days in which a trial can be scheduled after the tenant receives written notice. This means, in some states, landlords could have started eviction processes so that once moratoria were lifted tenants could be removed expeditiously, and these removal processes differ according to underlying statutes.
Footnote 10: "Eviction Laws" Policy Surveillance Program of the LawAtlas Project
In the context of COVID-related eviction moratoria, it is especially important to control for whether a state's laws require a landlord to waive the right to evict a tenant after accepting partial repayment of rent. Since part of the tenant's "best efforts" under the
Figure 2. Average Eviction Filings by Week
national moratorium requires partial payment of rent if possible, states in which this prevents an eviction from going forward will have lower eviction rates (and potentially lower infection rates) as a result of the pre-existing eviction laws, not the COVID-related eviction policies. In addition, areas which had a moratorium on both eviction filings _and_ hearings saw more of a surge in evictions following expiration of local moratoria (Cowin, Martin & Stevens, 2020). These examples of substantial variation make clear that any potential treatment and control groups for COVID-related policy are likely not comparable in terms of their underlying eviction policies - therefore we rely on ASC in our preferred analyses (Kreif, Grieve, Hangartner, Turner, Nikolova & Sutton, 2016). We also conduct analysis on a subset of cities for which data on underlying eviction legislation is available in Section 6.
## 3. Data
Our sample contains 59 counties from the 30 US cities which enacted eviction moratoria and for which eviction data for 2020 is available from Princeton Eviction Lab. The sample period begins April 20, 2020 and ends December 31, 2020.11 We extend the sample period to the end of 2020; even though the CDC eviction moratorium went into effect on September 4th, COVID-19 has a lag of 2-3 weeks, so we require data that goes past September to be able to properly extract cohort effects. We also do not have evidence that the nationwide moratorium made any difference at the local level on the actual number of evictions (see Figure 2). Since the Eviction Lab eviction data is at the city and/or county level, eviction moratorium information was also collected manually for each local municipality from this website. This is important to capture the true effect of moratorium endings, as there may be localities with orders that differ from their state's. For example, Texas's eviction moratorium ended on May 18, 2020, but the city of Austin, Texas, had an eviction moratorium in place through December 31st, 2020. More concerning, some states may have had no state-level moratorium in place, yet certain metropolitan areas _within_ those states enacted their own orders. It is therefore crucial to collect detailed information about local municipality orders and not rely exclusively on state-level moratorium information. Since eviction data is at either the census-tract or the ZIP code level, all eviction counts were aggregated to the county level (using HUD USPS crosswalk information from Q1 2020). Figure 3 depicts the number of counties in which moratoria were lifted during each week of the sample period,
and demonstrates no obvious pattern or grouping of the timing of moratoria endings across observations or with respect to the national CDC moratorium on September 4th, 2020.
COVID-19 case and death information was taken from the New York Times database, which is provided at the county level in the covid19R package available in the R statistical programming environment. A few negative values for new cases and deaths, resulting from measurement error in the data, were interpolated using a cubic spline. Demographic variables at the county level were taken from the 2018 American Community Survey, and
Figure 3. Total Counties with Moratoria Lifted per Week
include racial and ethnic demographics,12 educational attainment, average renting rates, and poverty and inequality indices. We also use Census estimates for population density in each county. OxCGRT provides a database of various COVID-19 policies at the state level, including start and end dates for mask mandates, stay-at-home orders, school closings, and an overall policy Stringency Index.13 County-level policy information was taken from the HHS.14 Information on political party vote share was taken from the MIT Election Lab (MIT 2018), and the Yale Climate Communication study (Howe, Mildenberger, Marlon & Leiserowitz 2015) provides county-level survey data on belief in climate change, which we use as a proxy for trust in science. Finally, we merge selected details on eviction laws from the "Eviction Laws" Policy Surveillance Program of the LawAtlas Project to account for differences across eviction proceedings.
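For concreteness, the two mechanical data-construction steps described above (ZIP-to-county aggregation of eviction filings via the HUD USPS crosswalk, and cubic-spline interpolation of the few negative case and death increments) can be sketched as follows. The file and column names, and the use of the residential address ratio as the aggregation weight, are assumptions made for illustration rather than our exact code.

```python
# Illustrative sketch of the data-cleaning steps; file/column names are assumptions.
import numpy as np
import pandas as pd
from scipy.interpolate import CubicSpline

# 1) Aggregate ZIP-level eviction filings to counties with the HUD USPS crosswalk,
#    splitting each ZIP's filings across counties by its residential address ratio.
crosswalk = pd.read_csv("hud_usps_zip_county_2020q1.csv",
                        dtype={"zip": str, "county_fips": str})
evictions = pd.read_csv("eviction_lab_weekly_filings.csv", dtype={"zip": str})
merged = evictions.merge(crosswalk[["zip", "county_fips", "res_ratio"]], on="zip")
merged["filings_alloc"] = merged["filings"] * merged["res_ratio"]
county_weekly = (merged.groupby(["county_fips", "week"], as_index=False)
                       ["filings_alloc"].sum()
                       .rename(columns={"filings_alloc": "filings"}))

# 2) Replace negative new-case/new-death counts (reporting corrections) with
#    cubic-spline interpolation through the non-negative observations.
def interpolate_negatives(series: pd.Series) -> pd.Series:
    values = series.to_numpy(dtype=float)
    ok = values >= 0
    spline = CubicSpline(np.flatnonzero(ok), values[ok])
    values[~ok] = spline(np.flatnonzero(~ok))
    return pd.Series(values, index=series.index)

# Applied within each county time series, e.g.:
# covid["new_cases"] = covid.groupby("county_fips")["new_cases"].transform(interpolate_negatives)
```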
Footnote 12: Which are known to be correlated with COVID-19 infection rates (Millett, Jones, Benkeser, Baral, Mercer, Beyrer, Honermann, Lankiewicz, Mena, Crowley et al. 2020, Mahajan & Larkins-Pettigrew 2020), and are not controlled for by Leifheit et al. (2021).
Footnote 13: [https://raw.githubusercontent.com/OxCGRT/USA-covid-policy/master/data/OxCGRT_US_latest.csv](https://raw.githubusercontent.com/OxCGRT/USA-covid-policy/master/data/OxCGRT_US_latest.csv)
Table 1 presents summary statistics for selected variables; the high degree of variation in number of weekly eviction filings is of note. There is a strong negative correlation (-0.370) between local moratorium length and the number of eviction filings per county. We also note a weak negative correlation between the number of eviction filings and the strength of various other COVID-19 mitigating policies as captured in the local Stringency Index variable. The lack of correlation between moratorium length and political affiliation or stringency index is also of note. Also, the positive correlation between eviction filings and new COVID-19 cases is consistent with Figure 1 (and the subsequent correlation with deaths).15
Footnote 14: healthdata.gov
Footnote 15: Table A5 in Appendix B contains a full correlation matrix for our policy and political variables.
## 4. Methodology: Difference-in-Differences
This section outlines the main econometric methods for staggered treatment timing settings used in this paper. We also present negative binomial results following Leifheit et al. (2021), who did not account for cohort effects (as discussed earlier).
For each method, our primary estimand of interest is the Average Treatment Effect on the Treated (ATT), \(k\) periods after treatment:
\[ATT_{k}\equiv\frac{1}{J}\sum_{j=1}^{J}Y_{j,T_{j}+k}(T_{j})-Y_{j,T_{j}+k}( \infty). \tag{1}\]
Event time relative to treatment time for unit \(j\), \(T_{j}\), is indexed by \(k=t-T_{j}\). \(Y_{j,T_{j}+k}(T_{j})\) is the potential outcome at time \(T_{j}+k\) under treatment, and \(\mathrm{Y}_{j,T_{j}+k}(\infty)\) is the potential outcome for untreated units. Their difference, \(Y_{j,T_{j}+k}(T_{j})-Y_{j,T_{j}+k}(\infty)\), is the individual (unit-level) treatment effect, which is averaged to obtain the \(ATT\) as in Equation (1).
### Negative Binomial Regression: Leifheit et al. (2021) Analysis
For the state-level analysis, we follow Leifheit et al. (2021) and use population-averaged negative binomial regression with two-way fixed effects (i.e. traditional difference-in-differences with an event study approach):
\[Y_{it}=\alpha+\beta_{1}\mathrm{T}_{it}+\beta_{2}\mathrm{Post}_{t}+\beta_{3}( \mathrm{T}_{it}\times\mathrm{Post}_{t})+\gamma_{i}+\lambda_{t}+\epsilon_{it}, \tag{2}\]
with state-day as the unit of analysis, log of state population included as an offset, first-order autoregressive (AR1) structure, state and week fixed effects \(\gamma_{i}\) and \(\lambda_{t}\), and conventional (non-robust) standard errors. \(\beta_{3}\) at various leads and lags from treatment time is the coefficient of interest for estimating \(ATT_{k}\).
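For readers who prefer code to notation, a rough Python analogue of the Equation (2) specification is sketched below as a GEE fit of a negative binomial mean model with an AR(1) working correlation; the actual estimation was carried out with standard R/Stata routines, so the column names, the lead/lag window, and the fixed dispersion parameter are assumptions.

```python
# Rough analogue of Equation (2): population-averaged negative binomial event study.
# Column names (new_cases, state, week, day_index, population, evt_* dummies) and the
# event window are assumptions; `panel` is the state-by-day data frame.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def event_dummy(k: int) -> str:
    return f"evt_m{-k}" if k < 0 else f"evt_{k}"

def fit_event_study_nb(panel: pd.DataFrame):
    leads_lags = [k for k in range(-4, 13) if k != -1]   # k = -1 is the omitted reference
    formula = ("new_cases ~ "
               + " + ".join(event_dummy(k) for k in leads_lags)
               + " + C(state) + C(week)")                # two-way fixed effects
    model = smf.gee(
        formula,
        groups="state",
        data=panel,
        time=panel["day_index"].to_numpy(),
        family=sm.families.NegativeBinomial(alpha=1.0),  # fixed dispersion (assumption)
        cov_struct=sm.cov_struct.Autoregressive(),       # AR(1) working correlation
        offset=np.log(panel["population"]),              # log population offset
    )
    return model.fit()
```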
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Statistic & Mean & St. Dev. & Min & Max \\ \hline Moratorium Length (weeks) & 22.429 & 10.319 & 8 & 40 \\ Eviction Filings & 69.352 & 146.904 & 0 & 1,726 \\ Population Density & 2,087.582 & 2,541.545 & 99.106 & 13,801.320 \\ GINI Index & 0.479 & 0.035 & 0.383 & 0.562 \\ Percent White & 0.683 & 0.137 & 0.341 & 0.954 \\ Percent Black & 0.189 & 0.132 & 0.012 & 0.592 \\ Percent Latinx & 0.143 & 0.120 & 0.008 & 0.433 \\ Percent College Educated & 0.155 & 0.031 & 0.086 & 0.224 \\ Percent Renting & 0.343 & 0.089 & 0.154 & 0.596 \\ Stringency Index & 52.567 & 12.677 & 8.330 & 79.630 \\
2016 Election Political Difference & 27.583 & 19.067 & 0.210 & 76.960 \\ Percent Belief in Climate Change & 70.437 & 5.640 & 57.360 & 81.238 \\ \hline \end{tabular} This table presents summary statistics of various covariates for our sample of 30 US cities. Data on moratoria length and eviction filings is taken from Princeton Eviction Lab (Hepburn et al., 2020). Population density, GINI index, demographic and education data is taken from the 2018 ACS. COVID-19 stringency index is from the OxCGRT database. Political difference data is from the MIT Election Lab, and climate change belief data is from the Yale Climate Communication study.
\end{table}
Table 1. Summary Statistics
### DR-DiD
For our preferred DiD approach, we use the Double-Robust DiD (DR-DiD) proposed by Callaway and Sant'Anna (2021). This semiparametric estimator corrects for the bias inherent in two-way fixed-effects event study estimates (Goodman-Bacon and Marcus, 2020).
The starting point for estimation in the DR-DiD model is the Group-Time ATT:
\[ATT(g,t)=\mathbb{E}[Y_{t}(g)-Y_{t}(0)|G_{g}=1], \tag{3}\]
i.e., the ATT for units who are members of group \(g\) at time period \(t\). Nonparametric identification is obtained using the Double-Robust estimand of Sant'Anna and Zhao (2020):
\[ATT(g,t;\delta)=\mathbb{E}\left[\left(\frac{G_{g}}{\mathbb{E}[G_{g}]}-\frac{ \frac{p_{g}(X)C}{1-p_{g}(X)}}{\mathbb{E}\left[\frac{p_{g}(X)C}{1-p_{g}(X)} \right]}\right)(Y_{t}-Y_{g-\delta-1}-m(X))\right]\]
where \(G_{g}=1\) if a unit is first treated in period \(g\), \(C=1\) if a unit is not treated in any time period (control), \(p_{g}(X)=P(G_{g}=1|X,G_{g}+C=1)\) is the probability of being first treated in period \(g\) conditional on covariates and either being a member of group \(g\) or never treated, \(m(X)=\mathbb{E}[Y_{t}-Y_{g-\delta-1}|X,C=1]\) is the outcome regression for the never-treated group, and \(g-\delta-1\) is the reference time period.16 This group-time ATT is then aggregated with respect to time-to-event \(e\), using the weight of each cohort share and the associated influence function to obtain valid confidence intervals:
Footnote 16: That is, the most recent time period when untreated potential outcomes are observed for group \(g\).
\[\theta_{es}(e)=\sum_{g\in G}\mathbf{1}\{g+e\leq T\}P(G=g|G+e\leq T)ATT(g,g+e). \tag{4}\]
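To fix ideas, the doubly robust group-time ATT and the event-time aggregation in Equation (4) can be sketched in a few lines of Python. Our estimates come from existing implementations of Callaway and Sant'Anna (2021); the wide-format data layout and column names below (a `group` column holding the first treated week and outcome columns `y_<week>`) are assumptions made purely for illustration.

```python
# Stylized sketch of the doubly robust ATT(g, t) and the Eq. (4) aggregation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_att_gt(df: pd.DataFrame, g: int, t: int, covs: list[str]) -> float:
    """Doubly robust ATT(g, t) in the spirit of Sant'Anna and Zhao (2020).

    `df` is a wide county-level frame with a `group` column (first treated week,
    np.inf for never treated), outcome columns y_<week>, and covariates `covs`.
    """
    sub = df[(df["group"] == g) | np.isinf(df["group"])].copy()
    G = (sub["group"] == g).to_numpy(dtype=float)      # treated-in-cohort-g indicator
    C = np.isinf(sub["group"]).to_numpy(dtype=float)   # never-treated indicator
    X = sub[covs].to_numpy(dtype=float)
    dy = (sub[f"y_{t}"] - sub[f"y_{g - 1}"]).to_numpy(dtype=float)  # long difference

    p = LogisticRegression(max_iter=1000).fit(X, G).predict_proba(X)[:, 1]
    m = LinearRegression().fit(X[C == 1], dy[C == 1]).predict(X)

    w_treat = G / G.mean()
    ipw = p * C / (1.0 - p)
    w_ctrl = ipw / ipw.mean()
    return float(np.mean((w_treat - w_ctrl) * (dy - m)))

def event_study_atts(df: pd.DataFrame, covs: list[str], horizons: range) -> pd.Series:
    """Aggregate ATT(g, g+e) across cohorts with cohort-share weights (Eq. 4)."""
    cohorts = sorted(df.loc[~np.isinf(df["group"]), "group"].unique())
    out = {}
    for e in horizons:
        atts, shares = [], []
        for g in cohorts:
            if f"y_{int(g + e)}" in df.columns:
                atts.append(dr_att_gt(df, int(g), int(g + e), covs))
                shares.append(int((df["group"] == g).sum()))
        if atts:
            weights = np.asarray(shares, dtype=float) / sum(shares)
            out[e] = float(weights @ np.asarray(atts))
    return pd.Series(out, name="ATT(e)")
```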
### Interaction-Weighted DID (IWES)
Drawbacks of the DR-DiD procedure include the inability to include time-varying \(X_{i}\), as all time-varying \(X_{i}\) are held constant at their value in the last pre-treatment period. Further, in specifications with many controls the estimator does not converge due to propensity scores being very near 0 or 1.17 We therefore also perform an event study DiD using the Sun and Abraham (2021) Interaction-Weighted estimator (IWES). This procedure is equivalent to the DR-DiD, except that the group-time ATT is estimated using a traditional two-way fixed effect regression **before** the weighted aggregation is performed.
Footnote 17: This indicates the overlap condition may be violated, and alternatively, ASC may be more appropriate.
Specifically, the Group-Time ATTs \(\beta_{g,e}\) are estimated:
\[Y_{i,t}=\alpha+\sum_{g\in G}\sum_{g+e\neq-1}\beta_{g,e}(\mathbf{1}(E_{i}=e)\cdot G _{i,t}^{g+e})+\lambda_{t}+\epsilon_{it} \tag{5}\]
using linear regression, and then aggregated as in Equation (4).
### Augmented Synthetic Control for Staggered Treatment Adoption
The goal of synthetic control is to use the observed outcomes of \(Y_{jt}\) to construct a weighted average of \(Y_{iT}(\infty)\), which is not observed in our data. More specifically, synthetic control imputes the missing potential outcome as a weighted average of the control outcomes (Abadie, Diamond & Hainmueller, 2010, Abadie, 2021). The weights are chosen as the solution to the constrained optimization problem:
\[\min_{\mathbf{\gamma}\in\Delta}\lvert\lvert\mathbf{V}^{1/2}\left(\mathbf{Y}_{i\cdot}-\tilde {\mathbf{Y}}_{j\cdot}^{\prime}\mathbf{\gamma}\right)\rvert\rvert_{2}^{2}+\upsilon\sum_{W_{i}=0}f( \gamma_{i}).\]
where \(\Delta\) is the appropriately sized simplex. Synthetic control has many deep theoretical underpinnings, but at its core it is quite simple: find a set of weights \(\mathbf{\gamma}\) that can be used to construct a weighted average of the controls to serve as the appropriate counterfactual. In fact this simplicity in intuition is perhaps its greatest strength and one of the reasons for its popularity.
As Abadie et al. (2010) show, when the treated unit's vector of lagged outcomes, \(\mathbf{Y}_{i\cdot}\), lies in the interior of the convex hull of the control group's lagged outcomes \(\tilde{\mathbf{Y}}_{j\cdot}^{\prime}\), the corresponding weights will achieve perfect pre-treatment fit, and the corresponding treatment effect estimator possesses many desirable statistical properties. However, due to potential dimensionality issues, it is not universally feasible to achieve perfect pre-treatment fit. Even with close to perfect fit it is commonly recommended (Abadie, Diamond & Hainmueller, 2015) to run an extensive battery of placebo checks to ensure that \(\mathbf{\gamma}\) does not overfit due to noise. ASC (proposed by Ben-Michael, Feller & Rothstein, 2021) adjusts for poor pre-treatment fit.
Ben-Michael et al. (2022) also extend SCM to the staggered treatment adoption setting. In this version, the original SCM estimator is considered for a single unit \(j\). The SCM weights \(\hat{\gamma_{j}}\) are the solution to:
\[\min_{\mathbf{\gamma}_{j}\in\Delta_{j}^{scm}}\frac{1}{L_{j}}\sum_{\ell=1}^{L_{j}}\left(Y_{j,T_{j}-\ell}-\sum_{i=1}^{N}\gamma_{ij}Y_{i,T_{j}-\ell}\right)^{2}+ \lambda\sum_{i=1}^{N}\gamma_{ij}^{2} \tag{6}\]
where \(\gamma_{j}\in\Delta_{j}^{scm}\) has elements that satisfy \(\gamma_{ij}\geq 0\)\(\forall i\), \(\sum_{i}\gamma_{ij}=1\), and \(\gamma_{ij}=0\) whenever \(i\) is not a possible donor. This modification focuses only on lagged outcomes and penalizes the weights towards uniformity using \(\lambda\).
Given the vector of weights \(\hat{\gamma}_{ij}\) solving equation (6), the estimate of the missing potential outcome for treated unit \(j\) at event time \(k\) is:
\[\hat{Y}_{j,T_{j}+k}(\infty)=\sum_{i=1}^{N}\hat{\gamma}_{ij}Y_{i,T_{j}+k} \tag{7}\]
and the estimated treatment effect is \(\hat{\tau}_{jk}=Y_{j,T_{j}+k}-\hat{Y}_{j,T_{j}+k}(\infty)\), the difference between the observed outcome under treatment for the treated units and the estimated potential outcome for the synthetic control.
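As an illustration of Equations (6)-(7), the weights for a single treated unit can be obtained with an off-the-shelf constrained optimizer. Our estimates instead come from the augsynth package in R, so the simplex-plus-ridge sketch below is an illustrative simplification rather than the exact routine.

```python
# Illustrative solver for the unit-level SCM problem in Eq. (6) and the implied
# synthetic counterfactual in Eq. (7); the paper's estimates use the augsynth R package.
import numpy as np
from scipy.optimize import minimize

def scm_weights(y_pre: np.ndarray, Y_donors_pre: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Ridge-penalized SCM weights on the simplex for one treated unit.

    y_pre:        (L,) pre-treatment outcomes of the treated unit.
    Y_donors_pre: (N, L) pre-treatment outcomes of the donor pool, aligned to the
                  treated unit's treatment date.
    """
    N, L = Y_donors_pre.shape

    def objective(gamma: np.ndarray) -> float:
        gap = y_pre - gamma @ Y_donors_pre          # per-period pre-treatment imbalance
        return float(gap @ gap / L + lam * gamma @ gamma)

    result = minimize(
        objective,
        x0=np.full(N, 1.0 / N),
        bounds=[(0.0, 1.0)] * N,
        constraints=[{"type": "eq", "fun": lambda g: g.sum() - 1.0}],
        method="SLSQP",
    )
    return result.x

def unit_effects(y_post: np.ndarray, Y_donors_post: np.ndarray, gamma: np.ndarray) -> np.ndarray:
    """tau_{jk}: observed post-treatment outcomes minus the synthetic counterfactual."""
    return y_post - gamma @ Y_donors_post
```

In the staggered setting this step is repeated for every treated county, with donor outcomes aligned to that county's treatment date, and the unit-level effect paths are averaged as in Equation (8) below.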
With multiple treated units (i.e. the staggered adoption case), the above setup is generalized to create weights for each treated unit. The estimated treatment effect averages over the unit effect estimates:
\[A\hat{T}T_{k}=\frac{1}{J}\sum_{j=1}^{J}\hat{\tau}_{jk} \tag{8}\]
which can be interpreted as both the average of individual unit SCM estimates, and an estimate for the average treated unit (Ben-Michael et al., 2021). These equivalent interpretations are used to construct goodness-of-fit measures
\[q^{sep}(\hat{\Gamma})\equiv\sqrt{\frac{1}{J}\sum_{j=1}^{J}\frac{1}{L_{j}} \sum_{\ell=1}^{L_{j}}\left(Y_{j,T_{j}-\ell}-\sum_{i=1}^{N}\gamma_{ij}Y_{i,T_{ j}-\ell}\right)^{2}} \tag{9}\]
and
\[q^{pool}(\hat{\Gamma})\equiv\sqrt{\frac{1}{L}\sum_{\ell=1}^{L}\left(\frac{1}{ J}\sum_{T_{j}>\ell}\left(Y_{j,T_{j}-\ell}-\sum_{i=1}^{N}\gamma_{ij}Y_{i,T_{j}-\ell}\right) \right)^{2}}. \tag{10}\]
The final "partially pooled" estimator minimizes a weighted average of these two measures:
\[\nu(\hat{q}^{pool})^{2}+(1-\nu)(\hat{q}^{sep})^{2} \tag{11}\]
where \(\hat{q}\) have been normalized by their values computed with weights \(\hat{\Gamma}\). Ben-Michael et al. (2022) describe a heuristic for choice of \(\nu\) which we adhere to in our analysis.
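Assuming, for simplicity, a common number of pre-treatment periods \(L\) across treated units and ignoring the normalization of the \(\hat{q}\) terms, the fit measures in Equations (9)-(11) reduce to a few lines:

```python
# Pre-treatment fit measures (Eqs. 9-11) from a (J, L) matrix of balance residuals
# resid[j, l] = Y_{j, T_j - l} - sum_i gamma_{ij} Y_{i, T_j - l}. A common L is assumed
# for simplicity, and Eq. (11)'s normalization of the q-hat terms is omitted.
import numpy as np

def q_sep(resid: np.ndarray) -> float:
    """Average unit-by-unit pre-treatment imbalance (Eq. 9)."""
    return float(np.sqrt((resid ** 2).mean()))

def q_pool(resid: np.ndarray) -> float:
    """Imbalance of the average treated unit (Eq. 10)."""
    return float(np.sqrt((resid.mean(axis=0) ** 2).mean()))

def partial_pool_objective(resid: np.ndarray, nu: float) -> float:
    """Weighted combination in Eq. (11): nu = 0 fits each unit separately, nu = 1 pools."""
    return nu * q_pool(resid) ** 2 + (1 - nu) * q_sep(resid) ** 2
```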
## 5. Results
Given various cohort effects, we thought it easier to display our findings visually rather than in standard tabular form. For those interested, all specific point estimates and associated standard errors for both cases and deaths can be found in Appendix B for all of the different estimation approaches deployed here.
While we advocate for using the FIPS level, we first discuss state level results to help compare our findings with those of Leifheit et al. (2021). We use as covariates state-level COVID policies and the natural logarithm of the population.
### State-Level Analysis
Figure 4 presents the cohort effects at the state level using the negative binomial specification of Leifheit et al. (2021) as well as the IWES and doubly robust DiD estimators. For these estimators we include as controls state-level COVID-19 policies (measures of stay-at-home orders, school closures, and mask mandates) and the logarithm of the population. There are several striking and immediate features. After the expiration of a moratorium, cases go up. This is true for all three estimators. However, where they diverge is in the statistical strength of this increase. The Negative Binomial specification of Leifheit et al. (2021) suggests statistically relevant increases in cases after 3 weeks. Neither the IWES nor the DR-DiD estimator finds a statistically significant effect. Further, the estimated effects for the three estimators are quite similar after the lifting of the moratoria with the exception of weeks 11 and 12, where again the Negative Binomial estimator suggests another "spike" in cases. We view the near constancy of the impact on COVID-19 cases after about week 4 as an equilibrium effect of the moratorium being lifted.
If we turn our attention to deaths, panel (b) in Figure 4 paints a much different picture at the state level. Initially, deaths at the state level fluctuate around zero until around week 6, when we start to see a sustained increase. Again, the Negative Binomial specification finds statistically significant increases in deaths attributed to COVID-19 starting at week 6, whereas the IWES and DR-DiD estimators do not find statistically significant effects. The week 6 increase in deaths is intuitive given the roughly two week lag of COVID-19 effects. Thus, finding increases in COVID-19 cases after 3 weeks suggests that around week 5 or 6 an increase in deaths is expected. We also note that while there is more variation in deaths as we move further from the end of the moratorium being lifted, it does appear to be roughly stable, in line with the impact on cases.
So, using the Negative Binomial specification promoted in Leifheit et al. (2021) we see an increase in deaths from COVID-19 at around the same time (though our data run longer than their analysis), but we also find a more intuitive increase in cases. Leifheit et al. (2021) claimed that their lack of finding an increase in cases prior to the spike in deaths was due to, among other things, undercounting of COVID cases (which could be attributed to those recently evicted not getting tested for COVID-19).
### FIPS Level Analysis
As argued earlier, the state level is not the appropriate level to address the impact of eviction moratoria on the spread and mortality of COVID-19 given the discrepancy between local and state ordinances. To that end we migrate from a state level aggregate dataset to a FIPS level analysis. Once we are in this setting we abandon the Negative Binomial specification advocated by Leifheit et al. (2021) and focus our attention exclusively on the IWES and DR-DiD estimators.
Our main specification for both estimators includes the average stringency index (by FIPS), the logarithm of population, the proportion of the population that is black, the proportion of the population that is Hispanic, the proportion of the population that is college educated, and the average number of eviction filings (by FIPS). Figure 5 presents the cohort comparison of the benchmark specification across the IWES and the DR-DiD estimators.
Figure 4. Effect of Moratorium End: State-level Analysis
Several interesting features emerge. First, for cases, while both estimators fail to find statistically relevant effects due to the eviction moratoria expiring, the IWES estimator also finds a near 0 effect, while the DR-DiD estimator has a much larger positive effect which remains throughout the time-frame. We again see that in the first few weeks after the eviction moratoria expire at the FIPS level, there are no noticeable impacts on cases, until around week 4, at which point the DR-DiD estimates experience the intuitive increase in average cases. Perhaps most interesting is that the simple switch from the state level to the FIPS level for the IWES estimates does not produce a similar increase in cases. Again this is additional evidence that buttresses our claim that the state level is the inappropriate focus for these effects.
Turning our attention to deaths attributable to COVID-19, we see an expected pattern; in the first few weeks after the eviction moratoria are lifted at the FIPS level there is no distinguishable pattern in deaths for either the IWES or DR-DiD estimates, and it is not until around week 8 that the DR-DiD estimates start to increase. We also see that even with the DR-DiD estimates increasing starting at week 8, aside from the significant effect at week
Figure 5. Effect of Moratorium End, FIPS-level Benchmark
10, both estimates (IWES and DR-DiD) remain statistically insignificant throughout the time-frame, with the DR-DiD estimates of COVID-19 deaths being economically larger (and positive).
Having compared the estimated impacts of the eviction moratoria expiration at the FIPS level for a common specification to get a sense of the differences in the estimators, we now turn to different model specifications for each estimator.
#### 5.2.1. Double-Robust DiD
Focusing exclusively on the DR-DiD estimator, we consider three alternative specifications. Model 1 controls for the stringency index (held constant at the first pre-treatment period), a binary indicator for ever having a mask mandate or stay-at-home order, and the logarithm of the FIPS population. Model 2 is the same as Model 1 but adds the proportion of the population that is black, the proportion of the population that is Hispanic, and the proportion of the population that is college educated. Model 3 is the benchmark model previously discussed (adding average eviction filings in a FIPS to Model 2).
Figure 6 presents the cohort effects across these three models. Several features are worth highlighting. All three specifications have similar patterns for cases, but with varying widths of confidence intervals around the point estimate, with the narrowest intervals stemming from Model 1. We also see a pronounced 'bump' in cases starting around week 5 for all three specifications, peaking at week 7 and then slowly decaying back towards 0 by the end of the year. The increase in cases is intuitive but the lack of robust statistical findings is a bit concerning. All of the confidence intervals contain 0, speaking to the difficulty that is inherent in trying to discern the impact that the expiration of the eviction moratoria had on total COVID-19 cases.
Turning our attention to deaths attributable to COVID-19, we see much less agreement across the three specifications. First, there is not a noticeable increase in estimated deaths until around week 8, and it is much less pronounced than it was for cases. We also see that for weeks 8 through 12, there is a difference between the estimates from model specifications 2 and 3 and those from model 1 in terms of the magnitude of the number of deaths. Only in week 10 do we observe an estimated effect that is statistically different from 0, consistent with the totality of our findings so far.
#### 5.2.2. Interaction-Weighted DID
As the DR-DiD does not allow controlling for time-varying covariates, and often fails to converge when including the full suite of controls,18 we now
turn to the Interaction-Weighted DID (IWES). We report our findings in Figure 7, while the estimates and associated standard errors for cases and deaths can be found in Table A8 in Appendix B.
As with our analysis using the DR-DiD estimator, we consider three different specifications. We note that the specifications here are slightly different than those analyzed with the DR-DiD estimator since the estimators make slightly different assumptions on the nature of time variation in the controls. Here we consider a baseline model (Model 1) that includes as controls the stringency index (time-varying), county-level stay-at-home orders, county-level mask mandates, and the logarithm of the population in the FIPS. Model 2 includes all the controls from Model 1, but also includes the proportion of the population that is black, the proportion of the population that is Hispanic, and the proportion of the population that is college educated. Finally, Model 3 adds to Model 2 eviction filings (time varying) and the political point difference in the 2016 election.
The results are consistent with our earlier DR-DiD estimates; there is no significant increase in COVID-19 cases (Panel A) following the lifting of eviction moratoria. Model 1 has the highest estimated impacts, which seem to occur immediately after the moratoria expire,
Figure 6. Effect of Moratorium End, FIPS-level
but with wide confidence intervals containing 0. Models 2 and 3 display the same behavior, but with smaller estimated effects than Model 1 along with confidence intervals that contain 0. Interestingly, for all three model specifications we see that the estimated cohort effect drops at week 7. Overall the IWES estimator suggests that the expiration of the moratoria changed little regarding COVID-19 cases.
Panel B of Figure 7 presents the results for deaths. Again, the results are nearly identical to the setup with cases. We see that the estimates from Model 1 are higher in magnitude than for Models 2 and 3, as to be expected, but the confidence intervals contain 0 (outside of weeks 1-3 for Model 1), throwing some doubt as to the true effect of these moratoria. We can also see higher estimated effects for the cohorts immediately after the moratoria are lifted, which is at odds with the behavior of the disease. If the eviction moratoria were implemented with the express intent of mitigating the spread of COVID, and the disease has a one to two week lag time of transmission along with another one to two week lag time for severe symptoms to lead to death, then we would not anticipate such a large estimated effect for deaths so early on. This result also differs from the state level findings in Leifheit et al. (2021).
Figure 7. Effect of Moratorium End, FIPS-level: Sun & Abraham IWES
Thus, across both the IWES and DR-DiD estimators for a variety of model specifications, we see increases in both cases of COVID and mortality from COVID, but with wide confidence intervals and time varying behavior that is not consistent with the behavior of the virus. This suggests that these eviction expiration effects are difficult to identify in practice, consistent with the concerns raised in Goodman-Bacon and Marcus (2020).
### Robustness Checks
Beyond our main DR-DiD and IWES specifications, we also consider if various forms of confounding and a different identification approach can more accurately reveal the impact of the expiration of eviction moratoria. To that end we consider a subset of our main dataset that incorporates observable features of eviction law at the FIPS/state level and the cohort effects using ASC with staggered adoption.
#### 5.3.1. Augmented Synthetic Control with Staggered Adoption
We now present the results for ASC with staggered adoption. Note that the current implementation in R for the augsynth package does not yet allow for matching on auxiliary covariates, so we report results without any matching. Figure 8 depicts the results for total COVID cases and deaths attributable to COVID.19 Panel A depicts average effects and demonstrates a very slight and temporarily significant increase in the incidence rate of cases six weeks after the moratoria lifted, and again in weeks 10-12 after lifting.
Footnote 19: Figure A1 in Appendix B depicts the pre-treatment balance and individual treatment effects for cases and deaths. Point estimates, standard errors, and confidence bounds are presented in Table A9.
Looking at Panel B, we again see no statistically meaningful effect of the moratoria expiration on deaths from COVID. We do see an increasing trend over time as we move further away from the moratoria ending, with a strange dip occurring at three weeks post expiration. The overall set of cohort effects is consistent with our earlier story that while there are estimated positive effects on mortality from COVID, said effects are difficult to precisely pin down. Our assumption is that if we were to also match on covariates, this would serve to introduce further noise, as the quality of the matches is likely to be poor as the number of covariates to match on increases.
#### 5.3.2. Eviction Law Subset
As there is a great deal of heterogeneity across both states and individual counties in terms of landlord and tenant protection statutes (see Section 2), we repeat the DR-DiD analysis here using a subset of data for which we have detailed information on existing tenancy laws. Arguably, it may be the case that areas with more tenant protection statutes in existence _prior_ to the COVID-19 pandemic may have been more likely to implement stricter eviction moratoria during the pandemic - introducing confounding. Including eviction law information reduces the size of our data to 17 cities and 36 counties. However, we are now able to control for whether a landlord waives the right to eviction if rent is partially repaid, the minimum number of days the landlord must provide before ending a tenancy due to non-payment of rent, and the minimum number of days between when a landlord gives notice of tenancy termination and when the eviction may
Figure 8. Effect of Moratorium End on New Cases and Deaths: Ben-Michael Augmented Synthetic Control
take place.20 Once again, we find no evidence that the lifting of eviction moratoria increased the number of cases or deaths.
Footnote 20: If there is time between notice and repossession, tenants may be able to file appeals or otherwise negotiate in order to avoid the eviction.
## 6. Conclusions
This paper set about critically examining the impact of local eviction moratoria on the spread of COVID-19. While several earlier studies documented increases in deaths attributable to COVID-19 following the expiration of these moratoria, we found minimal effects when deploying both newer econometric methods and what we believe to be more sensible data specification choices. Specifically, accounting for the differential timing of eviction
Figure 9. Effect of Moratorium End on New Cases and Deaths: Eviction Law Subset
moratoria across 44 states, switching from state level to county level data, controlling for the number of evictions, and using cohort specific weighting for our time to treatment effects, we found that eviction moratoria likely did _not_ mitigate the spread of COVID-19. This finding was consistent across a range of specifications and estimation approaches. In fact, the only setting where we found an effect of the moratoria was when our data were aggregated up to the state level.
However, as we stated earlier, the state is precisely the wrong level to focus attention on as different local municipalities had different rules in place for eviction filings and such aggregation washes away local variation in COVID-19 cases and deaths. Further, ignoring the fact that individuals could still be evicted when these eviction moratoria were in place represents a key omitted variable that helps to understand the impact of such a policy. Even the CDC's eviction moratoria, put in place on September 4th, 2020 nationally, did not prevent all evictions. Renters needed to qualify to seek eviction protections.
These findings may seem to undermine the need for such moratoria; however, when they were initially instituted it was not as a non-pharmaceutical intervention per se, but as a means to keep people from being homeless at a time of extreme economic uncertainty. Further, as the understanding of COVID-19 became more widespread across the country, it is likely that individuals took other precautions to mitigate the risk of catching the virus, and so eviction moratoria, kept in place as a means of reducing the spread of COVID-19, were simply not effective.
Our crucial requirement of actual eviction numbers leads to the biggest limitation in this study: the sample size would ideally be larger than 30 cities. We believe, however, that the geographic dispersion of cities in the dataset is representative of the country as a whole. Future work may examine each city individually using a synthetic control method to uncover heterogeneity across cities or regions. In terms of methodology, we attempt to address the many issues inherent in policy impact evaluation for the COVID-19 pandemic by applying a variety of econometric techniques to our research question. However, the setting of staggered treatment timing is a fast-growing area of research, and it may be worthwhile for future authors to apply a staggered version of penalized synthetic control (Abadie and L'hour, 2021) or synthetic difference-in-differences (Arkhangelsky et al., 2021), as they become available.
We stress that our findings do not mean the moratoria were poor policies overall, far from it. The moratoria were designed not only to keep the spread of COVID-19 low, but to
insulate individuals from losing their residence during this time of great upheaval. In that view the moratoria likely were quite effective. Indeed, An, Gabriel and Tzur-Ilan (2022) find that the eviction moratoria reduced the financial stress of households by allowing them to redirect financial resources towards immediate consumption needs. Evictions in general lead to negative physical and mental health outcomes (Desmond and Shollenberger, 2015; Benfer et al., 2021), a decreased likelihood of seeking medical attention (Collinson and Reed, 2018), and damage to the overall public health of children (Schwartz, 2020) - all of which are arguably even more problematic during a global pandemic.
|
2306.08524 | Heintze-Kobayashi-Wolf theory for negatively curved homogeneous Finsler
manifolds | In this paper, we generalize the Heintze-Kobayashi-Wolf theory to homogeneous
Finsler geometry, by proving two main theorems. First, any connected negatively
curved homogeneous Finsler manifold is isometric to a Lie group endowed with a
left invariant metric, and that Lie group must be simply connected and
solvable. Second, the requirement in Heintze's criterion is necessary and
sufficient for a real solvable Lie algebra to generate a Lie group which admits
negatively curved left invariant Finsler metrics. | Ming Xu | 2023-06-14T14:18:28Z | http://arxiv.org/abs/2306.08524v1 | # Heintze-Kobayashi-Wolf theory for negatively curved homogeneous Finsler manifolds
###### Abstract.
In this paper, we generalize the Heintze-Kobayashi-Wolf theory to homogeneous Finsler geometry, by proving two main theorems. First, any connected negatively curved homogeneous Finsler manifold is isometric to a Lie group endowed with a left invariant metric, and that Lie group must be simply connected and solvable. Second, the requirement in Heintze's criterion is necessary and sufficient for a real solvable Lie algebra to generate a Lie group which admits negatively curved left invariant Finsler metrics.
Mathematics Subject Classification(2010): 53B40, 53C30, 53C60
Keywords: homogeneous flag curvature formula, homogeneous Finsler manifold, Heintze-Kobayashi-Wolf theory, linear submersion, negative curvature, solvable Lie algebra
## 1. Introduction
Complete Riemannian manifolds with strictly negative sectional curvature (we will call them _negatively curved_ for simplicity) form a hot topic, which has been extensively studied [6, 13]. In homogeneous geometry, using Lie algebraic data to classify negatively curved homogeneous manifolds is a natural idea. However, unlike the positively curved case [7], there are too many smooth coset spaces admitting negative curvature, so that explicitly classifying them is impossible.
Fortunately, we have the following Heintze-Kobayashi-Wolf theory (_HKW theory_ in short) as a remedial measure:
1. By a result of J.A. Wolf [22] and its refinement by E. Heintze [14], any connected negatively curved homogeneous manifold is isometric to a Lie group, which is endowed with a left invariant metric. So we only need to discuss those negatively curved solvmanifolds.
2. By a theorem of S. Kobayashi [19], any connected negatively curved homogeneous Riemannian manifold must be simply connected. So the classification for negatively curved solvmanifolds is a Lie algebraic problem.
3. E. Heintze proved that a real solvable Lie algebra \(\mathfrak{g}\) generates a Lie group \(G\) which admits negatively curved left invariant Riemannian metrics if and only if \(\dim_{\mathbb{R}}\mathfrak{g}=\dim_{\mathbb{R}}[\mathfrak{g},\mathfrak{g}]+1\) and there exists \(y_{0}\in\mathfrak{g}\) such that \(\operatorname{ad}(y_{0}):[\mathfrak{g},\mathfrak{g}]\to[\mathfrak{g},\mathfrak{ g}]\) only has eigenvalues with positive real parts (see [15] or Theorem 2.7 below).
To summarize, for each negatively curved homogeneous Riemannian manifold, the HKW theory provides a relatively simple representative for it. Later, R. Azencott and E.N. Wilson proved similar results for homogeneous non-positive curvature [3, 4].
In recent years, researchers studied negative curvature in Finsler geometry, where the negatively curved property requires the flag curvature to be strictly negative everywhere. For example, Akbar-Zadeh's theorem tells us that any compact or homogeneous Finsler manifold with negative constant curvature must be Riemannian [1, 8]. Z. Shen proved that a compact Finsler manifold with negative flag curvature and constant S-curvature must be Riemannian [21]. Deng and his coworkers proved some rigidity results in the homogeneous context [11, 26].
It is natural to ask
**Question 1.1**.: _Can the HKW theory be generalized to homogeneous Finsler geometry?_
The progresses imply a positive answer to Question 1.1. For example, S. Deng and Z. Hou proved that any connected negatively curved homogeneous Finsler manifold is simply connected [10]. We proved that the criterion in [15] can be generalized to some special Finsler solvmanifolds (see Theorem 1.3 in [26]).
In this paper, we completely answer Question 1.1 by two main theorems.
**Theorem 1.2**.: _Any connected negatively curved homogeneous Finsler manifold is isometric to a Lie group endowed with a left invariant Finsler metric, and this Lie group must be simply connected and solvable._
**Theorem 1.3**.: _Let \(G\) be a connected simply connected solvable Lie group with \(\dim_{\mathbb{R}}G\geq 2\), and we apply the notations \(\mathfrak{g}=\operatorname{Lie}(G)\), \(\mathfrak{l}^{0}=[\mathfrak{g},\mathfrak{g}]\) and \(\mathfrak{l}^{1}=[\mathfrak{l}^{0},\mathfrak{l}^{0}]\). Then the following claims are equivalent:_
1. \(G\) _admits a negatively curved left invariant Finsler metric;_
2. \(\dim_{\mathbb{R}}\mathfrak{g}=\dim_{\mathbb{R}}\mathfrak{l}^{0}+1\) _and there exists_ \(y_{0}\in\mathfrak{g}\) _such that the real linear endomorphism_ \(\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})\) _on_ \(\mathfrak{l}^{0}/\mathfrak{l}^{1}\) _induced by_ \(\operatorname{ad}(y_{0})=[y_{0},\cdot]\) _only has eigenvalues with positive real parts;_
3. \(\dim_{\mathbb{R}}\mathfrak{g}=\dim_{\mathbb{R}}\mathfrak{l}^{0}+1\) _and there exists_ \(y_{0}\in\mathfrak{g}\) _such that the real linear endomorphism_ \(\operatorname{ad}_{\mathfrak{l}^{0}}(y_{0})=[y_{0},\cdot]\) _on_ \(\mathfrak{l}^{0}\) _only has eigenvalues with positive real parts._
To summarize, the HKW theory can still guide us study the negative curvature problem in homogeneous Finsler geometry.
The proof of Theorem 1.2 is very similar to that for its analog in Riemannian geometry, which is only sketched in the literature. To make this paper more self-contained, we supply the details. The proof of Theorem 1.3 is very different from that in [15], because most calculations there are no longer valid in the Finsler context. Here we use a homogeneous flag curvature formula (see Theorem 2.3 or Theorem 4.1 in [27]), which qualitatively indicates where to find a non-negative flag curvature, and we refine the argument which proves Theorem 1.3 in [26] with a linear submersion and a careful algebraic discussion.
This paper is organized as follows. In Section 2, we summarize some necessary knowledge in general and homogeneous Finsler geometry. In Section 3, we prove Theorem 1.2. In Section 4, we prove Theorem 1.3.
## 2. Preliminaries in general and homogeneous Finsler geometries
### Minkowski norm and linear submersion
A _Minkowski norm_ on a finite dimensional real vector space \(\mathbf{V}\) is a continuous function \(F:\mathbf{V}\to[0,+\infty)\) satisfying [5]:
1. Regularity: \(F|_{\mathbf{V}\backslash\{0\}}\) is a positive smooth function;
2. Positive \(1\)-homogeneity: \(F(\lambda y)=\lambda F(y)\), \(\forall\lambda\geq 0,y\in\mathbf{V}\);
3. Strong convexity: for each \(y\in\mathbf{V}\backslash\{0\}\), the _fundamental tensor_ \[g_{y}(u,v)=\frac{1}{2}\frac{\partial^{2}}{\partial s\partial t}|_{s=t=0}F^{2}(y+su+tv),\quad\forall u,v\in\mathbf{V},\] is an inner product on \(\mathbf{V}\).
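As a concrete (and entirely optional) numerical check of the strong convexity requirement, the sketch below approximates the fundamental tensor \(g_{y}\) as the Hessian of \(\frac{1}{2}F^{2}\) at \(y\) for a Randers-type norm on \(\mathbb{R}^{3}\), and verifies that it is positive definite at a few randomly chosen nonzero \(y\); the specific norm and the finite-difference step are our own choices, not part of the text.

```python
# Sketch: approximate the fundamental tensor g_y (the Hessian of (1/2)F^2 at y)
# by central finite differences and check that it is positive definite.
import numpy as np

b = np.array([0.3, 0.1, 0.0])          # |b| < 1, so F below is a Minkowski norm

def F(v):
    # A Randers norm: Euclidean norm plus a small linear term.
    return np.linalg.norm(v) + b @ v

def fundamental_tensor(y, h=1e-4):
    E = lambda v: 0.5 * F(v) ** 2
    n = len(y)
    g = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            g[i, j] = (E(y + h*I[i] + h*I[j]) - E(y + h*I[i] - h*I[j])
                       - E(y - h*I[i] + h*I[j]) + E(y - h*I[i] - h*I[j])) / (4*h*h)
    return g

rng = np.random.default_rng(0)
for _ in range(5):
    y = rng.normal(size=3)
    print(np.linalg.eigvalsh(fundamental_tensor(y)).min())   # should be > 0
```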
Let \(F\) and \(\overline{F}\) be the Minkowski norms on \(\mathbf{V}\) and \(\overline{\mathbf{V}}\) respectively. The surjective real linear map \(l:\mathbf{V}\to\overline{\mathbf{V}}\) is called a _linear submersion_ from \(F\) to \(\overline{F}\), if [2]
\[\inf\{F(v)|l(v)=\overline{v}\}=\overline{F}(\overline{v}),\quad\forall \overline{v}\in\overline{\mathbf{V}}.\]
For each \(F\) and \(l\) as mentioned above, there exists a unique \(\overline{F}\) on \(\overline{\mathbf{V}}\) such that \(l\) is a submersion between Minkowski norms. We call this \(\overline{F}\) the _Minkowski norm induced by submersion_ from \(F\) and \(l\). The following lemma is useful in later discussion.
Suppose \(l:(\mathbf{V},F)\to(\overline{\mathbf{V}},\overline{F})\) is a linear submersion between Minkowski norms. Then for each \(\overline{u}\in\overline{\mathbf{V}}\backslash\{0\}\), there exists a unique \(u\in l^{-1}(\overline{u})\) satisfying \(F(u)=\overline{F}(\overline{u})\). Denote by \(g_{u}(\cdot,\cdot)\) and \(\overline{g}_{\overline{u}}(\cdot,\cdot)\) the fundamental tensors for \(F\) and \(\overline{F}\) respectively, then this \(u\in l^{-1}(\overline{u})\), which is
called the _horizontal lifting_ of \(\overline{u}\), can be alternatively determined by \(g_{u}(u,\ker l)=0\). Furthermore, we have the following lemma, which is a reformulation of Proposition 2.2 in [2].
**Lemma 2.1**.: _The linear map \(l\) induces a linear isometry from the inner product \(g_{u}(\cdot,\cdot)\) on the \(g_{u}(\cdot,\cdot)\)-orthogonal complement of \(\ker l\) to the inner product \(\overline{g}_{\overline{u}}(\cdot,\cdot)\) on \(\overline{\mathbf{V}}\)._
### Finsler metric and flag curvature
A _Finsler metric_ on a smooth manifold \(M\) is a continuous function \(F:TM\to[0,+\infty)\), such that \(F|_{TM\setminus 0}\) is a positive smooth function and \(F(x,\cdot)=F|_{T_{x}M}\) for each \(x\in M\) is a Minkowski norm on \(T_{x}M\)[20].
At any point \(x\) in a Finsler manifold \((M,F)\), the _flag curvature_ for the vector \(y\in T_{x}M\backslash\{0\}\) (i.e., the _flag pole_) and the tangent plane \(\mathbf{P}=\mathrm{span}^{\mathbb{R}}\{y,u\}\subset T_{x}M\) (i.e., the _flag_) is defined as
\[K(x,y,\mathbf{P})=\frac{g_{y}(R_{y}(u),u)}{g_{y}(y,y)g_{y}(u,u)-g_{y}(y,u)^{2}},\]
in which \(R_{y}:T_{x}M\to T_{x}M\) is the Riemann curvature operator. When \(F\) is Riemannian, the flag curvature coincides with the sectional curvature, which is independent of the choice of \(y\). See [20] for the details.
### Homogeneous Finsler manifold and a flag curvature formula
A Finsler manifold \((M,F)\) is called _homogeneous_ if its isometry group \(I(M,F)\) acts transitively on \(M\)[8]. Since \(I(M,F)\) is a Lie transformation group [9], we can present the homogeneous manifold \(M\) as \(M=G/H\). Here \(G\) is a Lie subgroup of \(I(M,F)\) which acts transitively on \(M\), and \(H\) is the isotropy subgroup at the origin \(o=eH\in G/H=M\). When \(M\) is connected, we may require \(G\) to be connected as well.
A decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\) with \(\mathfrak{g}=\mathrm{Lie}(G)\) and \(\mathfrak{h}=\mathrm{Lie}(H)\) is called a _reductive_ decomposition for \((G/H,F)\) if it is \(\mathrm{Ad}(H)\)-invariant (in the Lie algebraic level, it implies \([\mathfrak{h},\mathfrak{m}]\subset\mathfrak{m}\)). The following lemma provides a canonical reductive decomposition.
**Lemma 2.2**.: _The orthogonal decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\) with respect to the Killing form of \(\mathfrak{g}\) is a reductive decomposition, such that the maximal nilpotent ideal of \(\mathfrak{g}\) is contained in \(\mathfrak{m}\)._
The Riemannian analog of Lemma 2.2 can be found in [28]. Its proof can be naturally transferred to the Finsler context.
The subspace \(\mathfrak{m}\) in a reductive decomposition for \((G/H,F)\) can be canonically identified with the tangent space \(T_{o}(G/H)\), such that the \(\mathrm{Ad}(H)\)-action on \(\mathfrak{m}\) coincides with the isotropic \(H\)-action on \(T_{o}(G/H)\). Then the \(G\)-invariant Finsler metric \(F\) on \(G/H\) corresponds one-to-one to its restriction to \(T_{o}(G/H)\), which can be an arbitrary \(\mathrm{Ad}(H)\)-invariant Minkowski norm on \(\mathfrak{m}\). For simplicity, we still use \(F\) and \(g_{y}(\cdot,\cdot)\) to denote this norm and its fundamental tensor respectively. See [8] for a more detailed discussion of homogeneous Finsler geometry.
By homogeneity, we only need to discuss the curvatures of a homogeneous Finsler manifold \((G/H,F)\) at the origin. The following homogeneous flag curvature formula (see Theorem 4.1 in [27]) played an important role when we classified positively curved homogeneous Finsler manifolds [12].
**Theorem 2.3**.: _Let \((G/H,F)\) be a homogeneous Finsler manifold with the reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\). Suppose that \(u\) and \(v\) are a commuting pair of linearly independent vectors in \(\mathfrak{m}\) and assume \(g_{u}(u,[u,\mathfrak{m}]_{\mathfrak{m}})=0\). Then for \(\mathbf{P}=\mathrm{span}\{u,v\}\), we have_
\[K(o,u,\mathbf{P})=\frac{g_{u}(U(u,v),U(u,v))}{g_{u}(u,u)g_{u}(v,v)-g_{u}(u,v)^{2}},\]
_where \(U(u,v)\in\mathfrak{m}\) is determined by_
\[2g_{u}(U(u,v),w)=g_{u}([u,w]_{\mathfrak{m}},v)+g_{u}(u,[v,w]_{\mathfrak{m}}), \quad\forall w\in\mathfrak{m}.\]
When a Lie group \(G\) is viewed as the homogeneous manifold \(G/H=G/\{e\}\), the corresponding homogeneous Finsler metric is called _left invariant_. In this situation, the reductive decomposition is unique, i.e., \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}=0+\mathfrak{g}\), so we have the following immediate corollary of Theorem 2.3.
**Lemma 2.4**.: _Let \(F\) be a left invariant Finsler metric on the Lie group \(G\). Suppose that there exist a commuting pair of linearly independent vectors \(u\) and \(v\) in \(\mathfrak{g}=\mathrm{Lie}(G)\), which satisfies \(g_{u}(u,[\mathfrak{g},u])=0\), then \((G,F)\) is not negatively curved._
See [16, 17, 23, 24, 25] for more homogeneous curvature formulae in homogeneous Finsler geometry and homogeneous spray geometry.
### Negatively curved homogeneous Finsler manifold
Let \((M,F)\) be a connected negatively curved homogeneous Finsler manifold. Then it is complete. The main theorem in [10] tells us
**Theorem 2.5**.: _Any connected negatively curved homogeneous Finsler manifold must be simply connected._
By Cartan-Hadamard Theorem [5], \(M\) is homeomorphic to an Euclidean space, which implies
**Lemma 2.6**.: _A connected negatively curved homogeneous Finsler manifold \((M,F)\) can be presented as \(M=G/H\), where \(G\) is the connected isometry group \(I_{0}(M,F)\), and \(H\) is a maximal compact subgroup of \(G\)._
The proof of Lemma 2.6 is contained in the argument proving Theorem 1.1 in [26].
When \(M\) is a connected solvable Lie group and \(F\) is a left invariant Riemannian metric, E. Heintze proved [15]
**Theorem 2.7**.: _Let \(G\) be a connected simply connected Lie group with a solvable Lie algebra \(\mathfrak{g}\), then it admits a negatively curved left invariant Riemannian metric if and only if \(\dim_{\mathbb{R}}\mathfrak{g}=\dim_{\mathbb{R}}[\mathfrak{g},\mathfrak{g}]+1\) and there exists \(y_{0}\in\mathfrak{g}\) such that \(\mathrm{ad}(y_{0})=[y_{0},\cdot]:[\mathfrak{g},\mathfrak{g}]\to[\mathfrak{g}, \mathfrak{g}]\) only has eigenvalues with positive real parts._
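Heintze's criterion is straightforward to verify on examples once the Lie algebra is specified by structure constants. The sketch below is our own illustration (not from either paper): it encodes the solvable algebra with basis \(y_{0},e_{1},e_{2}\) and nonzero brackets \([y_{0},e_{1}]=e_{1}\), \([y_{0},e_{2}]=2e_{2}\), computes \([\mathfrak{g},\mathfrak{g}]\) numerically, and checks both conditions of Theorem 2.7.

```python
# Sketch: numerically check Heintze's criterion for a solvable Lie algebra
# given by structure constants c[i, j, :] = bracket of basis vectors b_i, b_j.
import numpy as np

n = 3                          # basis: b_0 = y_0, b_1 = e_1, b_2 = e_2
c = np.zeros((n, n, n))
c[0, 1, 1] = 1.0               # [y_0, e_1] = e_1
c[0, 2, 2] = 2.0               # [y_0, e_2] = 2 e_2
c = c - c.transpose(1, 0, 2)   # antisymmetry gives [e_1, y_0], [e_2, y_0]

def bracket(u, v):
    return np.einsum('i,j,ijk->k', u, v, c)

# An orthonormal basis of the derived algebra [g, g].
all_brackets = np.array([bracket(np.eye(n)[i], np.eye(n)[j])
                         for i in range(n) for j in range(n)])
rank = np.linalg.matrix_rank(all_brackets)
basis_l0 = np.linalg.svd(all_brackets)[2][:rank]

print("dim g =", n, " dim [g,g] =", rank)        # expect 3 and 2

# Matrix of ad(y_0) restricted to [g, g], written in that basis.
y0 = np.eye(n)[0]
ad_y0 = np.column_stack([basis_l0 @ bracket(y0, w) for w in basis_l0])
print("eigenvalues of ad(y_0)|[g,g]:", np.linalg.eigvals(ad_y0))   # expect 1 and 2
```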
## 3. Proof of Theorem 1.2
Suppose that \((M,F)\) is a connected negatively curved homogeneous Finsler manifold, then Lemma 2.6 provides \(M=G/H\), where \(H\) is a maximal compact subgroup of \(G=I_{0}(M,F)\).
**Lemma 3.1**.: \(G\) _has a trivial center._
**Proof.** Assume conversely that there exists \(\rho\in C(G)\) which is not the identity map. Then \(\rho\) generates an infinite discrete subgroup \(\Gamma\subset C(G)\), which acts freely and isometrically on \((M,F)\). Indeed, each \(\rho^{i}\) is a Clifford-Wolf translation and each \(\Gamma\)-orbit is contained in a geodesic on \((M,F)\). On the quotient manifold \(\overline{M}=M/\Gamma\), \(F\) induces a metric \(\overline{F}\), such that the covering map \(\pi:M\to\overline{M}\) is locally isometric everywhere. So \((\overline{M},\overline{F})\) is also negatively curved. The \(G\)-action on \(\overline{M}\) is transitive and isometric, so \((\overline{M},\overline{F})\) is a homogeneous Finsler manifold. Because \(\overline{M}\) is not simply connected, we get a contradiction to Theorem 2.5.
Let \(\mathfrak{g}=\mathfrak{s}+\mathfrak{r}\) be the Levi decomposition for \(\mathfrak{g}=\mathrm{Lie}(G)\), where \(\mathfrak{r}\) is the maximal solvable ideal and \(\mathfrak{s}\) is a semi simple subalgebra of \(\mathfrak{g}\) respectively. Let \(\mathfrak{k}\) be a maximally compactly imbedded subalgebra of \(\mathfrak{s}\), i.e., \(\mathfrak{k}\) generates a maximal compact connected subgroup of \(\mathrm{Int}(\mathfrak{s})\).
**Lemma 3.2**.: \(\mathfrak{k}\) _generates a compact connected subgroup \(K\) in \(G\)._
Notice that a compactly imbedded subalgebra is compact, but generally speaking, it may not generate a compact subgroup. Here the negative curvature condition is crucial.
**Proof.** We have a Lie algebra direct sum decomposition \(\mathfrak{s}=\oplus_{i=1}^{m}\mathfrak{s}_{i}\), in which each \(\mathfrak{s}_{i}\) is a simple ideal. Correspondingly, \(\mathfrak{k}=\oplus_{i=1}^{m}\mathfrak{k}_{i}\), where each \(\mathfrak{k}_{i}\) is a maximally compact subalgebra of \(\mathfrak{s}_{i}\). Denote by \(K_{i}\) the connected Lie subgroup that \(\mathfrak{k}_{i}\) generates. Then we have \(K=K_{1}\cdots K_{m}\). To prove Lemma 3.2, we only need to verify that each \(K_{i}\) is compact. There are two cases to consider.
**Case 1**: \(\mathfrak{k}_{i}\) is not semi simple. In this case, \(\mathfrak{k}_{i}=\mathfrak{k}_{i}^{\prime}\oplus\mathfrak{c}(\mathfrak{k}_{i})\), where \(\mathfrak{k}_{i}^{\prime}=[\mathfrak{k}_{i},\mathfrak{k}_{i}]\) is semi simple and \(\dim_{\mathbb{R}}\mathfrak{c}(\mathfrak{k}_{i})=1\). Meanwhile, \(\mathfrak{s}_{i}\) is a simple ideal of non-compact type, and its compact dual
is compact simple and has the same rank as \(\mathfrak{k}_{i}\). Indeed, \((\mathfrak{s}_{i}^{\prime},\mathfrak{k}_{i})\) is an irreducible Hermitian symmetric pair. Let \(\mathfrak{t}_{i}\) be a Cartan subalgebra of \(\mathfrak{k}_{i}\), then it is also a Cartan subalgebra of \(\mathfrak{s}_{i}^{\prime}\), and it contains \(\mathfrak{c}(\mathfrak{k}_{i})\). The \(\operatorname{ad}_{\mathfrak{g}\otimes\mathbb{C}}(\mathfrak{s}_{i}^{\prime})\)-action on \(\mathfrak{g}\otimes\mathbb{C}\) only has purely imaginary weights in \(\sqrt{-1}\mathfrak{t}_{i}^{*}\). In particular, for any \(u\in\mathfrak{c}(\mathfrak{k}_{i})\), the semi simple complex linear endomorphism \(\operatorname{ad}_{\mathfrak{g}\otimes\mathbb{C}}(u)\) on \(\mathfrak{g}\otimes\mathbb{C}\) only has purely imaginary eigenvalues. Since the root system of \(\mathfrak{k}_{i}\) is a subset of that of \(\mathfrak{s}_{i}^{\prime}\), and \(\mathfrak{c}(\mathfrak{k}_{i})\) is the common kernel for all roots of \(\mathfrak{k}_{i}\), we can find a suitable \(u\in\mathfrak{c}(\mathfrak{k}_{i})\backslash\{0\}\), such that all eigenvalues of \(\operatorname{ad}(u)\) are contained in \(2\mathbb{Z}\pi\sqrt{-1}\). Then \(\operatorname{Ad}(\exp u)=e^{\operatorname{ad}(u)}\) is the identity map on \(\mathfrak{g}\). Since \(G\) is connected, \(\exp u\in C(G)\), and it must be \(e\) by Lemma 3.1.
To summarize, the above argument indicates that \(\mathfrak{c}(\mathfrak{k}_{i})\) generates a compact Lie subgroup \(S^{1}\). Meanwhile, the compact semi simple \(\mathfrak{k}_{i}^{\prime}=[\mathfrak{k}_{i},\mathfrak{k}_{i}]\) generates a compact connected \(K_{i}^{\prime}\) in \(G\). So \(K_{i}=K_{i}^{\prime}S^{1}\) is compact. **Case 2**: \(\mathfrak{k}_{i}\) is semi simple. Then \(\mathfrak{k}_{i}\) is a compact semi simple subalgebra of \(\mathfrak{g}\), and the same argument as for \(K_{i}^{\prime}\) above shows that the subgroup \(K_{i}\) it generates is compact. The proof of Lemma 3.2 is finished.
**Lemma 3.3**.: _There exists a connected solvable subgroup \(G^{\prime}\) of \(G\) acting transitively on \(M\)._
**Proof.** By Lemma 2.6, \(H\) is a maximal compact subgroup of \(G\). Since any compact subgroup of \(G\) is contained by a maximal one, and all maximal compact subgroups of \(G\) are conjugate to each other (see Theorem 14.1.3 in [18]), we may assume without loss of generality that \(H\) contains the compact subgroup \(K\) in Lemma 3.2, i.e., in the Lie algebraic level, \(\mathfrak{k}\subset\mathfrak{h}\).
For each \(\mathfrak{s}_{i}\), \(1\leq i\leq m\), in the proof of Lemma 3.2, we have an Iwasawa decomposition \(\mathfrak{s}_{i}=\mathfrak{k}_{i}+\mathfrak{n}_{i}+\mathfrak{a}_{i}\) (when \(\mathfrak{s}_{i}\) is compact, \(\mathfrak{k}_{i}=\mathfrak{s}_{i}\) and \(\mathfrak{n}_{i}=\mathfrak{a}_{i}=0\)). Then \(\mathfrak{g}^{\prime}=\oplus_{i=1}^{m}(\mathfrak{n}_{i}+\mathfrak{a}_{i})+ \mathfrak{r}\) is a solvable subalgebra of \(\mathfrak{g}\). Denote by \(G^{\prime}\) the solvable Lie subgroup \(\mathfrak{g}^{\prime}\) generates in \(G\). Since \(\mathfrak{k}\subset\mathfrak{h}\), i.e, \(\mathfrak{g}^{\prime}+\mathfrak{h}=\mathfrak{g}\), so \(G^{\prime}\cdot o\) is an open submanifold in \(G/H\). Since both \((G/H,F)\) and \((G^{\prime}\cdot o,F|_{G^{\prime}\cdot o})\) are homogeneous, they are both complete. So we have \(G^{\prime}\cdot o=G/H=M\).
**Lemma 3.4**.: _There exists a connected solvable subgroup \(G^{\prime\prime}\) of \(G\), which acts freely and transitively on \(M\)._
**Proof.** Let \(G^{\prime}\) be the connected solvable Lie subgroup of \(G\) given by Lemma 3.3. Then \(M\) can be presented as \(M=G^{\prime}/H^{\prime}\). Let \(\mathfrak{g}^{\prime}=\operatorname{Lie}(G^{\prime})\) and \(\mathfrak{h}^{\prime}=\operatorname{Lie}(H^{\prime})\). Then Lemma 2.2 tells that \(\mathfrak{h}^{\prime}\cap[\mathfrak{g}^{\prime},\mathfrak{g}^{\prime}]=0\). This observation enables us to find a linear complement \(\mathfrak{g}^{\prime\prime}\) of \(\mathfrak{h}^{\prime}\) in \(\mathfrak{g}^{\prime}\) which contains \([\mathfrak{g}^{\prime},\mathfrak{g}^{\prime}]\). Indeed, \(\mathfrak{g}^{\prime\prime}\) is a Lie subalgebra of \(\mathfrak{g}^{\prime}\), and it generates a connected solvable Lie subgroup \(G^{\prime\prime}\) in \(G^{\prime}\). By similar argument as in the proof of Lemma 3.3, we get \(G^{\prime\prime}\cdot o=M\). Because \(\dim_{\mathbb{R}}G^{\prime\prime}=\dim_{\mathbb{R}}M\), the map \(\pi:G^{\prime\prime}\to M\), \(\pi(g)=g\cdot o\) is a smooth covering map. Since \(M\) is connected and simply connected, this covering map \(\pi\) is a diffeomorphism. So the \(G^{\prime\prime}\)-action on \((M,F)\) is free and transitive, which proves Lemma 3.4.
Now we finish the proof of Theorem 1.2.
**Proof of Theorem 1.2.** Lemma 3.4 provides a connected solvable Lie subgroup \(G^{\prime\prime}\subset I_{0}(M,F)\). The map \(\pi:G^{\prime\prime}\to M\), \(\pi(g)=g\cdot o\), is a diffeomorphism. Furthermore, it is equivariant with respect to left translations on \(G^{\prime\prime}\) and the natural action of \(G^{\prime\prime}\subset I(M,F)\) on \(M\). So \(\pi^{*}F\) is a left invariant metric on \(G^{\prime\prime}\), i.e., \(\pi\) is an isometry between \((M,F)\) and \((G^{\prime\prime},\pi^{*}F)\). The first statement in Theorem 1.2 is proved.
Suppose that the connected Lie group \(G\) admits a negatively curved left invariant Finsler metric. By Theorem 2.5, \(G\) must be simply connected. We prove the solvability of \(G\) by contradiction. Assume \(G\) is not solvable; then in the Levi decomposition \(\mathfrak{g}=\mathfrak{s}+\mathfrak{r}\) for \(\mathfrak{g}=\operatorname{Lie}(G)\), the semi simple subalgebra \(\mathfrak{s}\) has a nonzero maximal compactly imbedded subalgebra \(\mathfrak{k}\). By Lemma 3.2, \(\mathfrak{k}\) generates a compact subgroup \(K\) in \(G\), with \(\dim K>0\). The left translation action of \(K\) on \(G\) is contained in a maximal compact subgroup of \(I_{0}(G,F)\), so Lemma 2.6 indicates that the left translations of \(K\) fix some element of \(G\). This is impossible because \(K\neq\{e\}\). The second statement in Theorem 1.2 is proved.
## 4. Proof of Theorem 1.3
### Some notations and preparation lemmas
Throughout this section, we apply the following notations and assumptions. Let \(G\) be a connected simply connected solvable Lie
group, then \(\mathfrak{g}=\mathrm{Lie}(G)\) is solvable. We denote by
\[\mathfrak{l}^{0}=[\mathfrak{g},\mathfrak{g}],\ \mathfrak{l}^{1}=[\mathfrak{l}^{0}, \mathfrak{l}^{0}],\ \cdots,\ \mathfrak{l}^{i}=[\mathfrak{l}^{0},\mathfrak{l}^{i-1}],\cdots\]
the descending sequence of \([\mathfrak{g},\mathfrak{g}]\). Because \(\mathfrak{g}\) is solvable, \(\mathfrak{l}^{0}\) is nilpotent. We may assume it is \(k\)-step nilpotent, i.e., \(\mathfrak{l}^{k}=0\) and \(\mathfrak{c}(\mathfrak{l}^{0})\supset\mathfrak{l}^{k-1}\neq 0\). For any \(0\leq i<j\) and \(y_{0}\in\mathfrak{g}\), \(\mathrm{ad}(y_{0})\) induces a real linear endomorphism \(\mathrm{ad}_{\mathfrak{l}^{i}/\mathfrak{l}^{j}}(y_{0})\) on \(\mathfrak{l}^{i}/\mathfrak{l}^{j}\). We denote by \(\mathrm{pr}_{j}\) the linear projection from \(\mathfrak{l}^{0}\) to \(\mathfrak{l}^{0}/\mathfrak{l}^{j}\). Here are some obvious facts:
\[\mathrm{ad}_{\mathfrak{l}^{i}/\mathfrak{l}^{j}}(y_{0})(\mathrm{pr}_{j}(v))= \mathrm{pr}_{j}([y_{0},v]),\ e^{\mathrm{ad}_{\mathfrak{l}^{i}/\mathfrak{l}^{j} }(y_{0})}(\mathrm{pr}_{j}(v))=\mathrm{pr}_{j}(e^{\mathrm{ad}(y_{0})}(v)),\ \forall v\in \mathfrak{l}^{i}. \tag{4.1}\]
For simplicity, we use the same notations for real linear maps to denote their complexifications (i.e., their complex linear extension maps). For example, (4.1) is still valid when we choose \(v\) from \(\mathfrak{l}^{i}\otimes\mathbb{C}\) and view the projection image \(\mathrm{pr}_{j}(v)\) as a vector in \((\mathfrak{l}^{i}/\mathfrak{l}^{j})\otimes\mathbb{C}=\mathfrak{l}^{i}\otimes \mathbb{C}/\mathfrak{l}^{j}\otimes\mathbb{C}\).
**Lemma 4.1**.: _Let \(A\) be a real linear endomorphism on a finite dimensional real vector space \(\mathbf{V}\). Then the following statements are equivalent:_
1. \(A\) _only has eigenvalues with positive real parts;_
2. _For each_ \(v\in(\mathbf{V}\otimes\mathbb{C})\backslash\{0\}\)_,_ \(\lim_{t\to+\infty}e^{tA}(v)=\infty\)_;_
3. _For each_ \(v\in\mathbf{V}\backslash\{0\}\)_, there exits a sequence_ \(t_{n}\in\mathbb{R}\) _satisfying_ \(\lim_{n\to\infty}t_{n}=+\infty\) _and_ \(\lim_{n\to\infty}e^{t_{n}A}(v)=\infty\)_._
**Proof.** First, we prove (1)\(\Rightarrow\)(2). Assume that \(A\) only has eigenvalues with positive real parts. Its complex linear extension on \(\mathbf{V}\otimes\mathbb{C}\) shares the same eigenvalues as \(A\), which only have positive real parts. Notice that \(\mathbf{V}\otimes\mathbb{C}\) is the linear direct sum of
\[(\mathbf{V}\otimes\mathbb{C})_{\lambda}=\{v\in\mathbf{V}\otimes\mathbb{C}| \exists m>>0,\ \mathrm{s.t.}\ (\lambda I-A)^{m}(v)=0\}\]
for all eigenvalues \(\lambda\) of \(A\), and \(e^{tA}\) preserves each \((\mathbf{V}\otimes\mathbb{C})_{\lambda}\). So we only need to verify \(\lim_{t\to+\infty}e^{tA}(v)=\infty\) for each \(v\in(\mathbf{V}\otimes\mathbb{C})_{\lambda}\backslash\{0\}\). Using the Jordan form of \(A\), we can find \(m\in\mathbb{N}\cup\{0\}\) and \(v_{0},v_{1},\cdots,v_{m}\in(\mathbf{V}\otimes\mathbb{C})_{\lambda}\), satisfying \(v_{0}=v\), \(v_{m}\neq 0\) and
\[e^{tA}(v)=e^{\lambda t}(v_{0}+tv_{1}+\cdots+t^{m}v_{m}),\quad\forall t\in \mathbb{R}. \tag{4.2}\]
By the assumptions \(v_{m}\neq 0\) and \(\mathrm{Re}\lambda>0\), we get \(\lim_{t\to+\infty}e^{tA}(v)=\infty\) immediately. This ends the proof for (1)\(\Rightarrow\)(2).
Next, (2)\(\Rightarrow\)(3) is a trivial fact.
Finally, we prove (3)\(\Rightarrow\)(1). Assume conversely that \(A\) has an eigenvalue \(\lambda\) with \(\mathrm{Re}\lambda\leq 0\). If \(\lambda\in\mathbb{R}\), \(A\) has a nonzero eigenvector \(v\in\mathbf{V}\) satisfying \(A(v)=\lambda v\). Then we have \(\lim_{t\to+\infty}e^{tA}(v)=\lim_{t\to+\infty}e^{\lambda t}v=0\) or \(v\), which is a contradiction to (3). If \(\lambda=a+b\sqrt{-1}\) with \(a\leq 0\) and \(b\in\mathbb{R}\backslash\{0\}\), then we can find a linearly independent pair \(u,v\in\mathbf{V}\), such that \(A(u)=au+bv\) and \(A(v)=-bu+av\). Then \(e^{tA}(u)=e^{at}(\cos(bt)u+\sin(bt)v)\), which is periodic when \(a=0\) and converges to \(0\) when \(a<0\). In each situation, we can get a contradiction to (3). This ends the proof of Lemma 4.1.
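For readers who prefer a quick sanity check, the short numerical sketch below (our own, using `scipy.linalg.expm`) illustrates statement (1)\(\Rightarrow\)(2) of Lemma 4.1 on a real matrix with eigenvalues \(0.5\pm\sqrt{-1}\) that is not diagonalizable: \(\|e^{tA}v\|\) still tends to infinity as \(t\to+\infty\).

```python
# Sketch: if all eigenvalues of A have positive real part, ||e^{tA} v|| -> infinity.
import numpy as np
from scipy.linalg import expm

# Real 4x4 matrix built from a Jordan-type block for the eigenvalue 0.5 + i;
# its eigenvalues are 0.5 +/- i (each twice) and it is not diagonalizable.
A = np.array([[0.5, -1.0, 1.0, 0.0],
              [1.0,  0.5, 0.0, 1.0],
              [0.0,  0.0, 0.5, -1.0],
              [0.0,  0.0, 1.0, 0.5]])
print("eigenvalues:", np.round(np.linalg.eigvals(A), 6))

v = np.array([0.0, 0.0, 1.0, 0.0])
for t in [0, 5, 10, 20, 40]:
    print(t, np.linalg.norm(expm(t * A) @ v))   # grows roughly like t * e^{0.5 t}
```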
**Lemma 4.2**.: _Let \(\mathbf{V}\) be a finite dimensional real or complex vector space and \(r\) a positive integer. Suppose that we have \(k_{i}\in\mathbb{N}\cup\{0\}\), \(\xi_{i}=a_{i}+b_{i}\sqrt{-1}\in\mathbb{C}\) with \(a_{i}>0\) and \(b_{i}\in\mathbb{R}\), and \(w_{i}\in\mathbf{V}\backslash\{0\}\), \(\forall 1\leq i\leq r\). Assume that the pairs in \(\{(k_{i},\xi_{i}),\forall 1\leq i\leq r\}\) are all distinct. Then there exists a sequence \(t_{n}\in\mathbb{R}\) satisfying \(\lim_{n\to+\infty}t_{n}=+\infty\) and \(\lim_{n\to+\infty}f(t_{n})=\infty\), where \(f(t)\) is the \(\mathbf{V}\)-valued function \(f(t)=\sum_{i=1}^{r}t^{k_{i}}e^{\xi_{i}t}w_{i}\)._
**Proof.** Suppose that \(\{1,\cdots,s\}\) is the set of all indices \(i\in\{1,\cdots,r\}\) which satisfies
\[a_{i}=\max\{a_{j},\forall 1\leq j\leq r\}\quad\text{and}\quad k_{i}=\max\{k_{j}|a_ {j}=\max\{a_{k},\forall 1\leq k\leq r\}\}.\]
Because \(\{(k_{i},\xi_{i}),\forall 1\leq i\leq r\}\) are all distinct, \(\{b_{1},\cdots,b_{s}\}\) are all distinct. Direct calculation shows
\[(t^{k_{1}}e^{\xi_{1}t})^{-1}f(t)=w_{1}+\sum_{i=2}^{s}e^{(b_{i}-b_{1})t\sqrt{-1}}w_ {i}+o(1), \tag{4.3}\]
where \(o(1)\) is with respect to \(t\to+\infty\).
Now we prove Lemma 4.2 by contradiction. Assume conversely that it is not correct, then \(f(t)\) is bounded for \(t\in[0,+\infty)\), and the right side of (4.3) converges to \(0\) when \(t\) goes to \(+\infty\). Then we must have \(s\geq 2\) and
\[\lim_{t\to+\infty}\sum_{i=2}^{s}e^{(b_{i}-b_{1})t\sqrt{-1}}w_{i}=-w_{1}\neq 0.\]
It implies
\[\lim_{C\to+\infty}\int_{0}^{C}\sum_{i=2}^{s}e^{(b_{i}-b_{1})t\sqrt{-1}}w_{i}dt =\infty. \tag{4.4}\]
However, because each \(e^{(b_{i}-b_{1})t\sqrt{-1}}\) has zero integral over each of its periods, we have the estimate
\[||\int_{0}^{C}\sum_{i=2}^{s}e^{(b_{i}-b_{1})t\sqrt{-1}}w_{i}dt||\leq\sum_{i=2} ^{s}\frac{2\pi||w_{i}||}{|b_{i}-b_{1}|}<+\infty, \tag{4.5}\]
in which \(||\cdot||\) is any arbitrary norm on \(\mathbf{V}\). The contradiction between (4.4) and (4.5) ends the proof of Lemma 4.2.
### Proof of (1)\(\Rightarrow\)(2) in Theorem 1.3
Assume that there exists a negatively curved left invariant Finsler metric \(F\) on \(G\). The first statement in (2) of Theorem 1.3 has been proved in Proposition 4.1 in [26]. To be self-contained, we recall its proof here. Since \(\mathfrak{g}\) is solvable, \(\dim_{\mathbb{R}}\mathfrak{l}^{0}<\dim_{\mathbb{R}}\mathfrak{g}\). So we can find \(u\in\mathfrak{g}\backslash\mathfrak{l}^{0}\) satisfying \(g_{u}(u,\mathfrak{l}^{0})=0\). Obviously, we have \(\dim_{\mathbb{R}}[u,\mathfrak{g}]\leq\dim_{\mathbb{R}}\mathfrak{l}^{0}\leq\dim_{\mathbb{R}}\mathfrak{g}-1\). To prove the first statement in (2) of Theorem 1.3, we only need to verify \(\dim_{\mathbb{R}}[u,\mathfrak{g}]=\dim_{\mathbb{R}}\mathfrak{g}-1\). Assume conversely that it is not true; then the kernel of \(\operatorname{ad}(u):\mathfrak{g}\to[u,\mathfrak{g}]\) contains a vector \(v\in\mathfrak{g}\backslash\mathbb{R}u\), i.e., \(u\) and \(v\) are a linearly independent commuting pair. By Lemma 2.4, \(F\) can not be negatively curved. This is a contradiction.
Next, we prove the second statement in (2) of Theorem 1.3. If \(\dim_{\mathbb{R}}\mathfrak{g}=2\), it can not be Abelian, otherwise it has constant zero curvature. Then we can find a basis \(\{e_{1},e_{2}\}\) for \(\mathfrak{g}\), such that \([e_{1},e_{2}]=e_{2}\). Choosing \(y_{0}=e_{1}\), then the second statement in (2) is proved. In the discussion below, we assume \(\dim_{\mathbb{R}}\mathfrak{g}\geq 3\).
By linear submersion, the projection map \(\operatorname{pr}_{1}:\mathfrak{l}^{0}\to\mathfrak{l}^{0}/\mathfrak{l}^{1}\) and the Minkowski norm \(F|_{\mathfrak{l}^{0}}\) on \(\mathfrak{l}^{0}\) induce a Minkowski norm \(\overline{F}\) on \(\mathfrak{l}^{0}/\mathfrak{l}^{1}\). We denote by \(\overline{g}_{\cdot}(\cdot,\cdot)\) the fundamental tensor of \(\overline{F}\). The fundamental tensor of \(F|_{\mathfrak{l}^{0}}\) coincides with that of \(F\), i.e., \(g_{\cdot}(\cdot,\cdot)\), except that all three inputs must be from \(\mathfrak{l}^{0}\).
**Lemma 4.3**.: _Choose any \(y_{0}\in\mathfrak{g}\backslash\mathfrak{l}^{0}\), we have_
\[\overline{g}_{\overline{u}}(\overline{u},\operatorname{ad}_{\mathfrak{l}^{0}/ \mathfrak{l}^{1}}(y_{0})\overline{u})\neq 0,\quad\forall\overline{u}\in(\mathfrak{l}^{0}/ \mathfrak{l}^{1})\backslash\{0\}.\]
**Proof.** We prove Lemma 4.3 by contradiction. Assume conversely that
\[\overline{g}_{\overline{u}}(\overline{u},\operatorname{ad}_{\mathfrak{l}^{0}/ \mathfrak{l}^{1}}(y_{0})\overline{u})=0\text{ for some }\overline{u}\in(\mathfrak{l}^{0}/ \mathfrak{l}^{1})\backslash\{0\}. \tag{4.6}\]
Let \(u\) be the horizontal lifting of \(\overline{u}\), i.e., \(u\in\mathfrak{l}^{0}\backslash\mathfrak{l}^{1}\) satisfies \(\operatorname{pr}_{1}(u)=\overline{u}\) and \(g_{u}(u,\mathfrak{l}^{1})=0\). In \([y_{0},u]+\mathfrak{l}^{1}\), there exists a unique \(u^{\prime}\) satisfying \(g_{u}(u^{\prime},\mathfrak{l}^{1})=0\). So the assumption (4.6) implies
\[g_{u}(u,[y_{0},u])=g_{u}(u,[y_{0},u]+\mathfrak{l}^{1})=g_{u}(u,u^{\prime})= \overline{g}_{\overline{u}}(\overline{u},\operatorname{ad}_{\mathfrak{l}^{0}/ \mathfrak{l}^{1}}(y_{0})\overline{u})=0, \tag{4.7}\]
where we have applied Lemma 2.1 to get the third equality.
The condition \(g_{u}(u,\mathfrak{l}^{1})=0\) implies
\[g_{u}(u,[\mathfrak{l}^{0},u])=g_{u}(u,\mathfrak{l}^{1})=0. \tag{4.8}\]
The first statement of (2), which has been proved, indicates that \(\mathfrak{g}=\mathfrak{l}^{0}+\mathbb{R}y_{0}\), so (4.7) and (4.8) can be summarized as \(g_{u}(u,[\mathfrak{g},u])=0\). To apply Lemma 2.4 to get the contradiction to negative curvature, we just need to find \(v\in\mathfrak{g}\backslash\mathbb{R}u\) satisfying \([u,v]=0\). When \(\mathfrak{l}^{0}\) is \(k\)-step nilpotent with \(k>1\), we can choose \(v\) from \(\mathfrak{l}^{k-1}\backslash\{0\}\subset\mathfrak{l}^{1}\). When \(\mathfrak{l}^{0}\) is \(1\)-step nilpotent, i.e.,
is Abelian, because \(\dim_{\mathbb{R}}\mathfrak{l}^{0}=\dim_{\mathbb{R}}\mathfrak{g}-1\geq 2\), we can choose \(v\) from \(\mathfrak{l}^{0}\backslash\mathbb{R}u\). This ends the proof of Lemma 4.3.
By Lemma 4.3, we can achieve \(\overline{g}_{\overline{u}}(\overline{u},\operatorname{ad}_{\mathfrak{l}^{0}/ \mathfrak{l}^{1}}(y_{0})\overline{u})>0\) for some \(\overline{u}\in(\mathfrak{l}^{0}/\mathfrak{l}^{1})\backslash\{0\}\), by a possible replacement of \(y_{0}\) with \(-y_{0}\). If \(\dim_{\mathbb{R}}\mathfrak{l}^{0}/\mathfrak{l}^{1}=1\), \(\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})\) has only one eigenvalue, which is positive, so the second statement in (2) of Theorem 1.3 is proved in this case.
Now we consider the situation that \(\dim\mathfrak{l}^{0}/\mathfrak{l}^{1}>1\). By the connectedness, we have
\[\overline{g}_{\overline{u}}(\overline{u},\operatorname{ad}_{\mathfrak{l}^{0}/ \mathfrak{l}^{1}}(y_{0})\overline{u})>0,\quad\forall\overline{u}\in(\mathfrak{ l}^{0}/\mathfrak{l}^{1})\backslash\{0\}.\]
Then by the positive \(2\)-homogeneity and the continuity, there is a constant
\[c=\min_{\overline{u}\in(\mathfrak{l}^{0}/\mathfrak{l}^{1})\backslash\{0\}} \frac{\overline{g}_{\overline{u}}(\overline{u},\operatorname{ad}_{\mathfrak{l }^{0}/\mathfrak{l}^{1}}(y_{0})\overline{u})}{\overline{g}_{\overline{u}}( \overline{u},\overline{u})}>0.\]
Fix any \(\overline{u}\in(\mathfrak{l}^{0}/\mathfrak{l}^{1})\backslash\{0\}\) and set \(\overline{u}(t)=e^{t\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})}(\overline{u})\) and \(f(t)=\frac{1}{2}\overline{F}(\overline{u}(t))^{2}\). Then for each \(t\in\mathbb{R}\), \(\overline{u}(t)\neq 0\), and both \(\overline{u}(t)\) and \(f(t)\) depend on \(t\) smoothly. Calculation shows
\[\frac{d}{dt}f(t)=\overline{g}_{\overline{u}(t)}(\overline{u}(t),\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})(\overline{u}(t)))\geq c\cdot\overline{g}_{\overline{u}(t)}(\overline{u}(t),\overline{u}(t))=2cf(t),\]
and hence \(f(t)\geq e^{2ct}f(0)>0\) when \(t\geq 0\). So we have \(\lim_{t\to+\infty}e^{t\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})}(\overline{u})=\infty\) for any \(\overline{u}\in(\mathfrak{l}^{0}/\mathfrak{l}^{1})\backslash\{0\}\). By (3)\(\Rightarrow\)(1) in Lemma 4.1, \(\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})\) only has eigenvalues with positive real parts. This ends the proof of (1)\(\Rightarrow\)(2) in Theorem 1.3.
### Proof of (2)\(\Rightarrow\)(3) in Theorem 1.3
Let \(y_{0}\in\mathfrak{g}\) be the vector given in (2) of Theorem 1.3. We will prove that it meets the requirement in the second statement in (3) of Theorem 1.3.
**Lemma 4.4**.: _For each \(l\in\mathbb{N}\), \(\operatorname{ad}_{\mathfrak{l}^{l-1}/\mathfrak{l}^{l}}(y_{0})\) only has eigenvalues with positive real parts._
**Proof.** When \(l=1\), Lemma 4.4 is just the second statement in (2) of Theorem 1.3. Now we further assume that for \(l\in\mathbb{N}\), \(\operatorname{ad}_{\mathfrak{l}^{l-1}/\mathfrak{l}^{l}}(y_{0})\) only has eigenvalues with positive real parts.
Using the Jordan forms of \(\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})\) and \(\operatorname{ad}_{\mathfrak{l}^{l-1}/\mathfrak{l}^{l}}(y_{0})\), and similar argument as in the proof of Lemma 4.1, we can get:
1. For each \(u\in\mathfrak{l}^{0}\otimes\mathbb{C}\), there exists \(p\in\mathbb{N}\cup\{0\}\), \(n_{i}\in\mathbb{N}\cup\{0\}\), \(\lambda_{i}\in\mathbb{C}\) with \(\operatorname{Re}\lambda_{i}>0\), \(u_{i}\in\mathfrak{l}^{0}\otimes\mathbb{C}\), \(\forall 1\leq i\leq p\), such that \(e^{t\operatorname{ad}_{\mathfrak{l}^{0}/\mathfrak{l}^{1}}(y_{0})}(\operatorname {pr}_{1}(u))=\sum_{i=1}^{p}t^{n_{i}}e^{\lambda_{i}t}\text{pr}_{1}(u_{i})\), i.e., \[e^{t\operatorname{ad}(y_{0})}(u)=\sum_{i=1}^{p}t^{n_{i}}e^{\lambda_{i}t}u_{i} \pmod{\mathfrak{l}^{1}\otimes\mathbb{C}};\] (4.9)
2. For each \(v\in\mathfrak{l}^{l-1}\otimes\mathbb{C}\), there exists \(q\in\mathbb{N}\cup\{0\}\), \(m_{j}\in\mathbb{N}\cup\{0\}\), \(\mu_{j}\in\mathbb{C}\) with \(\operatorname{Re}\mu_{j}>0\), \(v_{j}\in\mathfrak{l}^{l-1}\otimes\mathbb{C}\), \(\forall 1\leq j\leq q\), such that \(e^{t\operatorname{ad}_{\mathfrak{l}^{l-1}/\mathfrak{l}^{l}}(y_{0})}(\text{pr}_{l}(v))=\sum_{j=1}^{q}t^{m_{j}}e^{\mu_{j}t}\text{pr}_{l}(v_{j})\), i.e., \[e^{t\operatorname{ad}(y_{0})}(v)=\sum_{j=1}^{q}t^{m_{j}}e^{\mu_{j}t}v_{j}\pmod{\mathfrak{l}^{l}\otimes\mathbb{C}}.\] (4.10)
Any vector \(w\in\mathfrak{l}^{l}\otimes\mathbb{C}\), \(\operatorname{mod}\mathfrak{l}^{l+1}\otimes\mathbb{C}\), is a complex linear combination of vectors of the form \([u,v]\), with \(u\in\mathfrak{l}^{0}\otimes\mathbb{C}\) and \(v\in\mathfrak{l}^{l-1}\otimes\mathbb{C}\). Since \(e^{t\operatorname{ad}(y_{0})}\) is a complex Lie algebra automorphism for each \(t\in\mathbb{R}\), for \(u\) and \(v\) in (4.9) and (4.10) respectively, we have
\[e^{t\operatorname{ad}(y_{0})}([u,v]) = [e^{\operatorname{ad}(y_{0})}(u),e^{\operatorname{ad}(y_{0})}(v)]\] \[= [\sum_{i=1}^{p}t^{n_{i}}e^{\lambda_{i}t}u_{i},\sum_{j=1}^{q}t^{mj}e^ {\mu_{j}t}v_{j}]\pmod{\mathfrak{l}^{l+1}\otimes\mathbb{C}}\] \[= \sum_{i=1}^{p}\sum_{j=1}^{q}t^{n_{i}+m_{j}}e^{(\lambda_{i}+\mu_{j} )t}[u_{i},u_{j}]\pmod{\mathfrak{l}^{l+1}\otimes\mathbb{C}}.\]
Now we assume that \(w\) is chosen from \(\mathfrak{l}^{l}\otimes\mathbb{C}\backslash\mathfrak{l}^{l+1}\otimes\mathbb{C}\); then for each \(t\in\mathbb{R}\), \(e^{t\operatorname{ad}(y_{0})}(w)\in\mathfrak{l}^{l}\otimes\mathbb{C}\backslash\mathfrak{l}^{l+1}\otimes\mathbb{C}\). The above calculations and observations provide an integer \(r>0\), \(k_{i}\in\mathbb{N}\cup\{0\}\),
\(\xi_{i}\in\mathbb{C}\) with \(\mathrm{Re}\xi_{i}>0\), \(w_{i}\in\mathfrak{l}^{l}\otimes\mathbb{C}\backslash\mathfrak{l}^{l+1}\otimes \mathbb{C},\,\forall 1\leq i\leq r\), such that the pairs in \(\{(k_{i},\xi_{i}),\forall 1\leq i\leq r\}\) are all distinct and
\[e^{t\mathrm{ad}(y_{0})}(w)=\sum_{i=1}^{r}t^{k_{i}}e^{\xi_{i}t}w_{i}\pmod{\mathfrak{l}^{l+1}\otimes\mathbb{C}}. \tag{4.11}\]
The equality (4.11) can be equivalently presented as
\[e^{t\mathrm{ad}_{\mathfrak{l}^{l}/\mathfrak{l}^{l+1}}(y_{0})}(\mathrm{pr}_{l+1}(w))=\sum_{i=1}^{r}t^{k_{i}}e^{\xi_{i}t}\mathrm{pr}_{l+1}(w_{i}),\]
where \(\mathrm{pr}_{l+1}(w_{i})\) is nonzero in \(\mathfrak{l}^{l}\otimes\mathbb{C}/\mathfrak{l}^{l+1}\otimes\mathbb{C}\), \(\forall 1\leq i\leq r\). Lemma 4.2 provides a sequence \(t_{n}\in\mathbb{R}\) satisfying
\[\lim_{n\to+\infty}t_{n}=+\infty\quad\text{and}\quad\lim_{n\to+\infty}e^{t_{n}\mathrm{ad}_{\mathfrak{l}^{l}/\mathfrak{l}^{l+1}}(y_{0})}(\mathrm{pr}_{l+1}(w))=\lim_{n\to+\infty}\sum_{i=1}^{r}t_{n}^{k_{i}}e^{\xi_{i}t_{n}}\mathrm{pr}_{l+1}(w_{i})=\infty.\]
By (3)\(\Rightarrow\)(1) in Lemma 4.1, \(\mathrm{ad}_{\mathfrak{l}^{l}/\mathfrak{l}^{l+1}}(y_{0})\) only has eigenvalues with positive real parts. This ends the proof of Lemma 4.4 by induction.
The spectrum (i.e., eigenvalue set, counting multiplicities) of \(\mathrm{ad}_{\mathfrak{l}^{0}}(y_{0}):\mathfrak{l}^{0}\to\mathfrak{l}^{0}\) is the union of those of \(\mathrm{ad}_{\mathfrak{l}^{l}/\mathfrak{l}^{l+1}}(y_{0}):\mathfrak{l}^{l}/\mathfrak{l}^{l+1}\to\mathfrak{l}^{l}/\mathfrak{l}^{l+1}\), \(\forall 0\leq l\leq k-1\). So the second statement in (3) of Theorem 1.3 follows from Lemma 4.4 immediately.
Finally, we remark that (3)\(\Rightarrow\)(1) in Theorem 1.3 is an immediate corollary of Theorem 2.7. This ends the proof of Theorem 1.3.
**Acknowledgement**. This paper is supported by Beijing Natural Science Foundation (No. 1222003), National Natural Science Foundation of China (No. 12131012, No. 11821101).
### Declarations
#### Data Availability
Not applicable.
#### Conflict of interest
Not applicable.
|
2307.10042 | Fast Algorithms for a New Relaxation of Optimal Transport | We introduce a new class of objectives for optimal transport computations of
datasets in high-dimensional Euclidean spaces. The new objectives are
parametrized by $\rho \geq 1$, and provide a metric space
$\mathcal{R}_{\rho}(\cdot, \cdot)$ for discrete probability distributions in
$\mathbb{R}^d$. As $\rho$ approaches $1$, the metric approaches the Earth
Mover's distance, but for $\rho$ larger than (but close to) $1$, admits
significantly faster algorithms. Namely, for distributions $\mu$ and $\nu$
supported on $n$ and $m$ vectors in $\mathbb{R}^d$ of norm at most $r$ and any
$\epsilon > 0$, we give an algorithm which outputs an additive $\epsilon
r$-approximation to $\mathcal{R}_{\rho}(\mu, \nu)$ in time $(n+m) \cdot
\mathrm{poly}((nm)^{(\rho-1)/\rho} \cdot 2^{\rho / (\rho-1)} / \epsilon)$. | Moses Charikar, Beidi Chen, Christopher Re, Erik Waingarten | 2023-07-14T04:13:04Z | http://arxiv.org/abs/2307.10042v1 | # Fast Algorithms for a New Relaxation of Optimal Transport
###### Abstract
We introduce a new class of objectives for optimal transport computations of datasets in high-dimensional Euclidean spaces. The new objectives are parametrized by \(\rho\geq 1\), and provide a metric space \(\mathcal{R}_{\rho}(\cdot,\cdot)\) for discrete probability distributions in \(\mathbb{R}^{d}\). As \(\rho\) approaches \(1\), the metric approaches the Earth Mover's distance, but for \(\rho\) larger than (but close to) \(1\), admits significantly faster algorithms. Namely, for distributions \(\mu\) and \(\nu\) supported on \(n\) and \(m\) vectors in \(\mathbb{R}^{d}\) of norm at most \(r\) and any \(\varepsilon>0\), we give an algorithm which outputs an additive \(\varepsilon r\)-approximation to \(\mathcal{R}_{\rho}(\mu,\nu)\) in time \((n+m)\cdot\mathrm{poly}((nm)^{(\rho-1)/\rho}\cdot 2^{\rho/(\rho-1)}/\varepsilon)\).
36th Annual Conference on Learning Theory, 2023
Gergely Neu and Lorenzo Rosasco
Optimal transport, Earth Mover's distance, Sinkhorn distance
## 1 Introduction
This paper is about algorithms for optimal transport problems in high dimensional Euclidean spaces. At a very high level, optimal transport problems provide a convenient metric space between probability distributions supported on vectors in geometric spaces. The most classical such problem is the Earth Mover's Distance (EMD). Let \(\mu\) and \(\nu\) be two distributions supported on vectors in \(\mathbb{R}^{d}\). The Earth Mover's Distance between \(\mu\) and \(\nu\), also known as the Wasserstein-\(1\) distance, is given by minimizing the average distance between pairs of points sampled from a coupling \(\gamma\) of \(\mu\) and \(\nu\):
\[\mathsf{EMD}(\mu,\nu)=\min\left\{\underset{(\mathbf{x},\mathbf{y})\sim\gamma}{\mathbf{E}}[\|\mathbf{x}-\mathbf{y}\|_{2}]:\;\gamma\;\text{is a coupling of $\mu$ and $\nu$}\right\}. \tag{1}\]
Importantly, the Earth Mover's distance is a metric on the space of probability distributions supported on \(\mathbb{R}^{d}\), which takes a "ground metric" (in this case, the Euclidean distances) and defines a metric over the space of distributions supported on the ground metric. The resulting notion of similarity or dissimilarity is then used to formulate problems on approximating or learning a distribution supported in \(\mathbb{R}^{d}\).
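Since \(\mu\) and \(\nu\) are discrete, (1) is a finite linear program over couplings, and for tiny instances it can be solved exactly; the sketch below is our own illustration using `scipy.optimize.linprog`, not an algorithm proposed in this paper.

```python
# Sketch: exact EMD between small discrete distributions by solving the
# coupling LP: minimize <C, gamma> subject to gamma >= 0, row sums = mu,
# column sums = nu.
import numpy as np
from scipy.optimize import linprog

def emd(X, mu, Y, nu):
    n, m = len(mu), len(nu)
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)   # ||x_i - y_j||_2
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0        # sum_j gamma[i, j] = mu[i]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                 # sum_i gamma[i, j] = nu[j]
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.fun

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(4, 2)), rng.normal(size=(5, 2))
print(emd(X, np.full(4, 1/4), Y, np.full(5, 1/5)))
```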
It is no surprise that the optimal transport has become ubiquitous in machine learning. We refer the reader to the monograph Peyre and Cuturi (2019) for a comprehensive overview, but a few notable examples include Kusner et al. (2015); Courty et al. (2016); Arjovsky et al. (2017). As argued in Peyre and Cuturi (2019), the most recent progress on optimal transport for machine learning has been due to new formulations and approximation algorithms which can scale to larger problem instances. Specifically, there has been a focus on the so-called entropy-regularized optimal transport, also known as "Sinkhorn distances," and (accurate) approximation algorithms which run in quadratic time (in the original representation for Euclidean inputs) Cuturi (2013). The goal of this work is to further explore such optimal transport questions from the computational perspective, where we will seek much faster _sub-quadratic_ algorithms for computing optimal transport distances.
As we explain next, the algorithmic landscape for optimal transport remains very much unknown. On the one hand, the algorithms community has devoted a significant effort (Charikar (2002); Indyk and Thaper (2003); Indyk (2004); Andoni et al. (2008, 2009); Sharathkumar and Agarwal (2012); Agarwal and Sharathkumar (2014); Andoni et al. (2014); Backurs and Indyk (2014); Andoni et al. (2015); Khesin et al. (2019); Backurs et al. (2020); Chen et al. (2022b); Agarwal et al. (2022)) to developing fast algorithms for approximating \(\mathsf{EMD}\). We expand on these shortly, but, at a high level, all approaches rely on efficient spanner constructions or approximate nearest neighbor data structures. These algorithms are fast (approaching linear time), but they run into a serious approximation bottleneck. For high-dimensional Euclidean spaces, almost-linear time algorithms incur large constant-factor approximations, making these approaches undesirable.2 Instantiating these techniques for accurate, \((1\pm\varepsilon)\)-approximations degrades the algorithmic performance to essentially quadratic time.3
Footnote 2: For example, a \(2\)-approximation, which is oftentimes already too big, incurs a polynomial overhead of \((n+m)^{1/7}\) Andoni and Razenshteyn (2015).
Footnote 3: An algorithm for \(\mathsf{EMD}(\mu,\nu)\) (or any problem whose output is a positive real number) which achieves approximation factor \(c>1\) is an algorithm which outputs a number which is larger than \(\mathsf{EMD}(\mu,\nu)\) and is at most \(c\cdot\mathsf{EMD}(\mu,\nu)\) with high probability. These are _multiplicative_ approximations, and we will also refer to _additive_\(\varepsilon r\)-approximations which outputs a quantity which is up to \(\pm\varepsilon r\) from a desired quantity.
On the other hand, algorithms for the entropy-regularized optimal transport do achieve accurate additive \(\pm\varepsilon r\) approximations for datasets of diameter \(r\), but have running times which are quadratic in the original representation of the input. In particular, the input distributions are specified by the vectors in their support and the probabilities with which they are sampled. However, the first step of the algorithm involves explicitly materializing the distance matrix encoding all pairwise distances between the vectors. As the support of these distributions grows, this first step is already a major hurdle. While there have been approaches to avoid materializing the entire matrix Bonneel et al. (2015); Altschuler et al. (2019); Paty and Cuturi (2019), these methods consider a projection of the points onto a low-dimensional space and the resulting optimization costs (of the low-dimensional \(\mathsf{EMD}\) or \(\mathsf{Sinkhorn}\) distances) cannot be related back to the original distribution without a significant loss in approximation.
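To make the quadratic bottleneck concrete, here is a minimal Sinkhorn-style sketch in the spirit of Cuturi (2013) (our own simplification, written only to illustrate the cost structure): the very first line materializes the full \(n\times m\) distance matrix, which is exactly the step that dominates as the supports grow, and for small regularization the plain updates shown here would additionally need log-domain stabilization.

```python
# Sketch: entropy-regularized OT via alternating Sinkhorn scalings.
# The explicit n x m matrices C and K are the quadratic-time/space bottleneck.
import numpy as np

def sinkhorn(X, mu, Y, nu, eta=0.1, iters=1000):
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)   # n x m distances
    K = np.exp(-C / eta)                                        # Gibbs kernel
    u = np.ones(len(mu))
    for _ in range(iters):
        v = nu / (K.T @ u)        # enforce column marginals
        u = mu / (K @ v)          # enforce row marginals
    gamma = u[:, None] * K * v[None, :]                         # regularized coupling
    return np.sum(gamma * C)      # transport cost of that coupling

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 10)), rng.normal(size=(120, 10))
print(sinkhorn(X, np.full(100, 1/100), Y, np.full(120, 1/120)))
```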
This work seeks to explore the best of both worlds from the algorithmic perspective. We will give a new class of objectives for optimal transport problems which also provide metric spaces for probability distributions of high-dimensional Euclidean spaces (like the Earth Mover's distance and Sinkhorn distances). The main benefit is that (i) these metrics smoothly perturb the Earth Mover's distance, (ii) admit efficient algorithms with running times which are significantly sub-quadratic (like the Earth Mover's distance), and (iii) give accurate \(\pm\varepsilon r\)-approximations for distributions whose supports have diameter at most \(r\) (like the Sinkhorn distances). The key will be to never explicitly compute the quadratic-size distance matrix. Instead, we show how one may implement a Sinkhorn-like update procedure using recent algorithms for the problem of kernel density estimation.
### Related Work: The Spanner Approach for Emd
The Earth Mover's Distance can be naturally cast as an uncapacitated minimum cost flow problem. The reduction is straight-forward. One may consider the (weighted) complete bipartite graph \(G=(U,V,E=U\times V,w)\) where each vertex of \(U\) is a vector in the support of \(\mu\) and each vertex in \(V\) is a vector in the support of \(\nu\) and the weights (or cost) \(w\) of an edge \(e=(i,j)\) is \(w(e)=\|x_{i}-y_{j}\|_{2}\). The distributions may then be written as vectors \(\mu\in\mathbb{R}^{n}\) and \(\nu\in\mathbb{R}^{m}\) which encode the "supply" and "demand", and the Earth Mover's Distance is the minimum cost flow on \(G\) according to the supply/demands \(\mu\) and \(\nu\) with costs \(w\) (there is no need for capacities in this reduction). Over the years, graph algorithms have become incredibly efficient, so applying graph-based min-cost flow solvers with the reduction above gives exact algorithms for EMD running in time \((nm)^{1+o(1)}\).4
Footnote 4: The relevant citation for a fast min-cost flow algorithm is the recent breakthrough of Chen et al. (2022). These give exact algorithms whose running time is almost-linear in the number of edges, \(nm\), of the graph. The other relevant citation is Sherman (2017), giving algorithms for \(1+\varepsilon\)-approximation to uncapacitated min-cost flow in the same amount of time, which suffices for EMD.
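To make the reduction concrete, here is a minimal illustrative sketch of the quadratic-size linear-programming formulation, assuming `numpy`/`scipy` and tiny inputs; the helper name `emd` and the choice of the HiGHS backend are illustrative choices and not taken from the solvers cited above.

```python
import numpy as np
from scipy.optimize import linprog

def emd(X, Y, mu, nu):
    """Exact EMD via the quadratic-size LP: minimize sum_ij c_ij * gamma_ij
    subject to row marginals mu and column marginals nu, gamma >= 0."""
    n, m = len(mu), len(nu)
    c = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2).ravel()
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # sum_j gamma_ij = mu_i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # sum_i gamma_ij = nu_j
    b_eq = np.concatenate([mu, nu])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
print(emd(X, Y, np.full(5, 0.2), np.full(4, 0.25)))
```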
The above approach paves the way for faster approximation algorithms by using graph spanners. For any \(c>1\), one seeks a graph \(H\) with substantially fewer edges on the vertex set \(U\cup V\). The desired property is that for any \(i\in U\) and \(j\in V\), the total length of the shortest path between \(i\) and \(j\) along edges of \(H\) should be within a factor of \(c\) of the distance between the underlying vectors \(x_{i}\) and \(y_{j}\). Running the min-cost flow algorithms on \(H\) is faster (since there are fewer edges), and gives a \(c\)-approximation for EMD. While sparse spanners for Euclidean distances do exist, as the approximation \(c\) approaches \(1+\varepsilon\), the size of these spanners becomes \(mn\).
Instead, the focus has been on obtaining sparse spanners for (large) constant factor approximations. For example, for any \(c>1\), Har-Peled et al. (2013) gives \(c\)-spanners of size \((n+m)^{1+1/c^{2}}\) for Euclidean spaces (\(\ell_{2}\)) in time \(\tilde{O}((n+m)^{1+1/c^{2}})\) (which is fast when we allow a large \(c\)). The other approach, taken in Agarwal and Sharathkumar (2014), does not explicitly use a spanner, but uses an approximate nearest neighbor search data structure. The resulting time and approximation depends on the time and approximation for nearest neighbor search, but similarly to before, the approximation is large when the algorithms are fast.
### Related Work: Sinkhorn Distances
The algorithm which is widely used for computing an optimal transport is the Sinkhorn algorithm for entropy-regularized optimal transport Cuturi (2013); Altschuler et al. (2017) (see also the recent work Kiem et al. (2020); Le et al. (2021)). Given two distributions \(\mu\) and \(\nu\) supported on vectors in \(\mathbb{R}^{d}\), the entropy-regularized optimal transport introduces an entropic regularization term to the Earth Mover's distance. Specifically, for any \(\eta\geq 0\), it optimizes
\[\mathsf{SNK}_{\eta}(\mu,\nu)=\min\left\{\underset{(\mathbf{x},\mathbf{y})\sim\gamma}{ \mathbf{E}}[\|\mathbf{x}-\mathbf{y}\|_{2}]-\eta H(\gamma):\;\gamma\;\text{is a coupling of $\mu$ and $\nu$}\right\}.\]
The main benefit is that the algorithm for optimizing \(\mathsf{SNK}_{\eta}(\mu,\nu)\) performs extremely well. The algorithm used is iterative, and uses \(\mathrm{poly}(1/(\eta\varepsilon))\) iterations to output a solution which is an additive \(\pm\varepsilon r\)-approximation (where \(r\) is the maximum distance between any pair of points in the support of \(\mu\) and \(\nu\)). Oftentimes, the maximum distance \(r\) is not too large (for example, it is at most \(2\) on the unit sphere), making the algorithm very desirable in practice. However, the main downside is that the algorithm explicitly computes the \(nm\)-distance matrix of pairwise distances of vectors in the support of \(\mu\) and \(\nu\). Indeed, the algorithm does not use the fact that distances are Euclidean and generalizes to non-Euclidean metrics. The main downside is that, for distributions on Euclidean spaces, the description of the input (of size \(O(d(n+m))\)) is blown up to a quadratic \(nm\)-size distance matrix, which can be a major bottleneck in the computation if \(n\) and \(m\) are very large. Finally, it is important to note that, we currently do not know whether the original Earth Mover's distance admits a similar \(\varepsilon r\)-approximation for bounded datasets in time substantially smaller than \(nm\).
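For reference, a minimal sketch of the standard Sinkhorn matrix-scaling iteration follows, assuming `numpy`, a fixed iteration count, and the explicit \(n\times m\) kernel matrix that constitutes the quadratic bottleneck discussed here; the function name `sinkhorn` is illustrative.

```python
import numpy as np

def sinkhorn(X, Y, mu, nu, eta=0.1, iters=500):
    """Standard Sinkhorn matrix scaling for entropy-regularized OT; note the
    explicit n-by-m cost/kernel matrices, the quadratic-time/space bottleneck."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    K = np.exp(-C / eta)                      # Gibbs kernel
    u, v = np.ones(len(mu)), np.ones(len(nu))
    for _ in range(iters):
        u = mu / (K @ v)                      # enforce row marginals
        v = nu / (K.T @ u)                    # enforce column marginals
    gamma = u[:, None] * K * v[None, :]       # (approximate) optimal coupling
    return float(np.sum(gamma * C))           # transport cost, without the -eta*H term
```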
### Our Contributions
This paper addresses the following questions:
1. Do there exist optimal transport metrics which admit good approximations in significantly sub-quadratic time? In particular, can we match the approximation guarantees from Sinkhorn distances with the algorithmic techniques from the Earth Mover's distance?
2. Can one combine techniques, like locality-sensitive hashing (LSH) and embeddings, with the alternating updates procedure in Sinkhorn's algorithm even though approximations incurred from using LSH and embeddings tend to incur large constant factors?
Our main contribution is introducing a class of objective functions for optimal transport computations. The new objectives \(\mathcal{R}_{\rho}(\mu,\nu)\) are parametrized by \(\rho\geq 1\) and provide metric spaces over discrete distributions in \(\mathbb{R}^{d}\). As \(\rho\) approaches 1, \(\mathcal{R}_{\rho}(\mu,\nu)\) approaches \(\mathsf{EMD}(\mu,\nu)\), but enjoys favorable computational properties. In particular, we will show that \(\mathcal{R}_{\rho}(\mu,\nu)\) may be approximated up to additive \(\varepsilon r\)-error for datasets of diameter at most \(r\) in time which is near-linear (for small \(\rho\) close to \(1\)). We view \(\rho\) as introducing a new "knob" for the Earth Mover's distance: as \(\rho\to 1\), the metrics \(\mathcal{R}_{\rho}(\cdot,\cdot)\) approach \(\mathsf{EMD}(\cdot,\cdot)\); however, for \(\rho\) close to (but not too close to) 1, very fast algorithms with accurate approximations are possible. Thus, our new algorithm gives a positive answer to Question 1. Namely, if one is willing to change the problem slightly, one can achieve the approximation guarantees of Sinkhorn distances with the running times like the Earth Mover's distance.
While Question 2 is inherently vague, such techniques are known in a related algorithmic context. One of our main conceptual contributions is drawing a connection to _kernel density estimation_Charikar and Siminelakis (2017); Backurs et al. (2018); Siminelakis et al. (2019); Charikar et al. (2020); Backurs et al. (2021); Bakshi et al. (2022). The algorithms developed in that context use locality-sensitive hashing and embeddings, but are still able to output \((1\pm\varepsilon)\)-approximations. In particular, a key feature of those works is that the distortion incurred by locality-sensitive hashing and embeddings factors into the running time of the algorithm and not the final approximation. In summary, our main conceptual contributions may be summarized as follows:
* There exists a class of optimal transport metrics parametrized by \(\rho\) which smoothly perturb the Earth Mover's distance (approaching \(\mathsf{EMD}\) as \(\rho\to 1\)).
* For a small setting of \(\rho>1\), these problems can be optimized in significantly sub-quadratic time to arbitrarily accurate additive approximations for bounded datasets.
We believe the new problem formulation and the ideas behind the algorithm will lead to improvements in practical algorithms for optimal transport metrics. We emphasize that there are no algorithmic approaches that achieve \((1\pm\varepsilon)\)-approximations or \(\varepsilon r\)-additive approximations for either EMD or SNK in time \(n^{1.99}\). In addition, there is some reason to believe that this may be impossible for EMD Rohatgi (2019). By changing the problem and allowing a small additive error, we avoid the large constant factors. We also suggest looking at Section 4 of Backurs et al. (2020), who group algorithms by their running times; the new techniques achieve the accurate approximations of the "quadratic time" algorithms, even though they run much faster (at least in theory).
Outline. The next section gives the new objective \(\mathcal{R}_{\rho}(\mu,\nu)\), states our main Theorem 1, and overviews the components of the proof. Then, we give a description of the main algorithm while assuming algorithms for estimating the gradients and the penalty term.
## 2 The Definition of \(\ell_{\rho}\)-Optimal Transports
For any dimension \(d\in\mathbb{N}\), let \(\mu\) and \(\nu\) denote two discrete distributions supported on \(n\) and \(m\) point masses in \(\mathbb{R}^{d}\), respectively. More specifically, \(\mu\) is specified by \(n\) points \(x_{1},\ldots,x_{n}\in\mathbb{R}^{d}\) and corresponding weights \(\mu_{1},\ldots,\mu_{n}\in\mathbb{R}_{>0}\) where \(\sum_{i=1}^{n}\mu_{i}=1\), and \(\nu\) is specified by \(m\) points \(y_{1},\ldots,y_{m}\in\mathbb{R}^{d}\) with the corresponding weights \(\nu_{1},\ldots,\nu_{m}\in\mathbb{R}_{>0}\) with \(\sum_{i=1}^{m}\nu_{i}=1\) (note that we can always assume that \(\mu_{i}\) and \(\nu_{j}\) are strictly positive by a linear-time scan which can remove points of weight-\(0\)). One ought to think of \(d=\omega(\log n)\), so we seek algorithms which overcome the "curse of dimensionality" and do not have running times which scale exponentially in \(d\).
For any parameter \(\rho>1\), we seek to optimize the following objective, which will specify a metric space over probability distributions which relax the optimal transport problem (Lemma 5 in Appendix A):
\[\mathcal{R}_{\rho}(\mu,\nu)=\min\left\{\left(\underset{\begin{subarray}{c} \boldsymbol{i}\sim\mu\\ \boldsymbol{j}\sim\nu\end{subarray}}{\mathbf{E}}\left[\left(\frac{\gamma_{ \boldsymbol{i}\boldsymbol{j}}}{\mu_{\boldsymbol{i}}\nu_{\boldsymbol{j}}} \cdot\|x_{\boldsymbol{i}}-y_{\boldsymbol{j}}\|_{2}\right)^{\rho}\right] \right)^{1/\rho}:\text{$\gamma$ is a coupling of $\mu$ and $\nu$}\right\}. \tag{2}\]
In words, for any coupling \(\gamma\) between the distributions \(\mu\) and \(\nu\), one may associate an \(nm\)-dimensional vector encoding the costs associated with each point-mass. For each point \(x_{i}\) from \(\mu\) and \(y_{j}\) from \(\nu\), the coupling \(\gamma\) transports \(\gamma_{ij}\) "mass" from \(x_{i}\) to \(y_{j}\) and pays the distance between \(x_{i}\) and \(y_{j}\) scaled by \(\gamma_{ij}/(\mu_{i}\nu_{j})\). In \(\mathcal{R}_{\rho}(\mu,\nu)\), we optimize the normalized \(\ell_{\rho}\)-norm of the cost vector. Notice that, when \(\rho=1\), \(\mathcal{R}_{\rho}(\mu,\nu)\) is the Earth Mover's distance between \(\mu\) and \(\nu\). As we vary \(\rho\geq 1\), one may relate the \(\ell_{\rho}\)- and \(\ell_{1}\)-norm, implying
\[\mathsf{EMD}(\mu,\nu)\leq\mathcal{R}_{\rho}(\mu,\nu)\leq\sup_{i,j}\left|\frac {1}{\mu_{i}\nu_{j}}\right|^{(\rho-1)/\rho}\mathsf{EMD}(\mu,\nu).\]
When \(\rho>1\), we will obtain a sequence of (as we will see) computationally easier metric spaces which approach \(\mathsf{EMD}(\mu,\nu)\). The key is that performing this modification will allow for significantly faster algorithms in terms of \(n\) and \(m\) (the number of points), while having a dependence on \(\rho\) which will be \(2^{O(\rho/(\rho-1))}\).
We view \(\rho>1\) as a desired computational "knob," which allows one to tradeoff the running time of an algorithm and the metric's relation to \(\mathsf{EMD}\). Note that, in a \(c\)-approximation algorithm for \(\mathsf{EMD}\), \(c\) also trades-off faster/slower running times for looser/tighter relations to \(\mathsf{EMD}\). The difference, however, is that for any \(\rho>1\), \(\mathcal{R}_{\rho}(\cdot,\cdot)\) is still a metric space over probability distributions (and the same cannot be said of a \(3\)-approximation to \(\mathsf{EMD}\)). The specific choice of metric space (\(\mathsf{EMD}\), Wasserstein-\(p\), or \(\mathsf{SNK}_{\eta}\)) is oftentimes flexible, so long as it captures the desired notion of similarity/dissimilarity of distributions. The hope is that for moderate values of \(\rho\), \(\mathcal{R}_{\rho}(\mu,\nu)\) suffices for downstream applications, and captures the desired properties of an optimal-transport \(\gamma\).
From a more technical perspective, (2) encourages couplings \(\gamma\) whose contribution to the cost vector is "spread", so that the \(\ell_{\rho}\)-norm will be small. The main advantage is that, using a connection to recent work on kernel density estimation in high-dimensions Backurs et al. (2018) and scaling approaches to entropy regularized optimal transport Cuturi (2013); Altschuler et al. (2017), we give very efficient (and simple) algorithms for approximating \(\mathcal{R}_{\rho}(\mu,\nu)\).
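As a small sanity check on the definition, the following sketch evaluates the inner objective of (2) for a fixed coupling \(\gamma\) (the helper name `r_rho_of_coupling` is illustrative, not from the text); at \(\rho=1\) it reduces to the transport cost \(\sum_{ij}\gamma_{ij}\|x_{i}-y_{j}\|_{2}\) of that coupling, consistent with \(\mathcal{R}_{1}=\mathsf{EMD}\).

```python
import numpy as np

def r_rho_of_coupling(gamma, D, mu, nu, rho):
    """Inner objective of (2) for a fixed coupling gamma; D[i, j] = ||x_i - y_j||_2.
    For rho = 1 this equals the transport cost sum_ij gamma_ij * D_ij."""
    w = np.outer(mu, nu)
    ratio = np.divide(gamma, w, out=np.zeros_like(gamma), where=w > 0)
    return float(np.sum(w * (ratio * D) ** rho) ** (1.0 / rho))

# the independent coupling gamma_ij = mu_i * nu_j gives one feasible value
mu, nu = np.array([0.5, 0.5]), np.array([0.25, 0.75])
D = np.array([[1.0, 2.0], [3.0, 0.5]])
print(r_rho_of_coupling(np.outer(mu, nu), D, mu, nu, rho=1.0))
```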
Notation for Running Time Bounds. We will use the following notation in order to describe the running time bounds. The focus is on improving the dependence on \(n\) and \(m\) when estimating optimal transports, so we use the notation \(\operatorname{poly}^{*}(f)\) to denote a fixed polynomial function of \(f\) which hides poly-logarithmic factors in \(n,m\), \(\delta\) (the failure probability), \(\varepsilon\) (the accuracy) and \(r\) (the radius of the dataset). In addition, since we will incur a polynomial dependence on \(\varepsilon\), we will automatically apply the Johnson-Lindenstrauss lemma and assume that \(d=O(\log(nm)/\varepsilon^{2})\).
**Theorem 1**: _There exists a randomized algorithm with the following guarantees. The algorithm receives as input_
* _Two sets of points_ \(\{x_{1},\ldots,x_{n}\}\) _and_ \(\{y_{1},\ldots,y_{m}\}\) _in_ \(\mathbb{R}^{d}\) _where the maximum pairwise distance between points_ \(\sup_{i,j}\|x_{i}-y_{j}\|_{2}\leq r\)_._
* _Two vectors_ \(\mu\in\mathbb{R}^{n}_{\geq 0}\) _and_ \(\nu\in\mathbb{R}^{m}_{\geq 0}\) _whose coordinates sum to_ \(1\) _and encode the distributions over_ \(\{x_{1},\ldots,x_{n}\}\) _and_ \(\{y_{1},\ldots,y_{m}\}\)_, respectively._
* _An accuracy parameter_ \(\varepsilon>0\)_, a failure probability_ \(\delta>0\)_, and a parameter_ \(\rho\in[1,2]\)_._
_The algorithm runs in time \((n+m)\cdot\operatorname{poly}^{*}((nm)^{(\rho-1)/\rho}\cdot 2^{\rho/(\rho-1) }/\varepsilon)\), and outputs an estimate \(\widehat{\boldsymbol{\eta}}>0\) which satisfies_
\[|\widehat{\boldsymbol{\eta}}-\mathcal{R}_{\rho}(\mu,\nu)|\leq\varepsilon\cdot r\]
_with probability at least \(1-\delta\)._
The main advantage of Theorem 1 is that it does not pay the quadratic \(nm\)-factor in the running time and at the same time obtains accurate approximations. In particular, suppose we consider a setting of \(\rho\) which is \(\rho=1+1/\sqrt{\log(nm)}\), then the corresponding running time of Theorem 1 to approximate \(\mathcal{R}_{\rho}(\mu,\nu)\) up to an additive \(\pm\varepsilon r\) becomes
\[(n+m)^{1+o(1)}\cdot\operatorname{poly}(1/\varepsilon).\]
Generally, as \(\rho\) becomes close to \(1\), the metric \(\mathcal{R}_{\rho}(\cdot,\cdot)\) approaches \(\mathsf{EMD}(\cdot,\cdot)\) and the dependence on \(n\) and \(m\) becomes better, since the running time scales as \((n+m)\cdot(nm)^{O((\rho-1)/\rho)}\). However, one does not want to set \(\rho\) to be too close to \(1\), since the factor of \(2^{O(\rho/(\rho-1))}\) may begin to dominate.
**Remark 2** (Challenges when \(\rho\to 1\)): _In order to use \(\mathcal{R}_{\rho}(\cdot,\cdot)\) to approximate \(\mathsf{EMD}(\cdot,\cdot)\) up to \((1+\varepsilon)\)-factor, one would need to set \(\rho\) to roughly \(1+O(\varepsilon/\log(nm))\); however, this approach runs into a technical challenge. There is a concrete sense in which the parameter \(\rho\geq 1\) adds a certain "smoothness" which is not present in \(\mathsf{EMD}\). At a very high level, we show that an additive approximation of \(\mathcal{R}_{\rho}\) reduces to queries for "smooth" kernel density evaluation Backurs et al. (2018) which suffer an exponential dependence on \(\rho/(\rho-1)\). With \(\rho=1+O(\varepsilon/\log(nm))\), this dependence would become \((nm)^{O(1/\varepsilon)}\)--worse than the \((nm)^{1+o(1)}\) time required from prior work._
### Proof of Theorem 1 Overview
We overview the major components of the proof of Theorem 1. While (relatively minor) technical challenges arise when fleshing out the details, the structure and algorithm proceed with the following plan.
The Duals of \(\mathsf{EMD}(\mu,\nu)\) and \(\mathcal{R}_{\rho}(\mu,\nu)^{\rho}\). The challenge in optimizing \(\mathcal{R}_{\rho}(\mu,\nu)^{\rho}\) (which also appears in \(\mathsf{EMD}(\mu,\nu)\)) is that an algorithm cannot even write down the explicit description of the optimization, nor can it explicitly maintain a coupling \(\gamma\), since this requires \(\Omega(nm)\) values. On the other hand, both \(\mathsf{EMD}(\mu,\nu)\) and \(\mathcal{R}_{\rho}(\mu,\nu)^{\rho}\) only have \(n+m\) equality constraints, so the duals are maximization problems over \(n+m\) variables (one for each constraint). The approach will be to show that, using data structures for kernel density estimation, we can implicitly maximize the dual of \(\mathcal{R}_{\rho}(\mu,\nu)^{\rho}\) while only maintaining the \(n+m\) dual variables.
To see the connection, we first write down the dual for \(\mathsf{EMD}(\mu,\nu)\), which has \(n+m\) variables \(\alpha_{1},\ldots,\alpha_{n}\) and \(\beta_{1},\ldots,\beta_{m}\) and asks to maximize
\[\mathsf{EMD}(\mu,\nu)=\max_{\begin{subarray}{c}\alpha\in\mathbb{R}^{n}\\ \beta\in\mathbb{R}^{m}\end{subarray}}\left\{\sum_{i=1}^{n}\mu_{i}\alpha_{i}- \sum_{j=1}^{m}\nu_{j}\beta_{j}:\forall(i,j)\in[n]\times[m],\alpha_{i}-\beta_{ j}\leq\|x_{i}-y_{j}\|_{2}\right\}. \tag{3}\]
For \(\rho>1\), the Hölder conjugate \(s>1\) is the number satisfying \(1/\rho+1/s=1\). The dual for \(\mathcal{R}_{\rho}(\mu,\nu)^{\rho}\) is the following unconstrained maximization problem on \(n+m\) variables \(\alpha_{1},\ldots,\alpha_{n}\) and \(\beta_{1},\ldots,\beta_{m}\),
\[\mathcal{R}_{\rho}(\mu,\nu)^{\rho}=\max_{\begin{subarray}{c} \alpha\in\mathbb{R}^{n}\\ \beta\in\mathbb{R}^{m}\end{subarray}}\left\{\sum_{i=1}^{n}\mu_{i}\alpha_{i}- \sum_{j=1}^{m}\nu_{j}\beta_{j}-\frac{1}{s}\left(1-\frac{1}{s}\right)^{s-1} \sum_{i=1}^{n}\sum_{j=1}^{m}\mu_{i}\nu_{j}\left(\frac{(\alpha_{i}-\beta_{j}) ^{+}}{\|x_{i}-y_{j}\|_{2}}\right)^{s}\right\}, \tag{4}\]
where we consider \(\frac{0}{0}=0\), and \((\alpha_{i}-\beta_{j})^{+}\) is \(\alpha_{i}-\beta_{j}\) if positive and \(0\) otherwise. Note the difference: in (3), there are \(nm\) hard constraints which enforce \((\alpha_{i}-\beta_{j})^{+}/\|x_{i}-y_{j}\|_{2}\leq 1\) for every \((i,j)\in[n]\times[m]\). In (4), the \(nm\) constraints are relaxed. The optimization is allowed to set \(\alpha_{i}-\beta_{j}\) larger than \(\|x_{i}-y_{j}\|_{2}\), but pays a penalty in the objective proportional to \(((\alpha_{i}-\beta_{j})^{+}/\|x_{i}-y_{j}\|_{2})^{s}\). As \(\rho\) gets closer to \(1\), the Hölder conjugate \(s\) becomes larger, and the penalty becomes more pronounced. For simplicity in the notation, we will write
\[g(\alpha,\beta)\stackrel{{\mathrm{def}}}{{=}}\sum_{i=1}^{n}\mu_ {i}\alpha_{i}-\sum_{j=1}^{m}\nu_{j}\beta_{j}-\frac{1}{s}\left(1-\frac{1}{s} \right)^{s-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\mu_{i}\nu_{j}\left(\frac{(\alpha_{i} -\beta_{j})^{+}}{\|x_{i}-y_{j}\|_{2}}\right)^{s}.\]
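For testing on small instances, \(g\) can be evaluated by brute force in \(O(nm)\) time; the sketch below assumes `numpy` and a precomputed distance matrix `D`, and is only a reference against which the sub-quadratic estimators described later can be compared (the name `dual_objective` is illustrative).

```python
import numpy as np

def dual_objective(alpha, beta, D, mu, nu, s):
    """Brute-force O(nm) evaluation of g(alpha, beta) from (4), with
    C_s = (1/s) * (1 - 1/s)^(s-1) and D the full distance matrix."""
    C_s = (1.0 / s) * (1.0 - 1.0 / s) ** (s - 1)
    slack = np.maximum(alpha[:, None] - beta[None, :], 0.0)   # (alpha_i - beta_j)^+
    penalty = C_s * np.sum(np.outer(mu, nu) * (slack / D) ** s)
    return float(mu @ alpha - nu @ beta - penalty)
```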
Partial Derivatives via Kernel Density Estimation. Since (4) is a concave maximization problem, a simple approach is to simulate a gradient ascent algorithm on the dual variables \(\alpha_{1},\ldots,\alpha_{n}\) and \(\beta_{1},\ldots,\beta_{m}\), where we update in the direction of the partial derivatives. The partial derivatives with respect to \(\alpha_{i}\) and \(\beta_{j}\) are given by
\[\frac{\partial g}{\partial\alpha_{i}} =\mu_{i}\left(1-\left(1-\frac{1}{s}\right)^{s-1}\sum_{j=1}^{m}\nu _{j}\cdot\frac{((\alpha_{i}-\beta_{j})^{+})^{s-1}}{\|x_{i}-y_{j}\|_{2}^{s}}\right) \tag{5}\] \[\frac{\partial g}{\partial\beta_{j}} =-\nu_{j}\left(1-\left(1-\frac{1}{s}\right)^{s-1}\sum_{i=1}^{n} \mu_{i}\cdot\frac{((\alpha_{i}-\beta_{j})^{+})^{s-1}}{\|x_{i}-y_{j}\|_{2}^{s}} \right). \tag{6}\]
Importantly, the partial derivatives depend on \(\mu_{i}\) and \(\nu_{j}\) and a weighted sum of \(1/\|x_{i}-y_{j}\|_{2}^{s}\). First, note that we receive \(\mu\) and \(\nu\) as input, so \(\mu_{i}\) and \(\nu_{j}\) are \(n+m\) constants throughout the execution. The weighted sums are the more challenging parts, and for these we use the kernel density estimation data structures. We interpret \(\mathsf{K}(x_{i},y_{j})=1/\|x_{i}-y_{j}\|_{2}^{s}\) as a "smooth" kernel, similar to the Student-\(t\) Kernel studied in Backurs et al. (2018). These smooth kernels decay polynomially as a function of the distance \(\|\cdot\|_{2}\) and admit very efficient data structures. Specializing the results of Backurs et al. (2018) for \(\mathsf{K}\), they give data structures which preprocess a set of points \(P\) and can support \((1\pm\varepsilon)\)-approximate kernel evaluation queries of the form \(\sum_{x\in P}\mathsf{K}(x,y)\) for any \(y\in\mathbb{R}^{d}\). The query complexity is \(\mathrm{poly}^{*}(2^{s}/\varepsilon)\) and \(s\) becomes \(\rho/(\rho-1)\). In order to use these for Theorem 1, we incorporate the weights \(((\alpha_{i}-\beta_{j})^{+})^{s-1}\) by augmenting those data structures in Section C (we overview the augmentations shortly). Once this is done, the algorithm can initialize \(\alpha\in\mathbb{R}^{n}\) and \(\beta\in\mathbb{R}^{m}\) to \(0\) and effectively update \(\alpha\) and \(\beta\) in the directions of the partial derivatives in order to increase the objective function.
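A brute-force, quadratic-time evaluation of (5)-(6) may be sketched as follows (assuming `numpy` and the full distance matrix `D`; the helper name `dual_gradient` is illustrative). The fast algorithm replaces the inner weighted sums with the kernel-density-estimation queries described next.

```python
import numpy as np

def dual_gradient(alpha, beta, D, mu, nu, s):
    """Exact partial derivatives (5)-(6), computed with the full distance matrix."""
    w = (1.0 - 1.0 / s) ** (s - 1)
    slack = np.maximum(alpha[:, None] - beta[None, :], 0.0) ** (s - 1)
    sums = slack / D ** s                      # entries ((alpha_i-beta_j)^+)^(s-1) / d_ij^s
    grad_alpha = mu * (1.0 - w * (sums @ nu))  # (5): weighted sum over j for each i
    grad_beta = -nu * (1.0 - w * (mu @ sums))  # (6): weighted sum over i for each j
    return grad_alpha, grad_beta
```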
The only remaining challenge is setting the step size of the update, and ensuring that the function is smooth enough. Note that because of the non-linear penalty term, there is no global Lipschitz constant, but we will argue that our optimization always remains within a smooth enough region if the step size is set appropriately. We do this final argument by applying a simple preprocessing step. The preprocessing will guarantee that the distance between any \(x_{i}\) and \(y_{j}\) is always between \(\varepsilon r\) and \(r\) (which changes \(\mathcal{R}_{\rho}(\mu,\nu)\) by at most \(\varepsilon r\)), and that every non-zero element of the support of \(\mu\) and \(\nu\) is sampled with at least some probability. This means that an update which changes some \(\alpha\) or \(\beta\) does not change the penalty term significantly (because the fact that the distance \(\|x_{i}-y_{j}\|_{2}\) in the denominator is at least \(\varepsilon r\) ensures the penalty does not blow up).
Augmenting Kernel Density Estimates to Incorporate Weights. For \(s>1\), we want to maintain a set of points \(P=\{x_{1},\ldots,x_{n}\}\) in \(\mathbb{R}^{d}\), where each point is associated with a weight \(\alpha_{1},\ldots,\alpha_{n}\in\mathbb{R}\) and a parameter \(\mu_{1},\ldots,\mu_{n}\) which are between \(1/\mathrm{poly}(n)\) and \(1\). A query is specified by another vector \(y\in\mathbb{R}^{d}\) and its weight \(\beta\), and the task is to output
\[\sum_{i=1}^{n}\mu_{i}\cdot((\alpha_{i}-\beta)^{+})^{s-1}\cdot\mathsf{K}(x_{i}, y), \tag{7}\]
where \(\mathsf{K}(x_{i},y)=1/\|x_{i}-y\|_{2}^{s}\). We will augment the data structures from Backurs et al. (2018) as follows. First, partition \(P\) into \(O(\log n/\varepsilon)\) ranges which partition \([1/\mathrm{poly}(n),1]\) according to powers of \(1+\varepsilon\) so as to assume that \(\mu_{i}\) is the same within each range. Note that we know the weights \(\mu_{1},\ldots,\mu_{n}\) during the preprocessing, so that we may perform this partition; however, since we do not know \(\beta\) during the preprocessing, we cannot similarly partition according to the value of \(((\alpha_{i}-\beta)^{+})^{s-1}\).
Instead, we will proceed with the following. For each range \(j\), the resulting set \(P_{j}\) is stored sorted in a binary tree according to the weights \(\alpha\), and let \(\alpha_{\max}\) be the largest weight. Each internal node holds a data structure of Backurs et al. (2018) maintaining points in its subtree. When a query \((y,\beta)\in\mathbb{R}^{d}\times\mathbb{R}\) comes, one may perform the following:
1. Let \(\mathbf{\xi}\) be uniformly drawn from the interval \([0,(\alpha_{\max}-\beta)^{s-1}]\).
2. Find the value \(\beta+\mathbf{\xi}^{1/(s-1)}\) in the binary tree, and we consider the \(k=O(\log n)\) nodes which partition the interval \([\beta+\mathbf{\xi}^{1/(s-1)},\alpha_{\max}]\).
3. Query all \(k\) kernel evaluation data structures stored at those nodes. If \(\widehat{\mathbf{\eta}}_{1},\ldots,\widehat{\mathbf{\eta}}_{k}\) are the estimates output by the \(k\) data structures with \(y\), output \((\alpha_{\max}-\beta)^{s-1}\sum_{\ell=1}^{k}\widehat{\mathbf{\eta}}_{\ell}\).
The main observation is that the sampling automatically incorporates weights. For example, suppose the data structures of Backurs et al. (2018) were exact, then our estimate is an unbiased estimator of (7):
\[\mathbf{E}\left[(\alpha_{\max}-\beta)^{s-1}\sum_{\ell=1}^{k}\widehat{\mathbf{\eta}}_{\ell}\right]=\sum_{i=1}^{n}(\alpha_{\max}-\beta)^{s-1}\cdot\mathbf{Pr}_{\mathbf{\xi}}\left[\alpha_{i}\geq\beta+\mathbf{\xi}^{1/(s-1)}\right]\cdot\mathsf{K}(x_{i},y),\]
and the probability that \(\alpha_{i}\geq\beta+\mathbf{\xi}^{1/(s-1)}\) is exactly \(((\alpha_{i}-\beta)^{+})^{s-1}/(\alpha_{\max}-\beta)^{s-1}\). The variance of the above estimation is too large (which occurs because \(\alpha_{\max}\gg\beta\)), so we make the following minor modification. We partition the interval \([\beta,\alpha_{\max}]\) into poly-logarithmic, geometrically increasing groups, and perform the above process for each group. This is then enough to bound the variance.
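The identity behind this sampling step, \(\mathbf{Pr}_{\mathbf{\xi}}[\alpha_{i}\geq\beta+\mathbf{\xi}^{1/(s-1)}]=((\alpha_{i}-\beta)^{+})^{s-1}/(\alpha_{\max}-\beta)^{s-1}\), can be checked numerically with a small Monte Carlo sketch (illustrative only, assuming `numpy`):

```python
import numpy as np

def check_weight_sampling(alpha_i, beta, alpha_max, s, trials=200_000, seed=0):
    """Empirically verify Pr[alpha_i >= beta + xi^(1/(s-1))] for xi uniform on
    [0, (alpha_max - beta)^(s-1)] against the claimed closed form."""
    rng = np.random.default_rng(seed)
    xi = rng.uniform(0.0, (alpha_max - beta) ** (s - 1), size=trials)
    empirical = float(np.mean(alpha_i >= beta + xi ** (1.0 / (s - 1))))
    exact = max(alpha_i - beta, 0.0) ** (s - 1) / (alpha_max - beta) ** (s - 1)
    return empirical, exact

print(check_weight_sampling(alpha_i=0.7, beta=0.2, alpha_max=1.0, s=3.0))
```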
## 3 A Gradient Ascent Algorithm
### A Simple Preprocessing
Before we give the description of the algorithm, we will run a simple preprocessing step which simplifies our input. We will think of \(\mu\) and \(\nu\) as the distribution over \(\{x_{1},\ldots,x_{n}\}\) and \(\{y_{1},\ldots,y_{m}\}\), respectively. For small parameters \(\sigma,\sigma_{\mu},\sigma_{\nu}>0\), we will define the distributions \(\mu^{\prime}\) and \(\nu^{\prime}\) in the following way:
* First, we consider the points \(x_{1}^{\prime},\ldots,x_{n}^{\prime}\) and \(y_{1}^{\prime},\ldots,y_{m}^{\prime}\) in \(\mathbb{R}^{d+1}\) where we append a coordinate and we let \(x_{i}^{\prime}=(x_{i},\sigma r)\) and \(y_{j}^{\prime}=(y_{j},0)\). This way, we guarantee that for every \(i\in[n]\) and \(j\in[m]\), we satisfy \(\sigma r\leq\|x_{i}^{\prime}-y_{j}^{\prime}\|_{2}\leq r\sqrt{1+\sigma^{2}}\) (where the upper bound follows from the fact \(\|x_{i}-y_{j}\|_{2}\leq r\)).
* We define the sets \(L_{\mu}\subset[n]\) and \(L_{\nu}\subset[m]\) for the indices of \(\mu\) and \(\nu\) which have low probability, i.e., \(L_{\mu}=\{i\in[n]:\mu_{i}<\sigma_{\mu}/n\}\) and \(L_{\nu}=\{j\in[m]:\nu_{j}<\sigma_{\nu}/m\}\). We denote \(\zeta_{\mu}=\sum_{i\in L_{\mu}}\mu_{i}\leq\sigma_{\mu}\) and \(\zeta_{\nu}=\sum_{j\in L_{\nu}}\nu_{j}\leq\sigma_{\nu}\). The distribution \(\mu^{\prime}\) is supported on the points \(x_{1}^{\prime},\ldots,x_{n}^{\prime}\), and \(\nu^{\prime}\) is supported on the points \(y_{1}^{\prime},\ldots,y_{m}^{\prime}\) given by \[\mu_{i}^{\prime}=\left\{\begin{array}{cc}0&i\in L_{\mu}\\ \mu_{i}/(1-\zeta_{\mu})&i\in[n]\setminus L_{\mu}\end{array}\right.\qquad \text{and}\qquad\nu_{j}^{\prime}=\left\{\begin{array}{cc}0&j\in L_{\nu}\\ \nu_{j}/(1-\zeta_{\nu})&j\in[m]\setminus L_{\nu}\end{array}\right..\]
The above transformation has the benefit that we now have a lower bound on the minimum distance between any point from the support of \(\mu^{\prime}\) and any point from the support of \(\nu^{\prime}\), while only increasing the maximum distance by at most a factor of \(\sqrt{1+\sigma^{2}}\). Furthermore, the distributions \(\mu^{\prime}\) and \(\nu^{\prime}\) have all elements of their support with probability at least \(\sigma_{\mu}/n\) and \(\sigma_{\nu}/m\), respectively, since we have removed the low-probability items. Thus, the algorithm below will apply the above perturbation, and we may assume throughout the execution the corresponding properties of \(\mu\) and \(\nu\). Note that as long as we ensure the parameters satisfy
\[\left(n^{\rho-1}\cdot\frac{\sigma}{\sigma_{\mu}^{\rho-1}}+\sigma_{\mu}\right) ^{1/\rho}\leq\varepsilon\qquad\text{and}\qquad\sigma_{\nu}^{1/\rho}\leq\varepsilon,\]
then by the triangle inequality, \(\mathcal{R}_{\rho}(\mu^{\prime},\nu^{\prime})\) is, up to an additive \(2\varepsilon r\), the same as \(\mathcal{R}_{\rho}(\mu,\nu)\). In particular, we can let \(\sigma_{\nu}\) be \(\varepsilon^{\rho}\) and \(\sigma_{\mu}=\varepsilon^{\rho}/n\) and \(\sigma=\varepsilon^{\rho}\).
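In code, the perturbation of this subsection amounts to the following small sketch (assuming `numpy`; the function name `preprocess` and the exact handling of zero-weight points are illustrative choices):

```python
import numpy as np

def preprocess(X, Y, mu, nu, r, sigma, sigma_mu, sigma_nu):
    """Perturbation of Subsection 3.1: lift to R^(d+1) so all cross distances are
    at least sigma*r, then zero out and renormalize the low-probability points."""
    Xp = np.hstack([X, np.full((len(X), 1), sigma * r)])   # x_i' = (x_i, sigma*r)
    Yp = np.hstack([Y, np.zeros((len(Y), 1))])             # y_j' = (y_j, 0)
    mu_p = np.where(mu >= sigma_mu / len(mu), mu, 0.0)
    nu_p = np.where(nu >= sigma_nu / len(nu), nu, 0.0)
    mu_p = mu_p / mu_p.sum()                               # divide by 1 - zeta_mu
    nu_p = nu_p / nu_p.sum()                               # divide by 1 - zeta_nu
    return Xp, Yp, mu_p, nu_p
```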
### Description of the Algorithm
We will assume henceforth that our input distributions \(\mu\) and \(\nu\), whose supports are \(\{x_{1},\ldots,x_{n}\}\) and \(\{y_{1},\ldots,y_{m}\}\), satisfy:
* Every \(i\in[n]\) and \(j\in[m]\), the distance \(\|x_{i}-y_{j}\|_{2}\) is always between \(\sigma r\) and \(r\) (for a small parameter \(\sigma>0\), we have \(r\sqrt{1+\sigma^{2}}\leq 2r\) so, in order to simplify the notation, one may think of \(\sigma\) as being decreased by a factor of \(2\)).
* The distributions \(\mu\) and \(\nu\) have a "granularity" property, so that every \(i\in[n]\) for which \(\mu_{i}\) is non-zero is at least \(\sigma_{\mu}/n\), and every \(j\in[m]\) for which \(\nu_{j}\) is non-zero is at least \(\sigma_{\nu}/m\). This will allow us to upper bound \(1/(\mu_{i}\nu_{j})\leq mn/(\sigma_{\mu}\sigma_{\nu})\).
The algorithm will maintain a setting of the dual variables \((\alpha_{t},\beta_{t})\in\mathbb{R}^{n+m}\) which it will update in each iteration, and it will seek to maximize
\[g(\alpha,\beta)=\sum_{i=1}^{n}\mu_{i}\alpha_{i}-\sum_{j=1}^{m}\nu_{j}\beta_{j} -C_{s}\sum_{i=1}^{n}\sum_{j=1}^{m}\mu_{i}\nu_{j}\left(\frac{(\alpha_{i}-\beta _{j})^{+}}{\|x_{i}-y_{j}\|_{2}}\right)^{s}.\]
In the description of the algorithm below, we will assume access to three sub-routines Est-Alpha, Est-Beta, and Est-Penalty which we specify later (see Subsection B.1 for a description of the guarantees). At a high level, the sub-routines Est-Alpha will help us get an approximation of the gradient \(\nabla g(\alpha_{t},\beta_{t})\) along directions in \(\alpha\), and Est-Beta will help us get an approximation of the gradient \(\nabla g(\alpha_{t},\beta_{t})\) along directions in \(\beta\). The sub-routine Est-Penalty will come in at the end, since we will need to estimate the "penalty" term in order to output an approximation to \(g(\alpha_{t},\beta_{t})\). We will instantiate the algorithm with the following parameters:
* **Accuracy of Terminating Condition**: we denote this parameter \(\varepsilon_{2}>0\), which will be set to \(c_{0}\cdot\varepsilon\cdot(\sigma_{\mu}\sigma_{\nu}/(mn))^{(\rho-1)/\rho}\) for a small enough constant \(c_{0}>0\). This parameter will dictate when our algorithm has found a dual solution which is close enough to the optimal one.
* **Accuracy for Estimation**: There are two parameters which specify the accuracy needed in the estimations Est-Alpha and Est-Beta. We let \(\varepsilon_{1}>0\) denote the multiplicative error bound which we will tolerate, set to \(c_{1}\varepsilon_{2}/s\) for a small enough constant \(c_{1}\), and \(\tau\), an additive error bound which may be interpreted as a granularity condition on the weights \(\alpha,\beta\). It will suffice to set \(\tau=c_{2}\varepsilon_{2}\), but the final dependence on \(\tau\) will be poly-logarithmic in \(1/\tau\), which the notation \(\operatorname{poly}^{*}(\cdot)\) will suppress.
* **Step Size of Gradient Ascent**: The parameter \(\lambda\geq 0\) will denote the step size of our gradient ascent algorithm. We set \(\lambda=c_{3}\varepsilon_{2}\cdot(\sigma/s)^{2}\cdot r^{\rho}\), for a small constant \(c_{3}>0\).
We will also consider a small enough parameter \(\delta>0\) which will denote the failure probabilities of our estimation algorithms. The final dependence on \(\delta\) is only poly-logarithmic, so it will suffice to set \(\delta\) to be a small enough inverse polynomial in all parameters of the algorithm (i.e., \(n,m,1/\varepsilon,1/\sigma,s\)) such that all executions of Est-Alpha, Est-Beta, and Est-Penalty succeed with high probability. For simplicity, we will drop \(\delta\) from the notation, and assume that all executions of our (randomized) sub-routines succeed.
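Figure 1 is not reproduced here. As a rough, quadratic-time stand-in (and not the actual algorithm), the following sketch maximizes the dual \(g\) by plain gradient ascent using the exact derivatives from the sketches above and reports \(g^{1/\rho}\) as in the proof of Theorem 1; it omits the estimators Est-Alpha, Est-Beta, Est-Penalty, the termination test \(\sum_{i}\mu_{i}|\boldsymbol{\eta}_{i}-1|\leq\varepsilon_{2}\), and the precise step-size schedule.

```python
import numpy as np

def r_rho_estimate(X, Y, mu, nu, rho, step=1e-3, tol=1e-6, max_iters=100_000):
    """Plain gradient ascent on the dual g (quadratic time per iteration),
    returning g^(1/rho) as in the final output of the proof of Theorem 1.
    Reuses dual_gradient and dual_objective from the sketches above; rho > 1."""
    s = rho / (rho - 1.0)                                  # Holder conjugate
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    alpha, beta = np.zeros(len(mu)), np.zeros(len(nu))
    for _ in range(max_iters):
        ga, gb = dual_gradient(alpha, beta, D, mu, nu, s)
        if np.abs(ga).sum() + np.abs(gb).sum() < tol:      # crude stationarity test
            break
        alpha += step * ga
        beta += step * gb
    return dual_objective(alpha, beta, D, mu, nu, s) ** (1.0 / rho)
```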
### Analysis of the Algorithm
We now show that the algorithm presented at the top of Subsection 3.2 finds an approximately optimal maximizer of \(g\), assuming the lemmas on the guarantees of the subroutines of Subsection B.1. In particular, this section shows two lemmas. The first lemma shows that if the algorithm does not perform an update, then the value \((\alpha_{t},\beta_{t})\) that the algorithm holds is an approximate maximizer of \(g\); this will then imply, from Lemma 12, that we can output an estimate of \(g(\alpha_{t},\beta_{t})\). The second lemma says that if the algorithm performs an update, then the value of the objective function \(g\) increases by \(\Omega(\varepsilon_{2}\cdot\lambda)\). In particular, since the objective function is always at most \(r^{\rho}\), this implies that the algorithm performs at most \(O(r^{\rho}/(\varepsilon_{2}\cdot\lambda))\) updates before it must terminate. In addition, when it terminates, Lemma 7 implies that the quantity \(\boldsymbol{\omega}\) output by Est-Penalty is at most \(O(r^{\rho})\). This means that, in the final estimate of \(\boldsymbol{\omega}\), it suffices to set \(\tau\) to \(\varepsilon r^{\rho}\).
**Lemma 3** (Termination Condition): _Suppose \((\alpha_{t},\beta_{t})\in\mathbb{R}^{n+m}\) satisfies \(g(\alpha_{t},\beta_{t})\geq 0\) and the algorithms Est-Alpha and Est-Beta produce a sequence of quantities \(\boldsymbol{\eta}_{1},\ldots,\boldsymbol{\eta}_{n}\) and \(\boldsymbol{\xi}_{1},\ldots,\boldsymbol{\xi}_{m}\) which satisfy the guarantees of Lemma 10 and Lemma 11, and_
\[\sum_{i=1}^{n}\mu_{i}\left|\boldsymbol{\eta}_{i}-1\right|\leq\varepsilon_{2} \qquad\text{and}\qquad\sum_{j=1}^{m}\nu_{j}\left|\boldsymbol{\xi}_{j}-1\right| \leq\varepsilon_{2}.\]
_Then, letting \((\alpha^{*},\beta^{*})\) be the maximizer of \(g(\alpha^{*},\beta^{*})\), we have_
\[g(\alpha^{*},\beta^{*})-g(\alpha_{t},\beta_{t})\leq O(1)\cdot\left(\frac{nm}{ \sigma_{\mu}\sigma_{\nu}}\right)^{(\rho-1)/\rho}\cdot r^{\rho}\cdot\left( \varepsilon_{2}+\tau+\frac{\varepsilon_{1}s}{\sigma}\right)\]
**Lemma 4** (Updates Increase Objective): _Suppose \((\alpha_{t},\beta_{t})\in\mathbb{R}^{n+m}\) satisfies \(g(\alpha_{t},\beta_{t})\geq 0\) and \((\alpha_{t+1},\beta_{t})\in\mathbb{R}^{n+m}\) is a vector for which the algorithm Est-Alpha produces the sequence of outputs \(\boldsymbol{\eta}_{1},\ldots,\boldsymbol{\eta}_{n}\) which satisfy the guarantees of Lemma 10 and_
\[\sum_{i=1}^{n}\mu_{i}\left|\boldsymbol{\eta}_{i}-1\right|\geq\varepsilon_{2}.\]
_Then, \(g(\alpha_{t+1},\beta_{t})-g(\alpha_{t},\beta_{t})\geq\Omega(\lambda\cdot \varepsilon_{2})\)._
### Proof of Theorem 1
Consider the algorithm which first runs the preprocessing step of Subsection 3.1 and then executes the main iterative sub-routine of Figure 1 in order to estimate \(\mathcal{R}_{\rho}(\mu,\nu)^{\rho}\). When the algorithm from Figure 1 terminates, we output
\[\left(\sum_{i=1}^{n}\mu_{i}(\alpha_{t})_{i}-\sum_{j=1}^{m}\nu_{j}(\beta_{t})_{j} -\mathbf{\omega}\right)^{1/\rho}.\]
First, we note the running time of the algorithm is as specified. In particular, the preprocessing step takes \(O(n+m)\) time. Notice that each iteration of Figure 1 takes \(O(n+m)\cdot\mathrm{poly}^{*}(2^{s}/\varepsilon_{1})\) time, which is \(O(n+m)\cdot\mathrm{poly}^{*}(2^{\rho/(\rho-1)}(mn)^{(\rho-1)/\rho}/\varepsilon)\) by setting of \(\varepsilon_{1}\), \(s\), and \(\sigma_{\mu},\sigma_{\nu}\) and \(\sigma\). Furthermore, since \(g(\alpha_{0},\beta_{0})=0\) and \(g(\alpha_{t},\beta_{t})\leq r^{\rho}\), Lemma 4 will imply that the number of iterations is at most \(O(r^{\rho}/(\lambda\varepsilon_{2}))\), and by the setting of \(\varepsilon_{2}\) and \(\lambda\), this is at most \(\mathrm{poly}^{*}((mn)^{(\rho-1)/\rho}/\varepsilon)\). The total running time then follows.
In order to show correctness, note that by the setting of \(\varepsilon_{2}\) and \(\varepsilon_{1}\), when the algorithm terminates, we have
\[\mathcal{R}_{\rho}(\mu,\nu)^{\rho}-\left(\sum_{i=1}^{n}\mu_{i}(\alpha_{t})_{i}-\sum_{j=1}^{m}\nu_{j}(\beta_{t})_{j}-C_{s}\sum_{i=1}^{n}\sum_{j=1}^{m}\mu_{i}\nu_{j}\left(\frac{((\alpha_{t})_{i}-(\beta_{t})_{j})^{+}}{\|x_{i}-y_{j}\|_{2}}\right)^{s}\right)\leq\varepsilon\cdot r^{\rho},\]
and we are guaranteed by Lemma 12 and Lemma 7 and the setting of \(\tau\) for Est-Penalty that
\[\left|\mathbf{\omega}-C_{s}\sum_{i=1}^{n}\sum_{j=1}^{m}\mu_{i}\nu_{j}\left(\frac{ ((\alpha_{t})_{i}-(\beta_{t})_{j})^{+}}{\|x_{i}-y_{j}\|_{2}}\right)^{s}\right| \leq O(\varepsilon\cdot r^{\rho}).\]
Therefore, our output (using the fact \(\mathcal{R}_{\rho}(\mu,\nu)^{\rho}\leq r^{\rho}\)), will satisfy
\[\left|\left(\sum_{i=1}^{n}\mu_{i}(\alpha_{t})_{i}-\sum_{j=1}^{m}\nu_{j}(\beta _{t})_{j}-\mathbf{\omega}\right)^{1/\rho}-\mathcal{R}_{\rho}(\mu,\nu)\right|\leq O (\varepsilon\cdot r).\]
## 4 Open Problems
We hope that our approach, of slightly changing the problem, will prove useful for other Euclidean problems for which we do not have fast algorithms with \((1+\varepsilon)\)-approximations. We mention two immediate open problems:
* Multiplicative \((1+\varepsilon)\)-approximations for \(\mathcal{R}_{\rho}(\mu,\nu)\). Our algorithms achieved additive \(\varepsilon r\)-approximations for datasets bounded within distance \(r\), but a more accurate multiplicative \((1+\varepsilon)\)-approximation would be desired when the dataset may not necessarily be bounded. Does there exist an algorithm which is just as fast as Theorem 1 and outputs a number \(\widehat{\mathbf{\eta}}\) which is between \(\mathcal{R}_{\rho}(\mu,\nu)\) and \((1+\varepsilon)\mathcal{R}_{\rho}(\mu,\nu)\) with high probability?
* Accurate Approximations for EMD. It is still possible that for any \(\varepsilon>0\), there exists an algorithm which can estimate the cost of EMD\((\mu,\nu)\) up to a multiplicative \((1+\varepsilon)\)-factor in time
\(n\cdot\mathrm{poly}(d\log n/\varepsilon)\). Does there exist such an algorithm, or are there compelling complexity-theoretic reasons why this may not be possible? We note that Rohatgi (2019) shows that, in the case where \(\mu\) and \(\nu\) are uniform on a support of size \(n\), such an algorithm should not be able to output a \((1+\varepsilon)\)-approximate matching between points of \(\mu\) and \(\nu\) (assuming the Hitting Set conjecture). However, no such evidence against near-linear time algorithms for the _cost_ of EMD exists.
## Acknowledgments
Part of this work was done while Erik Waingarten was a postdoc at Stanford University, supported by an NSF postdoctoral fellowship and by Moses Charikar's Simons Investigator Award.
|
2305.06059 | A characterization of socular highest weight modules and Richardson
orbits of classical types | Let $\mathfrak{g}$ be a simple complex Lie algebra of classical type with a
Cartan subalgebra $\mathfrak{h}$. We fix a standard parabolic subalgebra
$\mathfrak{p}\supset \mathfrak{h}$. The socular simple modules are just those
highest weight modules with largest possible Gelfand-Kirillov dimension in the
corresponding parabolic category $\mathcal{O}^{\mathfrak{p}}$. In this article,
we will give an explicit characterization for these modules. When the module is
integral, our characterization is given by the information of the corresponding
Young tableau associated to the given highest weight module. When the module is
nonintegral, we still have some characterization by using the results in the
integral case. In our characterization, we define a particular Young diagram
called Z-diagram. From this diagram, we can describe the partition type of the
unique Richardson orbit associated to the given parabolic subalgebra
$\mathfrak{p}$. | Zhanqiang Bai, Shaoyi Zhang | 2023-05-10T11:25:13Z | http://arxiv.org/abs/2305.06059v1 | # A characterization of socular highest weight modules and Richardson orbits of classical types
###### Abstract.
Let \(\mathfrak{g}\) be a simple complex Lie algebra of classical type with a Cartan subalgebra \(\mathfrak{h}\). We fix a standard parabolic subalgebra \(\mathfrak{p}\supset\mathfrak{h}\). The socular simple modules are just those highest weight modules with largest possible Gelfand-Kirillov dimension in the corresponding parabolic category \(\mathcal{O}^{\mathfrak{p}}\). In this article, we will give an explicit characterization for these modules. When the module is integral, our characterization is given by the information of the corresponding Young tableau associated to the given highest weight module. When the module is nonintegral, we still have some characterization by using the results in the integral case. In our characterization, we define a particular Young diagram called Z-diagram. From this diagram, we can describe the partition type of the unique Richardson orbit associated to the given parabolic subalgebra \(\mathfrak{p}\).
Key words and phrases:Highest weight module; Gelfand-Kirillov dimension; Young tableau; Parabolic category; Richardson orbit.
example, see Refs. [1, 2, 10, 17, 16, 18, 19]. Another motivation for us to study these modules is that a scalar generalized Verma module \(M_{I}(\lambda)\) (here 'scalar' means that \(\dim F(\lambda)=1\)) is reducible if and only if the Gelfand-Kirillov dimension of its irreducible quotient \(L(\lambda)\) is strictly less than \(\dim(\mathfrak{u}_{I})\) (see [1]). From [1], we can write \(\lambda=w\mu\) for a unique anti-dominant \(\mu\in\mathfrak{h}^{*}\) and a unique minimal length element \(w\in W_{[\lambda]}\), where \(W_{[\lambda]}\) is the integral Weyl group of \(\lambda\). It is known that a highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is socular if and only if \(w\) belongs to the Kazhdan-Lusztig right cell containing \(w_{0}^{\mathfrak{p}}\), where \(w_{0}^{\mathfrak{p}}\) is the longest element in the parabolic subgroup of \(W\) corresponding to the parabolic subalgebra \(\mathfrak{p}\), see [17, Theorem 48]. But outside of type \(A\) there is no nice combinatorial description of KL right cells (like via the RS-correspondence in type \(A\)). Garfinkle [11, 12, 13] used domino tableaux to describe KL right cells, which are not easy for people to use.
Recently, Bai-Xie [1] and Bai-Xiao-Xie [1] generalized the famous Robinson-Schensted algorithm and found some practical combinatorial algorithms to compute the Gelfand-Kirillov dimension of any simple highest weight module when \(\mathfrak{g}\) is a classical Lie algebra. In this article, we will use their algorithms to give an explicit characterization for these socular simple modules.
Now we let \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\). We choose \(\Phi^{+}=\{e_{i}-e_{j}|1\leq i<j\leq n\}\cup\{e_{i}|1\leq i\leq n\}\) and a simple system \(\Delta=\{\alpha_{i}:=e_{i}-e_{i+1}|1\leq i\leq n-1\}\cup\{\alpha_{n}:=e_{n}\}\). We choose a subset \(I\subset\Delta\). There will exist some positive integers \(n_{1},n_{2},...,n_{k-1}\) with \(n_{1}+n_{2}+...+n_{k-1}\leq n\) such that \(\Delta\setminus I=\{\alpha_{p_{1}},\alpha_{p_{2}},...,\alpha_{p_{k-1}}\}\), where \(p_{t}=\sum_{i=1}^{t}n_{i}\). This subset \(I\) will generate a subsystem \(\Phi_{I}\subset\Phi\). Let \(\mathfrak{p}_{I}\) be the standard parabolic subalgebra corresponding to \(I\) with Levi decomposition \(\mathfrak{p}_{I}=\mathfrak{l}_{I}\oplus\mathfrak{u}_{I}\). We call \(\mathfrak{p}_{I}\) a _standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\)_ with \(n_{k}=n-\sum_{i=1}^{k-1}n_{i}\). Note that we may have \(n_{k}=0\). We can similarly define standard parabolic subalgebras for other type Lie algebras.
For an integral weight \(\lambda\in\mathfrak{h}^{*}\), we write \(\lambda=(\lambda_{1},...,\lambda_{n})\) and \(\lambda^{-}=(\lambda_{1},...,\lambda_{n},-\lambda_{n},...,-\lambda_{1})\). By using the famous R-S algorithm in [1], we can get a Young tableau \(P(\lambda)\), see §2.1. We use \(p(\lambda)=(p_{1},...,p_{N})\) to denote its shape. We say \(q=(q_{1},\cdots,q_{N})\) is the shape of the dual diagram of a Young diagram \(P\) with shape \(p=(p_{1},\cdots,p_{N})\) and write \(q=p^{t}\) if \(q_{i}\) is the length of the \(i\)-th column of the Young diagram \(P\).
We recall the definition of Hollow diagrams in [1].
**Definition 1.1**.: For a Young diagram \(P\) with shape \(p\), use \((k,l)\) to denote the box in the \(k\)-th row and the \(l\)-th column. We say the box \((k,l)\) is _even_ (resp. _odd_) if \(k+l\) is even (resp. odd). Let \(p_{i}^{\rm ev}\) (resp. \(p_{i}^{\rm odd}\)) be the number of even (resp. odd) boxes in the \(i\)-th row of the Young diagram \(P\). One can easily check that
\[p_{i}^{\rm ev}=\begin{cases}\left\lceil\frac{p_{i}}{2}\right\rceil&\text{ if $i$ is odd,}\\ \left\lfloor\frac{p_{i}}{2}\right\rfloor&\text{ if $i$ is even,}\end{cases}\quad p_{i}^{\rm odd}=\begin{cases}\left\lfloor\frac{p_{i}}{2}\right\rfloor&\text{ if $i$ is odd,}\\ \left\lceil\frac{p_{i}}{2}\right\rceil&\text{ if $i$ is even.}\end{cases}\]
Here for \(a\in\mathbb{R}\), \(\left\lfloor a\right\rfloor\) is the largest integer \(n\) such that \(n\leq a\), and \(\left\lceil a\right\rceil\) is the smallest integer \(n\) such that \(n\geq a\). For convenience, we set
\[p^{\rm ev}=(p_{1}^{\rm ev},p_{2}^{\rm ev},\cdots)\quad\text{and}\quad p^{\rm odd }=(p_{1}^{\rm odd},p_{2}^{\rm odd},\cdots).\]
**Example 1.2**.: Let \(p=(5,5,4,3,3)\) be the shape of a Young diagram \(P\). The even and odd boxes in \(P\) are marked as follows:
\[\begin{array}{|c|c|c|c|c|}\hline E&O&E&O&E\\ \hline O&E&O&E&O\\ \hline E&O&E&O\\ \hline O&E&O\\ \hline E&O&E\\ \hline\end{array}.\]
Then \(p^{\rm ev}=(3,2,2,1,2)\) and \(p^{\rm odd}=(2,3,2,2,1)\).
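The counts in Definition 1.1 and Example 1.2 can be reproduced with a short sketch (the helper name `even_odd_row_counts` is illustrative):

```python
def even_odd_row_counts(shape):
    """p^ev and p^odd from Definition 1.1: box (k, l) is even when k + l is even
    (rows k and columns l indexed from 1)."""
    p_ev = [sum(1 for l in range(1, row_len + 1) if (k + l) % 2 == 0)
            for k, row_len in enumerate(shape, start=1)]
    p_odd = [row_len - e for row_len, e in zip(shape, p_ev)]
    return p_ev, p_odd

print(even_odd_row_counts([5, 5, 4, 3, 3]))   # ([3, 2, 2, 1, 2], [2, 3, 2, 2, 1])
```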
Now we can give our characterization of socular highest weight modules.
**Theorem 1.3**.: _Let \(\mathfrak{g}=\mathfrak{sl}(n,\mathbb{C})\). Suppose \(\mathfrak{p}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). A simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is socular if and only if \(p(\lambda)^{t}=(m_{1},...,m_{k})\), where \((m_{1},...,m_{k})\) is the arrangement of the sequence \((n_{1},n_{2},...,n_{k})\) in descending order._
Since this result is well-known in the language of KL right cells [10], we will omit the proof of it in this article.
**Theorem 1.4**.: _Let \(\mathfrak{g}=\mathfrak{sp}(n,\mathbb{C})\) or \(\mathfrak{so}(2n+1,\mathbb{C})\). Suppose \(\mathfrak{p}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). A simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is socular if and only if \(P(\lambda^{-})\) has the same odd boxes as a Z-diagram of type \((n_{k};n_{1},...,n_{k-1})\)._
Here Z-diagram means a Young diagram defined in Definition 3.3. The numbers of odd boxes and even boxes in a Z-diagram of type \((n_{k};n_{1},...,n_{k-1})\) will be given in Corollary 3.6.
Similarly we have the following.
**Theorem 1.5**.: _Let \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\). Suppose \(\mathfrak{p}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). When \(n_{k}\neq 1\), a simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is socular if and only if \(P(\lambda^{-})\) has the same even boxes as a Z-diagram of type \((n_{k};n_{1},...,n_{k-1})\). When \(n_{k}=1\), a simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is socular if and only if \(P(\lambda^{-})\) has the same even boxes as a Z-diagram of type \((0;n_{1},...,n_{k-2},n_{k-1}+1)\)._
The second purpose of this article is about Richardson orbits. Let \(G\) be a complex reductive Lie group with Lie algebra \(\mathfrak{g}\). Suppose \(\mathfrak{p}=\mathfrak{l}\oplus\mathfrak{u}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). The \(G\)-saturation of \(\mathfrak{u}\) is a nilpotent orbit of \(\mathfrak{g}\), which meets \(\mathfrak{u}\) in an open dense set. Such an orbit is called a _Richardson orbit_. It is uniquely determined by \(\mathfrak{p}\). Richardson orbits play an important role in the representation theory of \(G\) and have attracted a considerable amount of attention (e.g., [14, 15, 16, 17, 18]). In this article, from the Z-diagram of type \((n_{k};n_{1},...,n_{k-1})\), we will use the H-algorithm defined in [BMW] to get the Richardson orbit associated to the given parabolic subalgebra \(\mathfrak{p}\) of type \((n_{1},...,n_{k})\). We find that our result only uses collapse of partitions during the process of the H-algorithm.
The paper is organized as follows: in §2, we recall the algorithms for computing Gelfand-Kirillov dimensions of highest weight modules in [1] and the H-algorithm (see [BMW]) from which we can construct a special partition (in the sense of Lusztig) from a domino type partition. In §3, we prove our main theorem for types \(B\) and \(C\) after we define the Z-diagram for a parabolic subalgebra \(\mathfrak{p}\) of type \((n_{1},...,n_{k})\). In §4, we prove our main theorem for type \(D\). In §5, we will give a characterization for the nonintegral socular highest weight modules. In §6, we give a characterization for the partition of the Richardson orbit associated to the parabolic subalgebra \(\mathfrak{p}\) of type \((n_{1},...,n_{k})\).
## 2. Preliminaries
Before we prove our main theorems, we first recall some useful results about Gelfand-Kirillov dimension. The details can be found in [20] and [1].
### Gelfand-Kirillov dimension
Let \(M\) be a \(U(\mathfrak{g})\)-module generated by a finite-dimensional subspace \(M_{0}\) (that is, \(M\) is finitely generated). Let \(U_{n}(\mathfrak{g})\) be the standard filtration of \(U(\mathfrak{g})\). Set \(M_{n}=U_{n}(\mathfrak{g})\cdot M_{0}\) and \(\operatorname{gr}(M)=\bigoplus\limits_{n=0}^{\infty}\operatorname{gr}_{n}M\), where \(\operatorname{gr}_{n}M=M_{n}/M_{n-1}\). Thus \(\operatorname{gr}(M)\) is a graded module of \(\operatorname{gr}(U(\mathfrak{g}))\simeq S(\mathfrak{g})\).
**Definition 2.1**.: The _Gelfand-Kirillov dimension_ of \(M\) is defined by
\[\operatorname{GKdim}M=\varlimsup_{n\to\infty}\frac{\log\dim(U_{n}(\mathfrak{ g})M_{0})}{\log n}.\]
It is easy to see that the above definition is independent of the choice of \(M_{0}\).
Denote \(\varphi_{M,M_{0}}(n)=\dim(U_{n}(\mathfrak{g})M_{0})\). By a theorem of Hilbert and Serre, there exists a unique polynomial \(\tilde{\varphi}_{M,M_{0}}(n)\) such that \(\varphi_{M,M_{0}}(n)=\tilde{\varphi}_{M,M_{0}}(n)\) for large \(n\). The leading term of \(\tilde{\varphi}_{M,M_{0}}(n)\) is \(\frac{c(M)}{(d_{M})!}n^{d_{M}}\), where \(c(M)\) is an integer, called _Bernstein degree_. The integer \(d_{M}\) is the Gelfand-Kirillov dimension of \(M\), that is, \(d_{M}=\operatorname{GKdim}(M)\).
Now we let \(\mathfrak{g}\) be a complex simple Lie algebra. We choose a subset \(I\subset\Delta\). This \(I\) will generate a subsystem \(\Phi_{I}\subset\Phi\). Let \(\mathfrak{p}_{I}\) be the standard parabolic subalgebra corresponding to \(I\) with Levi decomposition \(\mathfrak{p}_{I}=\mathfrak{l}_{I}\oplus\mathfrak{u}_{I}\). Let \(F(\lambda)\) be a finite-dimensional irreducible \(\mathfrak{l}_{I}\)-module with highest weight \(\lambda-\rho\in\mathfrak{h}^{*}\). The generalized Verma module \(M_{I}(\lambda)=U(\mathfrak{g})\otimes_{U(\mathfrak{p}_{I})}F(\lambda)\) has maximal possible Gelfand-Kirillov dimension in \(\mathcal{O}^{\mathfrak{p}}\) (we will omit \(I\) if there is no confusion). That is, \(\operatorname{GKdim}(M(\lambda))=\dim(\mathfrak{u})\).
**Lemma 2.2**.: _Let \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\) or \(\mathfrak{sp}(n,\mathbb{C})\). Suppose \(\mathfrak{p}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). Then we have_
\[|\Phi_{I}^{+}|=\frac{1}{2}\sum\limits_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2}.\]
**Lemma 2.3**.: _Let \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\). Suppose \(\mathfrak{p}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). Then we have_
\[|\Phi_{I}^{+}|=\begin{cases}\frac{1}{2}\sum\limits_{j=1}^{k-1}n_{j}(n_{j}-1)+ n_{k}^{2}-n_{k}&\text{if }n_{k}\neq 1,\\ \frac{1}{2}\sum\limits_{j=1}^{k-2}n_{j}(n_{j}-1)+\frac{1}{2}(n_{k-1}+1)n_{k-1} &\text{if }n_{k}=1.\end{cases}\]
From Lemma 2.3, when \(n_{k}=1\), we can regard \(\mathfrak{p}_{I}\) as a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k-2},n_{k-1}+1,0)\). These two parabolic subalgebras are isomorphic. From now on, we only consider the case of \(n_{k}=0\) in this article.
So \(\dim(\mathfrak{u})=n^{2}-|\Phi_{I}^{+}|\) for types \(B\) and \(C\), and \(\dim(\mathfrak{u})=n^{2}-n-|\Phi_{I}^{+}|\) for type \(D\). We denote this number by \(d_{m}(\mathfrak{p})\). Now our problem is to find out all simple modules \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) with maximal Gelfand-Kirillov dimension \(d_{m}(\mathfrak{p})\).
From Bai-Xie [1], we know that for each integral weight \(\lambda\), by using the Robinson-Schensted insertion algorithm, there is a Young tableau \(P(\lambda)\) corresponding to it. We recall this method from Bai-Xie [1]. Write \(\lambda=(\lambda_{1},...,\lambda_{n})\). Let \(P_{0}\) be an empty Young tableau. Assume that we have constructed the Young tableau \(P_{k}\) associated to \((\lambda_{1},\cdots,\lambda_{k})\), \(0\leq k<n\). Then \(P_{k+1}\) is obtained by adding \(\lambda_{k+1}\) to \(P_{k}\) as follows. First add \(\lambda_{k+1}\) to the first row of \(P_{k}\) by replacing the leftmost entry \(x\) in the first row which is strictly bigger than \(\lambda_{k+1}\). (If there is no such entry \(x\), we just add a box with entry \(\lambda_{k+1}\) to the right side of the first row, and end this process.) Then add \(x\) to the next row in the same way as adding \(\lambda_{k+1}\) to the first row. Then we put \(P(\lambda)=P_{n}\). \(\lambda\) and \(\mu\) are called _Young equivalent_, written as \(\lambda\overset{Y}{\cong}\mu\), if \(P(\lambda)=P(\mu)\). The shape of \(P(\lambda)\) is denoted by \(p(\lambda)\). We identify \(p(\lambda)\) with the partition \(\mathbf{p}\) corresponding to the Young diagram \(P(\lambda)\).
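The insertion procedure just recalled can be sketched as follows (illustrative code, not taken from [1]; entries may be any comparable numbers, e.g. the coordinates of \(\lambda\) or \(\lambda^{-}\)):

```python
import bisect

def rs_insert_tableau(seq):
    """Row insertion as recalled above: each entry bumps the leftmost strictly
    larger entry of the current row down to the next row."""
    rows = []
    for x in seq:
        for row in rows:
            pos = bisect.bisect_right(row, x)   # leftmost entry strictly > x
            if pos == len(row):                 # nothing larger: place x here
                row.append(x)
                break
            row[pos], x = x, row[pos]           # bump the displaced entry
        else:
            rows.append([x])                    # start a new bottom row
    return rows

def shape(rows):
    return [len(r) for r in rows]
```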
**Definition 2.4**.: Let \(x\in\operatorname{Seq}_{n}(\Gamma)\). Define
\[F_{b}(x):=\sum_{i\geq 1}(i-1)p_{i}^{\operatorname{odd}}=\sum_{2\nmid i}(q_{i}^{\operatorname{odd}})^{2}+\sum_{2\mid i}q_{i}^{\operatorname{odd}}(q_{i}^{\operatorname{odd}}-1),\] \[F_{d}(x):=\sum_{i\geq 1}(i-1)p_{i}^{\operatorname{ev}}=\sum_{2\nmid i}q_{i}^{\operatorname{ev}}(q_{i}^{\operatorname{ev}}-1)+\sum_{2\mid i}(q_{i}^{\operatorname{ev}})^{2},\]
where \(p=p(x)=(p_{1},p_{2},\cdots)\) and \(q=q(x)=p(x)^{t}=(q_{1},q_{2},\cdots)\).
For \(x=(x_{1},x_{2},\cdots,x_{n})\in\operatorname{Seq}_{n}(\Gamma)\), set
\[x^{-}= (x_{1},x_{2},\cdots,x_{n-1},x_{n},-x_{n},-x_{n-1},\cdots,-x_{2},- x_{1}),\] \[{}^{-}x= (-x_{n},-x_{n-1},\cdots,-x_{2},-x_{1},x_{1},x_{2},\cdots,x_{n-1},x_{n}).\]
**Proposition 2.5** ([1, Theorem 1.5]).: _Let \(\lambda=(\lambda_{1},\lambda_{2},\cdots,\lambda_{n})\in\mathfrak{h}^{*}\) be an integral weight. Then_
\[\operatorname{GKdim}L(\lambda)=\begin{cases}n^{2}-F_{b}(\lambda^{-})&\text{ if }\Phi=B_{n}/C_{n},\\ n^{2}-n-F_{d}(\lambda^{-})&\text{ if }\Phi=D_{n}.\end{cases}\]
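Combining the insertion sketch with the counts from Definition 1.1, Proposition 2.5 can be evaluated mechanically for an integral weight; the helper below reuses `rs_insert_tableau`, `shape`, and `even_odd_row_counts` from the earlier sketches, and its name `gk_dimension` is an illustrative choice.

```python
def gk_dimension(lam, lie_type):
    """GKdim L(lambda) for an integral weight lambda via Proposition 2.5."""
    n = len(lam)
    lam_minus = list(lam) + [-x for x in reversed(lam)]            # lambda^-
    p = shape(rs_insert_tableau(lam_minus))
    p_ev, p_odd = even_odd_row_counts(p)
    if lie_type in ("B", "C"):
        return n * n - sum(i * v for i, v in enumerate(p_odd))     # n^2 - F_b(lambda^-)
    return n * n - n - sum(i * v for i, v in enumerate(p_ev))      # n^2 - n - F_d(lambda^-)
```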
Let \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\) or \(\mathfrak{sp}(n,\mathbb{C})\). When \(\mathfrak{p}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\), the maximal Gelfand-Kirillov dimension of simple modules in \(\mathcal{O}^{\mathfrak{p}}\) is \(\dim(\mathfrak{u})=n^{2}-(\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2})\), while by Proposition 2.5 we have \(\operatorname{GKdim}L(\lambda)=n^{2}-F_{b}(\lambda^{-})\).
So a simple module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(F_{b}(\lambda^{-})=|\Phi_{I}^{+}|\), which is
\[\sum_{i\geq 1}(i-1)p_{i}^{\operatorname{odd}}=\frac{1}{2}\sum_{j=1}^{k-1}n_{j} (n_{j}-1)+n_{k}^{2}.\]
### H-algorithm
For an element \(w\) in the Weyl group \(W\) of some Lie algebras, we have a partition \(p(^{-}w)\). We call it a _partition of domino type_ since it has the same shape with a domino tableau [1, Proposition 4.6]. In general, a partition \(\mathbf{p}\) (Young diagram \(P\)) is also called a partition (Young diagram) of domino type if it has the same corresponding diagram with some partition \(p(^{-}w)\) for some \(w\in W\). We recall the H-algorithms defined in [1] which can associate a special partition (in the sense of Lusztig, see [1] or [1]) to a given domino type partition such that they have the same odd (resp. even) boxes for type \(B\) and \(C\) (resp. type \(D\)).
**Definition 2.6** (H-algorithm of type \(B\)).: Let \(\mathbf{p}\) be a partition of domino type (whose Young diagram is \(P\)) of \(2n\), then we can get a special partition \(\mathbf{p}^{s}\) of type \(B_{n}\) by the following steps:
1. Construct the hollow diagram \(P^{\mathrm{odd}}\) consisting of odd boxes;
2. Label the rows starting from \(1\) but avoiding all the consecutive rows ending with the shape \(\boxed{O}\);
3. Keep even labeled rows unchanged and put \(\boxed{E}\) on the end of each odd labeled row;
4. Fill the holes. Then if there are only \(2n\) boxes in our new Young diagram, we put a box \(\boxed{E}\) below the last row and we are done. If there are \(2n+1\) boxes in our new Young diagram, we are done.
We call the above algorithm _H-algorithm of type \(B\)_.
**Example 2.7**.: Let \(\mathbf{p}=[6,4^{3},2,2,1,1]\) be a partition of domino type of \(24\). Applying the H-algorithm of type \(B\) (the intermediate diagrams are omitted here), we obtain that \(\mathbf{p}^{s}=[7,4,4,3,3,1,1,1,1]\) is a special partition of type \(B_{12}\).
**Definition 2.8** (H-algorithm of type \(C\)).: Let \(\mathbf{p}\) be a partition of domino type (whose Young diagram is \(P\)) of \(2n\), then we can get a special partition \(\mathbf{p}^{s}\) of type \(C\) by the following steps:
1. Construct the hollow diagram \(P^{\mathrm{odd}}\) consisting of odd boxes;
2. Label the rows starting from \(1\) but avoiding all the consecutive rows ending with the shape \(\boxed{O}\) (when two consecutive rows have the shape \(\boxed{E}\) in \(P\), these two rows will not be labeled);
3. Keep odd labeled rows unchanged and put \(\boxed{E}\) on the end of each even labeled row;
4. Fill the holes and we are done.
We call the above algorithm _Hollow diagram algorithm_ or _H-algorithm of type \(C\)_.
**Definition 2.9** (H-algorithm of type \(D\)).: Let \(\mathbf{p}\) be a partition of domino type (whose Young diagram is \(P\)) of \(2n\), then we can get a special partition \(\mathbf{p}^{s}\) of type \(D_{n}\) by the following steps:
1. Construct the hollow diagram \(P^{\mathrm{ev}}\) consisting of even boxes;
2. Label the rows starting from \(1\) but avoiding all the consecutive rows ending with the shape \(\boxed{E}\);
3. Keep odd labeled rows unchanged and put \(\boxed{O}\) on the end of each even labeled row;
4. Fill the holes. Then if there are only \(2n-1\) boxes in our new Young diagram, we put a box \(\boxed{O}\) below the last row and we are done. If there are \(2n\) boxes in our new Young diagram, we are done.
We call the above algorithm _H-algorithm of type \(D\)_.
Given two partitions \(\mathbf{d}=[d_{1},...,d_{k}]\) and \(\mathbf{f}=[f_{1},...,f_{k}]\) of some integer \(m\), we say that \(\mathbf{d}\)_dominates_\(\mathbf{f}\) if the following condition holds:
\[\sum_{1\leq j\leq l}d_{j}\geq\sum_{1\leq j\leq l}f_{j}\]
for \(1\leq l\leq k\).
**Definition 2.10** (Collapse).: Let \(\mathbf{d}=[d_{1},...,d_{k}]\) be a partition of \(2n+1\). There is a unique largest partition of \(2n+1\) of type \(B_{n}\) dominated by \(\mathbf{d}\). If \(\mathbf{d}\) is not a partition of type \(B_{n}\), then one of its even parts must occur with odd multiplicity. Let \(q\) be the largest such part. Then replace the last occurrence of \(q\) in \(\mathbf{d}\) by \(q-1\) and the first subsequent part \(r\) strictly less than \(q-1\) by \(r+1\). Repeat this process until a partition of type \(B_{n}\) is obtained. This new partition of type \(B_{n}\) is called the _\(B\)-collapse_ of \(\mathbf{d}\), and we denote it by \(\mathbf{d}_{B}\). Similarly there are \(D\)-collapse and \(C\)-collapse of \(\mathbf{d}\).
Let \(X\) stand for \(B,C\) or \(D\). There is another concept called _expansion_ which can compute the smallest special partition dominating a given partition \(\mathbf{q}\) of type \(X\). This special partition is denoted by \(\mathbf{q}^{X}\).
Some more properties for the collapse and expansion of partitions can be found in [10].
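The collapse of Definition 2.10 is also easy to carry out mechanically. The following sketch follows the description literally (for types \(B\) and \(D\) the offending parts are the even parts occurring with odd multiplicity, for type \(C\) the odd ones), padding with an implicit trailing zero when the part to be increased lies beyond the end of the partition; the \(D\)-collapse in the usage line is the one appearing in Example 6.11 below.

```python
from collections import Counter

def collapse(parts, typ):
    """X-collapse of a partition, following Definition 2.10.

    typ = 'B' or 'D': even parts must end up with even multiplicity;
    typ = 'C'       : odd parts must end up with even multiplicity."""
    bad_parity = 0 if typ in ("B", "D") else 1
    parts = list(parts)
    while True:
        counts = Counter(parts)
        bad = [q for q in counts if q > 0 and q % 2 == bad_parity and counts[q] % 2 == 1]
        if not bad:
            return parts
        q = max(bad)
        i = max(k for k, v in enumerate(parts) if v == q)
        parts[i] = q - 1                  # lower the last occurrence of q
        parts.append(0)                   # the implicit trailing zero of the partition
        j = next(k for k in range(i + 1, len(parts)) if parts[k] < q - 1)
        parts[j] += 1                     # raise the first later part strictly below q - 1
        parts = [v for v in parts if v > 0]

# D-collapse appearing in Example 6.11 below:
assert collapse([5, 3, 3, 3, 3, 3, 2], "D") == [5, 3, 3, 3, 3, 3, 1, 1]
```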
**Remark 2.11**.: _Suppose that during the process of collapse and expansion the odd boxes are fixed for types \(B\) and \(C\) (resp. the even boxes are fixed for type \(D\)) and the moving even box cannot meet another even box in the same row (resp. the moving odd box cannot meet another odd box in the same row for type \(D\)). We call these two operations restricted collapse and expansion, and denote them by \(\mathbf{q}_{\bar{X}}\) and \(\mathbf{q}^{\bar{X}}\). Then \(\mathbf{q}\), \(\mathbf{q}_{\bar{X}}\) and \(\mathbf{q}^{\bar{X}}\) have the same odd (resp. even) boxes for types \(B\) and \(C\) (resp. type \(D\)). When \(\mathbf{p}\) is a domino type partition, \(H(\mathbf{p})\) will be a special partition which has the same odd (resp. even) boxes for types \(B\) and \(C\) (resp. type \(D\)). Thus we have \(H(\mathbf{p})=(\mathbf{p}_{\bar{X}})^{\bar{X}}\)._
## 3. Proof of Theorem 1.4
In this section, we will prove Theorem 1.4. Let \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\) or \(\mathfrak{sp}(n,\mathbb{C})\) in this section.
Firstly, we need two lemmas.
**Lemma 3.1**.: _Suppose \(\{m_{1},m_{2},...,m_{k}\}\) is a sequence of decreasing positive integers. We have a function \(f(x_{1},...,x_{k})=\sum_{j=1}^{k}x_{j}^{2}\), where \((x_{1},x_{2},...,x_{k})\in D\) with_
\[D=\{(x_{1},...,x_{k})\in\mathbb{R}^{k}\ |x_{1}\geq m_{1},x_{1}+x_{2}\geq m_{1} +m_{2},...,\sum_{j=1}^{k-1}x_{j}\geq\sum_{j=1}^{k-1}m_{j},\]
\[\sum_{j=1}^{k}x_{j}=\sum_{j=1}^{k}m_{j},x_{1}\geq x_{2}\geq...\geq x_{k}\geq 0\}.\]
_Then \(f\) will take the minimal value \(\sum_{j=1}^{k}m_{j}^{2}\) if and only if \(x_{j}=m_{j}\) for all \(1\leq j\leq k\)._
Proof.: When \(f\) is a function of two variables, the conclusion is obvious. We assume the conclusion is true for all functions containing less than or equal to \(k-1\) variables.
Now we assume \(f\) is a function of \(k\) variables. We denote \(m=\sum_{j=1}^{k}m_{j}\). If there is no restriction, the function \(f\) will take the minimal value at the point \(P_{0}=(\frac{m}{k},...,\frac{m}{k})\). Now the problem is equivalent to finding out all points on the given plane \(\Pi:\sum_{j=1}^{k}x_{j}=m\) with some restriction conditions such that the distance from the origin to these points will take the minimal value \(d=\sqrt{\sum_{j=1}^{k}m_{j}^{2}}\). From the condition \(x_{1}\geq m_{1}\), \(x_{1}+x_{2}\geq m_{1}+m_{2}\),..., \(\sum_{j=1}^{k-1}x_{j}\geq\sum_{j=1}^{k-1}m_{j}\), \(x_{1}\geq x_{2}\geq...\geq x_{k}\geq 0\), we know the domain \(D\subseteq\Pi\) of our function \(f\) is a bounded connected closed subset in the first quadrant and \(P_{0}\notin D\) (unless all the integers \(m_{i}\) are equal). So \(f\) will take its minimal value \(d^{2}\) at the boundary \(\partial D\) of \(D\).
When \(x_{1}=m_{1}\), we have \(f(m_{1},x_{2},...,x_{k})=m_{1}^{2}+\sum_{j=2}^{k}x_{j}^{2}\). We denote
\[f_{2}(x_{2},...,x_{k})=\sum_{j=2}^{k}x_{j}^{2},\]
where
\[(x_{2},...,x_{k})\in D_{2} =\{(x_{2},...,x_{k})\in\mathbb{R}^{k-1}|x_{2}\geq m_{2},x_{2}+x_{ 3}\geq m_{2}+m_{3},...,\] \[\sum_{j=2}^{k-1}x_{j}\geq\sum_{j=2}^{k-1}m_{j},\sum_{j=2}^{k}x_{ j}=\sum_{j=2}^{k}m_{j},x_{2}\geq x_{3}\geq...\geq x_{k}\geq 0\}.\]
So \(f_{2}\) is a function of \(k-1\) variables. By our induction, \(f_{2}\) will take its minimal value if and only if \(x_{j}=m_{j}\) for all \(2\leq j\leq k\). So \(f\) will take its minimal value at the boundary \(x_{1}=m_{1}\) if and only if \(x_{j}=m_{j}\) for all \(1\leq j\leq k\).
When \(x_{1}+x_{2}=m_{1}+m_{2}\), we have
\[f(x_{1},x_{2},...,x_{k})=(x_{1}^{2}+x_{2}^{2})+\sum_{j=3}^{k}x_{j}^{2}:=g_{2}( x_{1},x_{2})+f_{3}(x_{3},...,x_{k}).\]
This is a sum of two functions which satisfy our induction. Then we know \(g_{2}\) will take its minimal value if and only if \(x_{1}=m_{1},x_{2}=m_{2}\) and \(f_{3}\) will take its minimal value if and only if \(x_{j}=m_{j}\) for all \(3\leq j\leq k\). So \(f\) will take its minimal value at the boundary \(x_{1}+x_{2}=m_{1}+m_{2}\) if and only if \(x_{j}=m_{j}\) for all \(1\leq j\leq k\).
We continue this process and finally when \(\sum_{j=1}^{k}x_{j}=m=\sum_{j=1}^{k}m_{j}\), we will have
\[f(x_{1},x_{2},...,x_{k}) =x_{k}^{2}+\sum_{j=1}^{k-1}x_{j}^{2}\] \[=(m-\sum_{j=1}^{k-1}x_{j})^{2}+\sum_{j=1}^{k-1}x_{j}^{2}\] \[:=f_{k}(x_{1},...,x_{k-1}),\]
where
\[(x_{1},...,x_{k-1})\in D_{k}=\{(x_{1},...,x_{k-1})\in\mathbb{R}^{k-1}|x_{1}\geq m _{1},x_{1}+x_{2}\geq m_{1}+m_{2},...,\]
\[\sum\limits_{j=1}^{k-1}x_{j}\geq\sum\limits_{j=1}^{k-1}m_{j},x_{1}\geq x_{2}\geq...\geq x_{k-1}\geq 0\}.\]
If there is no restriction, the function \(f_{k}\) will take the minimal value at the point \(Q_{0}=(\frac{m}{k},...,\frac{m}{k})\). But \(Q_{0}\notin D_{k}\). So the function \(f_{k}\) will take the minimal value at the boundary \(\partial D_{k}\) of \(D_{k}\). When \(x_{1}=m_{1}\),..., \(\sum_{j=1}^{k-2}x_{j}=\sum_{j=1}^{k-2}m_{j}\), the arguments are the same with the case of \(f\). When \(\sum_{j=1}^{k-1}x_{j}=\sum_{j=1}^{k-1}m_{j}\), we have
\[f(x_{1},x_{2},...,x_{k})=\Big(\sum\limits_{j=1}^{k-1}x_{j}^{2}\Big)+m_{k}^{2}:=g_{k-1}+m_{k}^{2},\]
where \(g_{k-1}\) is a function of \(k-1\) variables which satisfies our induction.
Thus we have proved our lemma.
**Lemma 3.2**.: _Suppose \(\{m_{1},m_{2},...,m_{k}\}\) is a sequence of integers with \(m_{s-1}-1\geq 2m_{s}\geq m_{s+1}\) and \(m_{i}\geq m_{i+1}\)\((i\neq s,s-1)\). We say \(\{m_{1},m_{2},...,m_{k}\}\) is sorted 'almost' descending. Denote \(h_{t}:=\sum\limits_{1\leq j\leq t}m_{j}\). We define a function_
\[f(x_{1},...,x_{k}):=\frac{1}{2}\sum\limits_{1\leq j\leq k,j\neq s}x_{j}(x_{j}- 1)+x_{s}^{2},\]
_where \((x_{1},x_{2},...,x_{k})\in D\) with_
\[D=\{(x_{1},...,x_{k})\in\mathbb{R}^{k}\;|x_{1}\geq m_{1},x_{1}+x_{2}\geq m_{1 }+m_{2},...,\sum\limits_{j=1}^{k-1}x_{j}\geq\sum\limits_{j=1}^{k-1}m_{j},\]
\[\sum\limits_{j=1}^{k}x_{j}=\sum\limits_{j=1}^{k}m_{j},x_{i}\geq x_{i+1}(i\neq s,i+1\neq s),\]
\[x_{s-1}-1\geq 2x_{s}\geq x_{s+1}\}.\]
_Then \(f\) will take the minimal value \(\frac{1}{2}\sum\limits_{1\leq j\leq k,j\neq s}m_{j}(m_{j}-1)+m_{s}^{2}\) if and only if \(x_{j}=m_{j}\) for all \(1\leq j\leq k\)._
Proof.: When \(f\) is a function of two variables, the conclusion is obvious. We assume the conclusion is true for all functions containing less than or equal to \(k-1\) variables.
Now we assume \(f\) is a function of \(k\) variables. We denote \(n=\sum\limits_{1\leq j\leq k}m_{j}\).
If \(k=s\),
\[f(x_{1},...,x_{k})= \frac{1}{2}\sum\limits_{1\leq j\leq k,j\neq s}x_{j}(x_{j}-1)+x_{s }^{2}\] \[= \frac{1}{2}\sum\limits_{j=1}^{k-1}x_{j}(x_{j}-1)+(n-\sum\limits_{ j=1}^{k-1}x_{j})^{2}\] \[:= g(x_{1},...,x_{k-1}).\]
Thus \(g_{i}:=\frac{\partial g}{\partial x_{i}}=x_{i}-\frac{1}{2}-2(n-\sum_{j=1}^{k-1}x_{j })=x_{i}-\frac{1}{2}-2x_{s}>0\). Then we have
\[f(x_{1},...,x_{k})= g(x_{1},...,x_{k-1})\] \[\geq g(x_{1},...,x_{k-2},h_{k-1}-\sum_{j=1}^{k-2}x_{j})\] \[= f(x_{1},...,x_{k-2},h_{k-1}-\sum_{j=1}^{k-2}x_{j},m_{k}).\]
The equality holds if and only if \(x_{k-1}=h_{k-1}-\sum_{j=1}^{k-2}x_{j}\), which is equivalent to \(x_{k}=m_{k}\).
Thus from Lemma 3.1, we have
\[f(x_{1},...,x_{k-2},h_{k-1}-\sum_{j=1}^{k-2}x_{j},m_{k})\geq f(m_{1},...,m_{k}).\]
The equality holds if and only if \(x_{i}=m_{i}\).
If \(k\neq s\),
\[f(x_{1},...,x_{k})= \frac{1}{2}\sum_{1\leq j\leq k,j\neq s}x_{j}(x_{j}-1)+x_{s}^{2}\] \[= \frac{1}{2}\sum_{j=1}^{k-1}x_{j}^{2}+\frac{1}{2}x_{s}^{2}+\frac{1}{2}x_{s}+\frac{1}{2}(n-\sum_{j=1}^{k-1}x_{j})^{2}-\frac{1}{2}n\] \[:= g(x_{1},...,x_{k-1}).\]
Then
\[g_{s}= 2x_{s}+\frac{1}{2}-(n-\sum_{1\leq j\leq k-1}x_{j})\] \[= 2x_{s}+\frac{1}{2}-x_{k}>0\] \[g_{i}= x_{i}-(n-\sum_{j=1}^{k-1}x_{j})\] \[= x_{i}-x_{k}\geq 0,(i\neq s).\]
Thus we have
\[f(x_{1},...,x_{k})= g(x_{1},...,x_{k-1})\] \[\geq g(x_{1},...,x_{k-2},h_{k-1}-\sum_{j=1}^{k-2}x_{j})\] \[= f(x_{1},...,x_{k-2},h_{k-1}-\sum_{j=1}^{k-2}x_{j},m_{k}).\]
The equality holds if and only if \(x_{k-1}=h_{k-1}-\sum_{j=1}^{k-2}x_{j}\), which is equivalent to \(x_{k}=m_{k}\). Then by induction \(f(x_{1},...,x_{k-2},h_{k-1}-\sum_{j=1}^{k-2}x_{j},m_{k})\geq f(m_{1},...,m_{k})\) and the equality holds if and only if \(x_{i}=m_{i}\).
Thus we have proved our lemma.
We call a vertical rectangle consisting of two adjacent boxes an _A-domino_ and a horizontal rectangle consisting of two adjacent boxes a _B-domino_.
**Definition 3.3** (Z-diagram).: We construct a Z-diagram of type \((a_{0};b_{1},...,b_{k-1})\) by the following steps:
1. Put \(b_{i}\)\(B\)-dominos in two columns and denote this diagram by \(A_{i}\) for \(1\leq i\leq k-1\) ;
2. Put \(a_{0}\)\(A\)-dominos in one column and denote this diagram by \(A_{k}\);
3. Rearrange these diagrams such that they are descending by the heights of columns;
4. Put these diagrams together to construct a domino type partition.
The above diagram is called a _Z-diagram of type_\((a_{0};b_{1},...,b_{k-1})\).
It is clear that a Z-diagram gives us a domino type partition.
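Since a Z-diagram is determined by its multiset of column heights (one column of height \(2a_{0}\) together with two columns of height \(b_{i}\) for each \(i\)), its shape is easy to compute. The following sketch is a minimal illustration; dropping the \(A\)-column when \(a_{0}=0\) is our assumption.

```python
def z_columns(a0, bs):
    """Column heights of the Z-diagram of type (a0; b_1, ..., b_{k-1}), in descending order."""
    cols = ([2 * a0] if a0 > 0 else []) + [b for b in bs for _ in range(2)]
    return sorted(cols, reverse=True)

def conjugate(cols):
    """Row shape (partition) of the diagram with the given column heights."""
    return [sum(1 for c in cols if c >= i) for i in range(1, max(cols) + 1)]

# The Z-diagram of type (2; 1, 3, 5) used in Example 6.7 below:
assert z_columns(2, [1, 3, 5]) == [5, 5, 4, 3, 3, 1, 1]
assert conjugate(z_columns(2, [1, 3, 5])) == [7, 5, 5, 3, 2]
```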
**Example 3.4**.: One may draw the three Z-diagrams of type \((4;1,3)\), \((3;3,7,1)\) and \((0;2,7)\) respectively; the pictures are omitted here.
In a Z-diagram, it is obvious that the subdiagram consisting of \(A\)-dominos always lies in an odd column.
**Lemma 3.5**.: _Suppose \(P\) is a Z-diagram of type \((a_{0};b_{1},...,b_{k-1})\). Then we have_
\[F_{b}(P) =a_{0}^{2}+\frac{1}{2}\sum_{i}b_{i}(b_{i}-1),\] \[F_{d}(P) =a_{0}^{2}-a_{0}+\frac{1}{2}\sum_{i}b_{i}(b_{i}-1).\]
Proof.: Recall in Definition 2.4, we have
\[F_{b}(x): =\sum_{i\geq 1}(i-1)p_{i}^{\mathrm{odd}}=\sum_{2\nmid i}(q_{i}^{ \mathrm{odd}})^{2}+\sum_{2\mid i}q_{i}^{\mathrm{odd}}(q_{i}^{\mathrm{odd}}-1),\] \[F_{d}(x): =\sum_{i\geq 1}(i-1)p_{i}^{\mathrm{ev}}=\sum_{2\nmid i}q_{i}^{ \mathrm{ev}}(q_{i}^{\mathrm{ev}}-1)+\sum_{2\mid i}(q_{i}^{\mathrm{ev}})^{2},\]
where \(p=p(x)=(p_{1},p_{2},\cdots,p_{n})\) is the shape of the Young diagram \(P(x)\) obtained by using R-S algorithm. Thus \(F_{b}(x)\) and \(F_{d}(x)\) only depend on the shape of the Young diagram \(P(x)\). So we can similarly define \(F_{b}(T)\) for a Young diagram \(T\).
Now \(P\) is a Z-diagram of type \((a_{0};b_{1},...,b_{k-1})\). We can regard it as a Young diagram of shape \(p=(p_{1},..,p_{k})\) where \(p^{t}=(a_{0}^{\prime},b_{1}^{\prime},b_{1}^{\prime},b_{2}^{\prime},b_{2}^{ \prime},..,b_{k-1}^{\prime},b_{k-1}^{\prime})\) and
\((a_{0}^{\prime},b_{1}^{\prime},b_{2}^{\prime},..,b_{k-1}^{\prime})\) is the arrangement of the sequence \((a_{0},b_{1},b_{2},..,b_{k-1})\) in descending order. We use \(F_{b}(P|_{Q})\) to denote the value of the subdiagram \(Q\) in \(F_{b}(P)\). Similarly denote \(F_{d}(P|_{Q})\).
Note that the subdiagram \(A_{k}\) has \(a_{0}\) odd boxes, \(a_{0}\) even boxes, and always lies in an odd column. Thus, \(F_{b}(P|_{A_{k}})=a_{0}^{2}\) and \(F_{d}(P|_{A_{k}})=a_{0}^{2}-a_{0}\).
The subdiagram \(A_{i}\)\((i\neq k)\) has \(b_{i}\) odd boxes and \(b_{i}\) even boxes, and \(A_{i}\) always lies in an odd column and an even column. Thus we have \(F_{b}(P|_{A_{i}})=\frac{1}{2}b_{i}(b_{i}-1)=F_{d}(P|_{A_{i}})\).
Summing up these contributions, we obtain the result.
**Corollary 3.6**.: _Suppose \(p=(p_{1},...,p_{N})\) is the shape of the Z-diagram \(P\) of type \((n_{k};n_{1},...,n_{k-1})\) so that_
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{s-1},m_{s-1},2m_{s},m_{s+1},m_{s+1},..., m_{k},m_{k}).\]
_Here \((m_{1},...,m_{k})\) is the arrangement of the sequence \((n_{1},n_{2},..,n_{k})\) in descending order and \(m_{s}=n_{k}\). Then we have_
\[p^{\rm odd}=(\left\lfloor\frac{m_{1}}{2}\right\rfloor,\left\lceil\frac{m_{1}}{2}\right\rceil,...,\left\lfloor\frac{m_{s-1}}{2}\right\rfloor,\left\lceil\frac{m_{s-1}}{2}\right\rceil,m_{s},\left\lceil\frac{m_{s+1}}{2}\right\rceil,\left\lfloor\frac{m_{s+1}}{2}\right\rfloor,...,\left\lceil\frac{m_{k}}{2}\right\rceil,\left\lfloor\frac{m_{k}}{2}\right\rfloor)\]
_and_
\[p^{\rm ev}=(\left\lceil\frac{m_{1}}{2}\right\rceil,\left\lfloor\frac{m_{1}}{2}\right\rfloor,...,\left\lceil\frac{m_{s-1}}{2}\right\rceil,\left\lfloor\frac{m_{s-1}}{2}\right\rfloor,m_{s},\left\lfloor\frac{m_{s+1}}{2}\right\rfloor,\left\lceil\frac{m_{s+1}}{2}\right\rceil,...,\left\lfloor\frac{m_{k}}{2}\right\rfloor,\left\lceil\frac{m_{k}}{2}\right\rceil).\]
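Lemma 3.5 and Corollary 3.6 are easy to confirm numerically. The sketch below builds the column heights of a Z-diagram directly and counts odd and even boxes with the convention that the box in row \(i\), column \(j\) is odd exactly when \(i+j\) is odd (an assumption consistent with the counts in the proof of Lemma 3.5); the three diagrams of Example 3.4 are used as test cases.

```python
def check_lemma_3_5(a0, bs):
    """Numerical check of Lemma 3.5 for the Z-diagram of type (a0; b_1, ..., b_{k-1})."""
    cols = sorted(([2 * a0] if a0 > 0 else []) + [b for b in bs for _ in range(2)], reverse=True)
    fb = fd = 0
    for j, h in enumerate(cols, start=1):
        odd = h // 2 if j % 2 == 1 else (h + 1) // 2   # box (i, j) odd iff i + j odd (assumed)
        ev = h - odd
        fb += odd * odd if j % 2 == 1 else odd * (odd - 1)
        fd += ev * (ev - 1) if j % 2 == 1 else ev * ev
    half = sum(b * (b - 1) for b in bs) // 2
    return (fb, fd) == (a0 ** 2 + half, a0 ** 2 - a0 + half)

# The three Z-diagrams of Example 3.4:
assert check_lemma_3_5(4, [1, 3]) and check_lemma_3_5(3, [3, 7, 1]) and check_lemma_3_5(0, [2, 7])
```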
**Lemma 3.7**.: _Suppose \(P_{1}\) is a Z-diagram of type \((a_{0};b_{1},...,b_{k-1})\) and \(P_{2}\) is a Z-diagram of type \((a_{0};c_{1},...,c_{k-1})\). Then we have \(P_{1}=P_{2}\) if and only if there exists some \(\sigma\in S_{k-1}\) such that \(c_{i}=\sigma(b_{i})\) for \(1\leq i\leq k-1\)._
Proof.: The case for \(k=2\) is trivial. From the construction of Z-diagram, \((c_{1},...,c_{k-1})\) is an arrangement of \((b_{1},...,b_{k-1})\), thus it is obvious to prove the result.
Suppose \(\Phi=B_{n}/C_{n}/D_{n}\) and \(\lambda\) is an integral weight. Then from [BMW], we know that the shape \(p=p(\lambda^{-})\) of the Young diagram \(P(\lambda^{-})\) gives a partition of domino type. By using H-algorithm, we can get a special partition from this Young diagram \(P(\lambda^{-})\).
**Lemma 3.8**.: _Suppose \(A\) is a Young diagram of domino type with two columns and \(c_{i}(A)\) (or \(c_{i}\)) is the number of boxes in the \(i\)-th column of \(A\). Then \(F_{b}(A)\) is a decreasing function of \(c_{2}(A)\) when \(c_{1}(A)+c_{2}(A)=2m\) is a fixed number. It will take the minimal value \(\frac{1}{2}m(m-1)\) if and only if \(c_{2}(A)=c_{1}(A)\) or \(c_{2}(A)=c_{1}(A)-2\)._
Proof.: Recall that \(F_{b}(x)=\sum_{2\nmid i}(q_{i}^{\rm odd})^{2}+\sum_{2\mid i}q_{i}^{\rm odd}(q _{i}^{\rm odd}-1)\). Since \(A\) is a Young diagram with two columns, we may write
\[F_{b}(A)=(c_{1}^{\rm odd})^{2}+c_{2}^{\rm odd}(c_{2}^{\rm odd}-1):=F(c_{1},c_{2})=F(2m-c_{2},c_{2})=f(c_{2}).\]
When we move one box from the first column to the second column of \(A\), we will get a new diagram \(\bar{A}\). Thus \(\bar{c}_{1}=c_{1}-1\) and \(\bar{c}_{2}=c_{2}+1\). Since \(A\) is a Young diagram of domino type, there are two possibilities as follows:
* \(\bar{c}_{1}^{\rm odd}=c_{1}^{\rm odd}\), \(\bar{c}_{2}^{\rm odd}=c_{2}^{\rm odd}\);
* \(\bar{c}_{1}^{\rm odd}=c_{1}^{\rm odd}-1\), \(\bar{c}_{2}^{\rm odd}=c_{2}^{\rm odd}+1\).
In the second case, we have \(c_{1}^{\rm odd}\geq c_{2}^{\rm odd}+1\). Thus we have
\[f(\bar{c}_{2}) =(\bar{c}_{1}^{\rm odd})^{2}+\bar{c}_{2}^{\rm odd}(\bar{c}_{2}^{ \rm odd}-1)\] \[=(c_{1}^{\rm odd}-1)^{2}+(c_{2}^{\rm odd}+1)c_{2}^{\rm odd}\]
\[=(c_{1}^{\text{odd}})^{2}+c_{2}^{\text{odd}}(c_{2}^{\text{odd}}-1)+( -2c_{1}^{\text{odd}}+1+2c_{2}^{\text{odd}})\] \[\leq f(c_{2})+(-2c_{2}^{\text{odd}}-2+1+2c_{2}^{\text{odd}})\] \[=f(c_{2})-1\] \[<f(c_{2}).\]
In the first case, \(f(\bar{c}_{2})=f(c_{2})\). Hence \(f\) is non-increasing in \(c_{2}\), and the minimal value is attained at the largest value of \(c_{2}\) allowed for a domino type diagram, namely \(c_{2}=c_{1}\) or \(c_{2}=c_{1}-2\). This finishes the proof.
Suppose \(\mathfrak{g}=\mathfrak{sl}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,\)\(n_{k})\). Suppose \(L(\lambda)\in\mathcal{O}^{\mathfrak{p}}\) is an integral highest weight module. Write \(c_{i}\) for the number of entries in the \(i\)-th column of \(P(\lambda)\). We say \(\lambda\) is a _maximal standard weight of type \((n_{1},n_{2},...,n_{k})\)_ if the sequence \((c_{1},..,c_{k})\) is the arrangement of the sequence \((n_{1},n_{2},...,n_{k})\) in descending order. When \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\) or \(\mathfrak{sp}(n,\mathbb{C})\), we can consider \(\lambda^{-}\) as a weight corresponding to a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k-1},2n_{k},n_{k-1},...,n_{1})\) (if \(n_{k}>0\)) or type \((n_{1},n_{2},...,n_{k-1},0,0,n_{k-1},...,n_{1})\) (if \(n_{k}=0\)) in \(\mathfrak{g}=\mathfrak{sl}(2n,\mathbb{C})\). Similarly we say \(\lambda\) is a _maximal standard weight of type \((n_{1},n_{2},...,n_{k})\)_ if \(\lambda^{-}\) is a maximal standard weight of type \((n_{1},n_{2},...,n_{k-1},2n_{k},n_{k-1},...,n_{1})\) (if \(n_{k}>0\)) or type \((n_{1},n_{2},...,n_{k-1},0,0,\)\(n_{k-1},...,n_{1})\) (if \(n_{k}=0\)).
**Lemma 3.9** (Uniqueness of Types \(B\) and \(C\)).: _Suppose \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\) or \(\mathfrak{sp}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). Suppose \(L(\lambda)\in\mathcal{O}^{\mathfrak{p}}\) is an integral highest weight module. The number \(F_{b}(\lambda^{-})\) will take the minimal value \(\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2}\) if and only if \(\lambda\) is a maximal standard weight of type \((n_{1},n_{2},...,n_{k})\). In other words, \(P(\lambda^{-})^{\text{odd}}\) is unique when \(F_{b}(\lambda^{-})\) takes the minimal value._
Proof.: Since \(L(\lambda)\in\mathcal{O}^{\mathfrak{p}}\), in \(\lambda^{-}\), for each \(n_{i}\) we have two decreasing subsequences
\[(\lambda_{n_{i-1}+1},\lambda_{n_{i-1}+2},...,\lambda_{n_{i-1}+n_{i}})\]
which is denoted by \(str_{i}^{+}\), and
\[(-\lambda_{n_{i-1}+n_{i}},-\lambda_{n_{i-1}+n_{i}-1},...,-\lambda_{n_{i-1}+1})\]
which is denoted by \(str_{i}^{-}\).
We sort \((n_{1},n_{2},...,n_{k})\) 'almost' descending to get a sequence \((m_{1},m_{2},...,m_{k})\) where \(m_{s}=n_{k}\), \(m_{s-1}-1\geq 2m_{s}\geq m_{s+1}\) and \(m_{i}\geq m_{i+1}\) (if \(i\neq s,s-1\)). Obviously, \(m_{i}\) is equal to some \(n_{j}\). We call it \(n_{j_{i}}\). And it is corresponding to two decreasing subsequences \(str_{j_{i}}^{+}\) and \(str_{j_{i}}^{-}\). If \(n_{j_{s}}=n_{k}>0\), the subsequence \((str_{j_{s}}^{+},str_{j_{s}}^{-})\) will be a decreasing subsequence.
Now we construct \(P(\lambda^{-})\), and we only focus on odd boxes.
**Step 1.** We construct a Young tableau \(Y_{1}\) corresponding to \(m_{1}\), which is corresponding to \(n_{j_{1}}\). Actually we choose the subsequence \((str_{j_{1}}^{+},str_{j_{1}}^{-})\) to construct the Young tableau \(Y_{1}\). Then we find that there are \(m_{1}\) odd boxes in \(Y_{1}\). If \(s=1\), the Young tableau \(Y_{1}\) contains only one column. If \(s\neq 1\), the Young tableau \(Y_{1}\) has at most two columns. There are \(m_{1}\) odd boxes in \(Y_{1}\). We denote the columns corresponding to \(Y_{1}\) by Area \(S_{1}\).
**Step 2.** We continue to construct the Young tableau with boxes corresponding to \(m_{2}\). We insert \(str_{j_{2}}^{+}\) and \(str_{j_{2}}^{-}\) into the subsequence \((str_{j_{1}}^{+},str_{j_{1}}^{-})\) according to their original order in \(\lambda^{-}\), then the new subsequence will generate a new Young
tableau \(Y_{2}\). And we find that there are \(m_{1}+m_{2}\) odd boxes in \(Y_{2}\). Comparing to \(Y_{1}\), we get some new columns in \(Y_{2}\), and we denote them by Area \(S_{2}\). With some new elements coming in, there may be some more odd boxes in the new Area \(S_{1}\).
**Step 3.** Similarly, we continue these steps up to \(m_{k}\). Then we have constructed \(P(\lambda^{-})\).
We use \(x_{i}\) to denote the number of odd boxes in the Area \(S_{i}\) of \(P(\lambda^{-})\). Note that the Area \(S_{s}\) contains only one column when it is not empty, and the other Areas contain two columns. By the steps of constructing \(P(\lambda^{-})\), we find that
\[x_{1}\geq m_{1},x_{1}+x_{2}\geq m_{1}+m_{2},...,\sum_{j=1}^{k-1}x_{j}\geq\sum_{ j=1}^{k-1}m_{j},\]
\[\sum_{j=1}^{k}x_{j}=\sum_{j=1}^{k}m_{j},x_{i}\geq x_{i+1}(i\neq s,s-1),x_{s-1} -1\geq 2x_{s}\geq x_{s+1}.\]
By Lemma 3.8, the Area \(S_{i}\) (\(i\neq s\)) will contain two columns if we want the value of \(F_{b}(\lambda^{-})\) to be as small as possible.
Then by Lemma 3.2 and Lemma 3.8, \(F_{b}(\lambda^{-})\) will take the minimal value
\[\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2}\]
if and only if \(x_{i}=m_{i}\) for all \(1\leq i\leq k\). Thus, the numbers of odd boxes in different areas are fixed.
This finishes the proof.
In Lemma 3.9, we constructed a Young diagram \(P(\lambda^{-})\) whose odd Young tableau \(P(\lambda^{-})^{\rm odd}\) is unique when \(F_{b}(\lambda^{-})\) takes the minimal value. In fact we can construct a Z-diagram which has the same odd boxes with \(P(\lambda^{-})^{\rm odd}\).
**Theorem 3.10**.: _Suppose \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\) or \(\mathfrak{sp}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). Suppose \(L(\lambda)\in\mathcal{O}^{\mathfrak{p}}\) is an integral highest weight module. The Z-diagram \(P\) of type \((n_{k};n_{1},n_{2},...,n_{k-1})\) has the same odd boxes with the odd Young tableau \(P(\lambda^{-})^{\rm odd}\) when \(F_{b}(\lambda^{-})\) takes the minimal value._
Proof.: From Lemma 3.5, we know \(F_{b}(P)=F_{b}(\lambda^{-})\). Thus the result follows from Lemma 3.9 since \(P(\lambda^{-})^{\rm odd}\) is unique.
Now we have proved the main Theorem 1.4 for types \(B\) and \(C\).
**Example 3.11**.: Let \(\mathfrak{g}=\mathfrak{so}(9,\mathbb{C})\). Suppose \(\Delta\setminus I=\{\alpha_{2},\alpha_{3}\}\). Then the corresponding parabolic subalgebra \(\mathfrak{p}_{I}\) is a standard parabolic subalgebra of type \((2,1,1)\). By Theorem 1.4, a simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(P(\lambda^{-})\) has the same odd boxes with the following Z-diagram \(P\) of type \((1;2,1)\):
\[\begin{array}{|c|c|c|}\hline B&A&B\\ \hline B&A&\end{array}.\]
The shape of \(P(\lambda^{-})^{\rm odd}\) is \(p(\lambda^{-})^{\rm odd}=p^{\rm odd}=(2,2)\), where \(p\) is the shape of the Z-diagram \(P\).
Now suppose \(\lambda=(-5,-6,-4,2)\in\mathfrak{h}^{*}\). Then \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular. We can check that \(P(\lambda^{-})\) has the same odd boxes with the Z-diagram \(P\) of type \((1;2,1)\).
Actually, since \(\lambda^{-}=(-5,-6,-4,2,-2,4,6,5)\) and from the R-S algorithm, we have
\[\begin{array}{|c|c|c|c|c|}\hline-6&-4&-2&4&5\\ \hline-5&2&6&&\\ \hline\end{array}=P(\lambda^{-}).\]
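The whole of Example 3.11 can be checked by a few lines of code: run the R-S insertion on \(\lambda^{-}\), read off the column heights, and evaluate \(F_{b}\). As before, the odd-box convention (box \((i,j)\) is odd iff \(i+j\) is odd) is an assumption consistent with the counts above.

```python
import bisect

def rs_rows(seq):
    rows = []
    for x in seq:
        for row in rows:
            j = bisect.bisect_right(row, x)
            if j == len(row):
                row.append(x)
                break
            row[j], x = x, row[j]
        else:
            rows.append([x])
    return rows

def f_b(rows):
    # box (i, j) is counted as odd iff i + j is odd (assumed convention)
    cols = [sum(1 for r in rows if len(r) >= j) for j in range(1, len(rows[0]) + 1)]
    total = 0
    for j, h in enumerate(cols, start=1):
        odd = h // 2 if j % 2 == 1 else (h + 1) // 2
        total += odd * odd if j % 2 == 1 else odd * (odd - 1)
    return total

lam = [-5, -6, -4, 2]
lam_minus = lam + [-x for x in reversed(lam)]
rows = rs_rows(lam_minus)
assert [len(r) for r in rows] == [5, 3]   # shape of P(lambda^-), as displayed above
assert f_b(rows) == 2                     # equals |Phi_I^+| of the type (2,1,1) parabolic, so L(lambda) is secular
```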
## 4. Proof of Theorem 1.5
In this section, let \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\). Suppose \(\mathfrak{p}\) is a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\)\((n_{k}\neq 1)\). Based on Proposition 2.5, the maximal Gelfand-Kirillov dimension of simple modules in \(\mathcal{O}^{\mathfrak{p}}\) will be
\[d_{m}(\mathfrak{p})=n^{2}-n-(\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2 }-n_{k})=n^{2}-n-F_{d}(\lambda^{-}).\]
So a simple module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if
\[F_{d}(\lambda^{-})=\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2}-n_{k},\]
which is
\[\sum_{i\geq 1}(i-1)p_{i}^{\rm ev}=\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k }^{2}-n_{k}.\]
**Lemma 4.1**.: _Suppose \(\{m_{1},m_{2},...,m_{k}\}\) is a sequence of integers with \(m_{s-1}\geq 2m_{s}\geq m_{s+1}+1\) and \(m_{i}\geq m_{i+1}(i\neq s,s-1)\). We say \(\{m_{1},m_{2},...,m_{k}\}\) is sorted 'almost' descending. Denote \(h_{t}:=\sum\limits_{1\leq j\leq t}m_{j}\). We define a function_
\[f(x_{1},...,x_{k}):=\frac{1}{2}\sum_{1\leq j\leq k,j\neq s}x_{j}(x_{j}-1)+x_{s}^{2}-x_{s},\]
_where \((x_{1},x_{2},...,x_{k})\in D\) with_
\[D=\{(x_{1},...,x_{k})\in\mathbb{R}^{k}\ |x_{1}\geq m_{1},x_{1}+x_{2} \geq m_{1}+m_{2},...,\sum_{j=1}^{k-1}x_{j}\geq\sum_{j=1}^{k-1}m_{j},\] \[\sum_{j=1}^{k}x_{j}=\sum_{j=1}^{k}m_{j},x_{i}\geq x_{i+1}(i\neq s, s-1),\] \[x_{s-1}\geq 2x_{s}\geq x_{s+1}+1\}.\]
_Then \(f\) will take the minimal value \(\frac{1}{2}\sum\limits_{1\leq j\leq k,j\neq s}m_{j}(m_{j}-1)+m_{s}^{2}-m_{s}\) if and only if \(x_{j}=m_{j}\) for all \(1\leq j\leq k\)._
Proof.: Similar to the proof in Lemma 3.2.
Since \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\), we can consider \(\lambda^{-}\) as a weight corresponding to a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k-1},2n_{k},n_{k-1},...,n_{1})\) (if \(n_{k}>0\) and \(\lambda_{n}>0\)), or type \((n_{1},n_{2},...,n_{k-1},2n_{k}-1,1,n_{k-1},...,n_{1})\) (if \(n_{k}>0\) and \(\lambda_{n}\leq 0\)), or type \((n_{1},n_{2},...,n_{k-1},0,0,n_{k-1},...,n_{1})\) (if \(n_{k}=0\)) in \(\mathfrak{g}=\mathfrak{sl}(2n,\mathbb{C})\). Similarly we say \(\lambda\) is a _maximal standard weight of type \((n_{1},n_{2},...,n_{k})\)_ if \(\lambda^{-}\) is a maximal standard weight of type \((n_{1},n_{2},...,n_{k-1},2n_{k},n_{k-1},...,n_{1})\) (if \(n_{k}>0\) and \(\lambda_{n}>0\)), or type \((n_{1},n_{2},...,n_{k-1},2n_{k}-1,1,n_{k-1},...,n_{1})\) (if \(n_{k}>0\) and \(\lambda_{n}\leq 0\)), or type \((n_{1},n_{2},...,n_{k-1},0,0,n_{k-1},...,n_{1})\) (if \(n_{k}=0\)) in \(\mathfrak{g}=\mathfrak{sl}(2n,\mathbb{C})\).
**Lemma 4.2** (Uniqueness of Type \(D\)).: _Suppose \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\) (\(n_{k}\neq 1\)). Suppose \(L(\lambda)\in\mathcal{O}^{\mathfrak{p}}\) is an integral highest weight module. The number \(F_{d}(\lambda^{-})\) will take the minimal value \(\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2}-n_{k}\) if and only if \(\lambda\) is a maximal standard weight of type \((n_{1},n_{2},...,n_{k})\). In other words, \(P(\lambda^{-})^{\rm ev}\) is unique when \(F_{d}(\lambda^{-})\) takes the minimal value._
Proof.: The proof of this case is similar to that in types \(B\) and \(C\). However, we have to focus on even boxes other than odd boxes.
We sort \((n_{1},n_{2},...,n_{k})\) 'almost' descending to get a sequence \((m_{1},m_{2},...,m_{k})\) where \(m_{s}=n_{k}\), \(m_{s-1}\geq 2m_{s}\geq m_{s+1}+1\) and \(m_{i}\geq m_{i+1}(i\neq s,s-1)\). Obviously, \(m_{i}\) is equal to some \(n_{j}\). We call it \(n_{j_{i}}\). And it is corresponding to two decreasing subsequences \(str_{j_{i}}^{+}\) and \(str_{j_{i}}^{-}\).
Now we construct \(P(\lambda^{-})\), and we only focus on even boxes.
**Step 1.** We construct a Young tableau \(Y_{1}\) corresponding to \(m_{1}\), which is corresponding to \(n_{j_{1}}\). Actually we choose the subsequence \((str_{j_{1}}^{+},str_{j_{1}}^{-})\) to construct the Young tableau \(Y_{1}\). Then we find that there are \(m_{1}\) even boxes in \(Y_{1}\). If \(s=1\), the Young tableau \(Y_{1}\) contains only one column or two columns with \(c_{2}(Y_{1})=1\). We denote the first column in \(Y_{1}\) by Area \(S_{1}\). If \(s\neq 1\), the Young tableau \(Y_{1}\) has at most two columns. There are \(m_{1}\) even boxes in \(Y_{1}\). We denote the columns corresponding to \(Y_{1}\) by Area \(S_{1}\).
**Step 2.** We continue to construct the Young tableau with boxes corresponding to \(m_{2}\). We insert \(str_{j_{2}}^{+}\) and \(str_{j_{2}}^{-}\) into the subsequence \((str_{j_{1}}^{+},str_{j_{1}}^{-})\) according to their original order in \(\lambda^{-}\), then the new subsequence will generate a new Young tableau \(Y_{2}\). And we find that there are \(m_{1}+m_{2}\) even boxes in \(Y_{2}\). Comparing to Area \(S_{1}\), we get some other columns in \(Y_{2}\) (it may happen that \(c_{4}(Y_{2})=1\)). We use Area \(S_{2}\) to denote the new columns if \(c_{4}(Y_{2})=0\) or denote the first two new columns if \(c_{4}(Y_{2})>0\). With some new elements coming in, there may be some more even boxes in the new Area \(S_{1}\).
**Step 3.** Similarly, we continue these steps up to \(m_{k}\). Then we have constructed \(P(\lambda^{-})\).
We use \(x_{i}\) to denote the number of even boxes in the Area \(S_{i}\) of \(P(\lambda^{-})\). Note that the Area \(S_{s}\) contains only one column containing even boxes when it is not empty, and the other Areas contain two columns. By the steps of constructing \(P(\lambda^{-})\), we find that
\[x_{1}\geq m_{1},x_{1}+x_{2}\geq m_{1}+m_{2},...,\sum_{j=1}^{k-1}x_{j}\geq\sum_{ j=1}^{k-1}m_{j},\]
\[\sum_{j=1}^{k}x_{j}=\sum_{j=1}^{k}m_{j},x_{i}\geq x_{i+1}(i\neq s,s-1),x_{s-1}\geq 2 x_{s}\geq x_{s+1}+1.\]
By Lemma 3.8, the Area \(S_{i}\)\((i\neq s)\) will contain two columns if we want the value of \(F_{d}(\lambda^{-})\) to be as small as possible.
Then by Lemma 4.1 and Lemma 3.8, \(F_{d}(\lambda^{-})\) will take the minimal value
\[\frac{1}{2}\sum_{j=1}^{k-1}n_{j}(n_{j}-1)+n_{k}^{2}-n_{k}\]
if and only if \(x_{i}=m_{i}\) for all \(1\leq i\leq k\). Thus, the numbers of even boxes in different areas are fixed.
This finishes the proof.
In Lemma 4.2, we constructed a Young diagram \(P(\lambda^{-})\) whose even Young tableau \(P(\lambda^{-})^{\mathrm{ev}}\) is unique when \(F_{d}(\lambda^{-})\) takes the minimal value. In fact we can construct a Z-diagram which has the same even boxes with \(P(\lambda^{-})^{\mathrm{ev}}\).
**Theorem 4.3**.: _Suppose \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\) (\(n_{k}\neq 1\)). Suppose \(L(\lambda)\in\mathcal{O}^{\mathfrak{p}}\) is an integral highest weight module. The Z-diagram \(P\) of type \((n_{k};n_{1},n_{2},...,n_{k-1})\) has the same even boxes with the even Young tableau \(P(\lambda^{-})^{\mathrm{ev}}\) when \(F_{d}(\lambda^{-})\) takes the minimal value._
Proof.: From Lemma 3.5, we know \(F_{d}(P)=F_{d}(\lambda^{-})\). Thus the result follows from Lemma 4.2 since \(P(\lambda^{-})^{\mathrm{ev}}\) is unique.
Recall that the case of \(n_{k}=1\) can be reduced to the case of \(n_{k}=0\), so it is unnecessary to talk about it here. Thus we have proved the main Theorem 1.5 for type \(D\).
**Example 4.4**.: Let \(\mathfrak{g}=\mathfrak{so}(10,\mathbb{C})\). Suppose \(\Delta\setminus I=\{\alpha_{1},\alpha_{3},\alpha_{5}\}\). Then the corresponding parabolic subalgebra \(\mathfrak{p}_{I}\) is a standard parabolic subalgebra of type \((1,2,2,0)\). By Theorem 1.5, a simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(P(\lambda^{-})\) has the same even boxes with the following Z-diagram \(P\) of type \((0;1,2,2)\):
\[\begin{array}{|c|c|c|}\hline B&B&B\\ \hline B&B\\ \hline\end{array}.\]
The shape of \(P(\lambda^{-})^{\mathrm{ev}}\) is \(p(\lambda^{-})^{\mathrm{ev}}=p^{\mathrm{ev}}=(3,2)\), where \(p\) is the shape of the Z-diagram \(P\).
Now suppose \(\lambda=(-6,-4,-5,-2,-3)\in\mathfrak{h}^{*}\). Then \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular. We can check that \(P(\lambda^{-})\) has the same even boxes with the Z-diagram \(P\) of type \((0;1,2,2)\). Actually, since \(\lambda^{-}=(-6,-4,-5,-2,-3,3,2,5,4,6)\) and from the R-S algorithm, we have
\[\begin{array}{|c|c|c|c|c|c|}\hline-6&-5&-3&2&4&6\\ \hline-4&-2&3&5&&\\ \hline\end{array}=P(\lambda^{-}).\]
**Example 4.5**.: Let \(\mathfrak{g}=\mathfrak{so}(10,\mathbb{C})\). Suppose \(\Delta\setminus I=\{\alpha_{1},\alpha_{4}\}\). Then the corresponding parabolic subalgebra \(\mathfrak{p}_{I}\) is a standard parabolic subalgebra of type \((1,3,1)\). We can regard it as a standard parabolic subalgebra of type \((1,4,0)\). By Theorem 1.5, a simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(P(\lambda^{-})\) has the same even boxes with the following Z-diagram \(P\) of type \((0;1,4)\):
\[\begin{array}{|c|c|c|c|}\hline B&B\\ \hline B&\\ \hline B&\\ \hline\end{array}.\]
The shape of \(P(\lambda^{-})^{\mathrm{ev}}\) is \(p(\lambda^{-})^{\mathrm{ev}}=p^{\mathrm{ev}}=(2,1,1,1)\), where \(p\) is the shape of the Z-diagram \(P\).
Now suppose \(\lambda=(-9,-5,-6,-7,8)\in\mathfrak{h}^{*}\). Then \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular. We can check that \(P(\lambda^{-})\) has the same even boxes with the Z-diagram \(P\) of type \((0;1,4)\). Actually, since \(\lambda^{-}=(-9,-5,-6,-7,8,-8,7,6,5,9)\) and from the R-S algorithm, we have
\[\begin{array}{|c|c|c|c|}\hline-9&-8&5&9\\ \hline-7&6&&\\ \hline-6&7&&\\ \hline-5&8&&\\ \hline\end{array}=P(\lambda^{-}).\]
## 5. The non-integral case
Let \(\mathfrak{g}=\mathfrak{sl}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\) with \(\mathfrak{q}_{\mathfrak{p}}\) being the corresponding decreasing parabolic subalgebra of type \((m_{1},m_{2},...,m_{k})\). When the highest weight \(\lambda\) of \(L(\lambda)\in\mathcal{O}^{\mathfrak{p}}\) is non-integral, following Bai-Xie [1] we can associate several Young tableaux to \(\lambda\). For any \(\lambda\in\mathfrak{h}^{*}\), we write \(\lambda=(\lambda_{1},\cdots,\lambda_{n})\). Let \(P(\lambda)\) be the set of Young tableaux constructed as follows. Let \(\lambda_{Y}:\lambda_{i_{1}},\lambda_{i_{2}},\ldots,\lambda_{i_{r}}\) be a maximal subsequence of \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) such that the \(\lambda_{i_{k}}\), \(1\leq k\leq r\), are congruent to each other modulo \(\mathbb{Z}\). Then let \(P(\lambda_{Y})\) be the Young tableau obtained from \(\lambda_{Y}\) by using the R-S algorithm; it is one of the Young tableaux in \(P(\lambda)\).
Now we put these Young tableaux together and make them into one bigger Young tableau \(\bar{P}(\lambda)\) by inserting the columns of the other Young tableaux into the first Young tableau so that the column lengths \(c_{i}(\bar{P}(\lambda))\) are decreasing in \(i\). In other words, the Young tableau \(\bar{P}(\lambda)=\underset{P(\lambda_{Y})\in P(\lambda)}{\sqcup}\ P(\lambda_{Y}).\) Here \(P_{1}\overset{c}{\sqcup}P_{2}\) denotes the Young tableau whose multiset of nonzero column lengths equals the union of the multisets of nonzero column lengths of \(P_{1}\) and \(P_{2}\).
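For weights with rational entries, this construction of \(\bar{P}(\lambda)\) can be sketched as follows: group the entries by their fractional part (two entries are congruent modulo \(\mathbb{Z}\) exactly when these agree), apply the R-S algorithm to each group in its original order, and merge the multisets of nonzero column lengths. The example weight in the usage line is a made-up illustration, not taken from the text.

```python
import bisect
from fractions import Fraction

def rs_shape(seq):
    """Row shape of the Young tableau obtained from `seq` by R-S row insertion."""
    rows = []
    for x in seq:
        for row in rows:
            j = bisect.bisect_right(row, x)
            if j == len(row):
                row.append(x)
                break
            row[j], x = x, row[j]
        else:
            rows.append([x])
    return [len(r) for r in rows]

def bar_P_columns(lam):
    """Column lengths of bar{P}(lambda) for a weight with rational entries."""
    lam = [Fraction(x) for x in lam]
    classes = {}
    for x in lam:
        classes.setdefault(x % 1, []).append(x)   # same fractional part <=> congruent mod Z
    cols = []
    for seq in classes.values():
        shape = rs_shape(seq)
        cols += [sum(1 for r in shape if r >= j) for j in range(1, shape[0] + 1)]
    return sorted(cols, reverse=True)

# A made-up weight with two congruence classes; each class contributes a single column of length 2.
assert bar_P_columns([Fraction(3, 2), 3, Fraction(1, 2), 2]) == [2, 2]
```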
Then from our Theorem 1.3, we have the following theorem and corollary.
**Theorem 5.1**.: _Let \(\mathfrak{g}=\mathfrak{sl}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). A non-integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(c_{i}(\bar{P}(\lambda))=m_{i}\) for \(1\leq i\leq k\), where \((m_{1},...,m_{k})\) is the arrangement of the sequence \((n_{1},n_{2},..,n_{k})\) in descending order._
**Corollary 5.2**.: _Let \(\mathfrak{g}=\mathfrak{sl}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). A non-integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if we divide \(\lambda=(\lambda_{1},...,\lambda_{n})\) into several subsequences such that any two entries of a subsequence has an integral difference and the restriction of \(\lambda\) to each subsequence is a secular weight in the corresponding parabolic category._
For the types \(B_{n},C_{n}\) and \(D_{n}\), let \([\lambda]\) be the set of maximal subsequences \(x\) of \(\lambda\) such that any two entries of \(x\) have an integral difference or sum. In this case, we set \([\lambda]_{1}\) (resp. \([\lambda]_{2}\)) to be the subset of \([\lambda]\) consisting of sequences with all entries belonging to \(\mathbb{Z}\) (resp. \(\frac{1}{2}+\mathbb{Z}\)). We set \([\lambda]_{1,2}=[\lambda]_{1}\cup[\lambda]_{2},\quad[\lambda]_{3}=[\lambda] \setminus[\lambda]_{1,2}\). Since there is at most one element in \([\lambda]_{1}\) and \([\lambda]_{2}\), we denote them by \((\lambda)_{(0)}\) and \((\lambda)_{(\frac{1}{2})}\).
Let \(x=(\lambda_{i_{1}},\lambda_{i_{2}},\cdots\lambda_{i_{r}})\in[\lambda]_{3}\). Let \(y=(\lambda_{j_{1}},\lambda_{j_{2}},\cdots,\lambda_{j_{p}})\) be the maximal subsequence of \(x\) such that \(j_{1}=i_{1}\) and the difference of any two entries of \(y\) is an integer. Let \(z=(\lambda_{k_{1}},\lambda_{k_{2}},\cdots,\lambda_{k_{q}})\) be the subsequence obtained by deleting \(y\) from \(x\), which is possibly empty. Define
\[\tilde{x}=(\lambda_{j_{1}},\lambda_{j_{2}},\cdots,\lambda_{j_{p}},-\lambda_{k _{q}},-\lambda_{k_{q-1}},\cdots,-\lambda_{k_{1}}).\]
For \(\lambda_{Y}\in(\lambda)_{(0)}\cup(\lambda)_{(\frac{1}{2})}\), we can get a Young tableau \(P(\lambda_{Y}^{-})\).
We define
\[F_{a}(x)=\sum_{j\geq 1}\frac{c_{j}(c_{j}-1)}{2}=\sum_{k\geq 1}(k-1)p_{k},\]
where \(p(x)=(p_{1},p_{2},\cdots,p_{N})\) is the shape of the Young tableau \(P(x)\) and \(p(x)^{t}=(c_{1},...,c_{K})\).
**Proposition 5.3** ([1, Theorem 4.6] and [1, Theorem 5.7] ).: _The GK dimension of \(L(\lambda)\) can be computed as follows._
1. _If_ \(\mathfrak{g}=\mathfrak{sl}(n,\mathbb{C})\)_,_ \[\mathrm{GKdim}L(\lambda)=\frac{n(n-1)}{2}-\sum_{x\in[\lambda]}F_{a}(x).\]
2. _If_ \(\mathfrak{g}=\mathfrak{sp}(n,\mathbb{C})\)_,_ \[\mathrm{GKdim}L(\lambda)=n^{2}-F_{b}((\lambda)_{(0)}^{-})-F_{d}((\lambda)_{( \frac{1}{2})}^{-})-\sum_{x\in[\lambda]_{3}}F_{a}(\tilde{x}).\]
3. _If_ \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\)_,_ \[\mathrm{GKdim}L(\lambda)=n^{2}-F_{b}((\lambda)_{(0)}^{-})-F_{b}((\lambda)_{( \frac{1}{2})}^{-})-\sum_{x\in[\lambda]_{3}}F_{a}(\tilde{x}).\]
4. _If_ \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\)_,_ \[\mathrm{GKdim}L(\lambda)=n^{2}-n-F_{d}((\lambda)_{(0)}^{-})-F_{d}((\lambda)_{( \frac{1}{2})}^{-})-\sum_{x\in[\lambda]_{3}}F_{a}(\tilde{x}).\]
From the above Proposition 5.3, we can see that a non-integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if all of the functions '\(F\)' in the summation of \(\mathrm{GKdim}L(\lambda)\) take their minimal values.
Thus similar to type \(A\), we have the following result.
**Corollary 5.4**.: _Let \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\), \(\mathfrak{sp}(n,\mathbb{C})\) or \(\mathfrak{so}(2n,\mathbb{C})\). Let \(\mathfrak{p}\) be a parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). A non-integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \((\lambda)_{(0)}\), \((\lambda)_{(\frac{1}{2})}\) and all \(\tilde{x}\) for \(x\in[\lambda]_{3}\) are secular integral weights in the corresponding parabolic categories._
## 6. Richardson orbits associated to parabolic subalgebras
In this section, we will give the partition type for the Richardson orbit \(\mathcal{O}\) associated to a standard parabolic subalgebra \(\mathfrak{p}\) of type \((n_{1},n_{2},...,n_{k})\). We use \(\mathbf{p}=[p_{1},...,p_{k}]\) to denote a partition of an integer \(n\).
First we recall some results about nilpotent orbits of classical types. Some details can be found in [10].
**Proposition 6.1** ([10]).: _Nilpotent orbits in \(\mathfrak{so}(2n+1,\mathbb{C})\) are in one-to-one correspondence with the set of partitions of \(2n+1\) in which even parts occur with even multiplicity._
**Proposition 6.2** ([10]).: _Nilpotent orbits in \(\mathfrak{sp}(n,\mathbb{C})\) are in one-to-one correspondence with the set of partitions of \(2n\) in which odd parts occur with even multiplicity._
**Proposition 6.3** ([10]).: _Nilpotent orbits in \(\mathfrak{so}(2n,\mathbb{C})\) are in one-to-one correspondence with the set of partitions of \(2n\) in which even parts occur with even multiplicity, except that each "very even" partition \(\mathbf{d}\) (consisting of only even parts) correspond to two orbits, denoted by \(\mathcal{O}_{\mathbf{d}}^{I}\) and \(\mathcal{O}_{\mathbf{d}}^{II}\)._
**Proposition 6.4** ([10] and [10]).: _The Richardson orbit \(\mathcal{O}\) associated to a standard parabolic subalgebra \(\mathfrak{p}\) of type \((n_{1},n_{2},...,n_{k})\) is unique and special. We have \(\dim\mathcal{O}=2\dim(\mathfrak{u})\)._
**Theorem 6.5**.: _Suppose \(\mathfrak{g}=\mathfrak{sl}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). The Richardson orbit \(\mathcal{O}\) associated to \(\mathfrak{p}\) has partition \(\mathbf{p}=[m_{1},...,m_{k}]^{t}\), where \((m_{1},...,m_{k})\) is the arrangement of the sequence \((n_{1},n_{2},..,n_{k})\) in descending order._
Proof.: See [10, Theorem 7.2.3].
**Theorem 6.6**.: _Suppose \(\mathfrak{g}=\mathfrak{so}(2n+1,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). The Richardson orbit \(\mathcal{O}\) associated to \(\mathfrak{p}\) has partition_
\[\mathbf{p}=[p_{1},p_{2},...,p_{2m_{s}},p_{2m_{s}+1}+1,p_{2m_{s}+2},...,p_{N}]_{B},\]
_where \(p=(p_{1},...,p_{N})\) is the shape of the Z-diagram \(P\) of type \((n_{k};n_{1},...,n_{k-1})\) so that_
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{s-1},m_{s-1},2m_{s},m_{s+1},m_{s+1},...,m_{k},m_{k}).\]
_Here \((m_{1},...,m_{k})\) is the arrangement of the sequence \((n_{1},n_{2},..,n_{k})\) in descending order and \(m_{s}=n_{k}\)._
Proof.: From Theorem 1.4, an integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(P(\lambda^{-})\) has the same odd boxes with a Z-diagram of type \((n_{k};n_{1},...,n_{k-1})\). Recall that in this case, we can write \(\lambda=w\mu\) for a unique anti-dominant \(\mu\in\mathfrak{h}^{*}\) and a unique minimal length element \(w\in W\). By [14, Proposition 4.3] or [13, Theorem 48], \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(w\) belongs to the Kazhdan-Lusztig right cell containing \(w_{0}^{\mathfrak{p}}\), where \(w_{0}^{\mathfrak{p}}\) is the longest element in the parabolic subgroup of \(W\) corresponding to the parabolic subalgebra \(\mathfrak{p}\). By Springer correspondence, we have \(\mathcal{O}_{w}=\mathcal{O}_{w_{0}^{\mathfrak{p}}}\) being special, where \(\mathcal{O}_{w_{0}^{\mathfrak{p}}}\) is the desired Richardson orbit. From [1], \(\mathcal{O}_{w}\) has the same odd diagram with \(P(\lambda^{-})\). Thus \(\mathcal{O}_{w}\) has the same odd diagram with the Z-diagram \(P\) of type \((n_{k};n_{1},...,n_{k-1})\). From the Z-diagram \(P\), by using H-algorithm, we can get the special partition \(\mathbf{p}\) corresponding to \(\mathcal{O}_{w}=\mathcal{O}_{w_{0}^{\mathfrak{p}}}\).
When \(n_{k}=0\), \(P\) is a Young diagram with shape \(p=(p_{1},...,p_{N})\) such that
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{k-1},m_{k-1}).\]
Note that this \(p\) contains only even parts. After the H-algorithm on \(p\), \(p_{1}\) becomes \(p_{1}+1\), some \(p_{2i}\) becomes \(p_{2i}-1\) and \(p_{2i+1}\) becomes \(p_{2i+1}+1\) (when \(p_{2i}\) occurs with odd multiplicity and \(p_{2i}>p_{2i+1}\)). Recall that a partition \(\mathbf{q}\) of type \(B\) is special if and only if its dual partition \(\mathbf{q}^{t}\) is of type \(B\). Thus from Proposition 6.1 and 6.4 we have \(\mathbf{p}=[p_{1}+1,p_{2},...,p_{N}]_{{}_{B}}\) since it is special and has the same odd diagram with the Z-diagram \(P\).
When \(n_{k}>0\), \(P\) is a Young diagram with shape \(p=(p_{1},...,p_{N})\) such that
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{s-1},m_{s-1},2m_{s},m_{s+1},m_{s+1},...,m_{k},m_{k}),\]
where \(m_{s}=n_{k}\). After the H-algorithm on \(p\), the first \(2m_{s}\) parts will not change since they are odd parts. We only need to do \(B\)-collapse on the rest parts. Thus we have
\[\mathbf{p}=H(p)=[p_{1},p_{2},...,p_{2m_{s}},p_{2m_{s}+1}+1,p_{2m_{s}+2},...,p_{N}]_{B}.\]
Combining the above two cases, we finish the proof.
**Example 6.7**.: Let \(\mathfrak{g}=\mathfrak{so}(23,\mathbb{C})\). Suppose \(\Delta\setminus I=\{\alpha_{1},\alpha_{4},\alpha_{9}\}\). Then the corresponding parabolic subalgebra \(\mathfrak{p}_{I}\) is a standard parabolic subalgebra of type \((1,3,5,2)\). By Theorem 1.4, a simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(P(\lambda^{-})\) has the same odd boxes with the following Z-diagram \(P\) of type \((2;1,3,5)\):
\[\begin{array}{|c|c|c|c|}\hline B&A&B&B\\ \hline B&A&B\\ \hline B&A&B\\ \hline B&A&\\ \hline B&\\ \hline\end{array}.\]
The shape of \(P(\lambda^{-})^{\mathrm{odd}}\) is \(p(\lambda^{-})^{\mathrm{odd}}=p^{\mathrm{odd}}=(3,3,2,2,1)\), where \(p\) is the shape of the Z-diagram \(P\).
From this Z-diagram \(P\), we can get a Richardson orbit \(\mathcal{O}\) with partition \(\mathbf{p}=[p_{1},p_{2},...,p_{2m_{s}},p_{2m_{s}+1}+1,p_{2m_{s}+2},...,p_{N}]_{B}=[7,5,5,3,2+1]_{B}=[7,5,5,3,3]\).
For example, suppose \(\lambda=(-12,-9,-10,-11,-4,-5,-6,-7,-8,4,3)\in\mathfrak{h}^{*}\). Then \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular. We can check that \(P(\lambda^{-})\) has the same odd boxes with the Z-diagram \(P\) of type \((2;1,3,5)\).
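The partition in Example 6.7 can be recomputed mechanically from Theorem 6.6: take the shape of the Z-diagram of type \((2;1,3,5)\), add \(1\) to the part \(p_{2m_{s}+1}\), and apply the \(B\)-collapse. The sketch below re-implements the two small routines used earlier and is only a numerical check.

```python
from collections import Counter

def z_shape(a0, bs):
    """Row shape of the Z-diagram of type (a0; b_1, ..., b_{k-1})."""
    cols = sorted(([2 * a0] if a0 > 0 else []) + [b for b in bs for _ in range(2)], reverse=True)
    return [sum(1 for c in cols if c >= i) for i in range(1, cols[0] + 1)]

def b_collapse(parts):
    """B-collapse of a partition, following Definition 2.10."""
    parts = list(parts)
    while True:
        counts = Counter(parts)
        bad = [q for q in counts if q > 0 and q % 2 == 0 and counts[q] % 2 == 1]
        if not bad:
            return parts
        q = max(bad)
        i = max(k for k, v in enumerate(parts) if v == q)
        parts[i] = q - 1
        parts.append(0)
        j = next(k for k in range(i + 1, len(parts)) if parts[k] < q - 1)
        parts[j] += 1
        parts = [v for v in parts if v > 0]

p = z_shape(2, [1, 3, 5])
assert p == [7, 5, 5, 3, 2]
ms = 2                        # m_s = n_k
p[2 * ms] += 1                # add 1 to p_{2 m_s + 1} (1-based), i.e. p[4] in 0-based indexing
assert b_collapse(p) == [7, 5, 5, 3, 3]
```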
**Theorem 6.8**.: _Suppose \(\mathfrak{g}=\mathfrak{sp}(n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). The Richardson orbit \(\mathcal{O}\) associated to \(\mathfrak{p}\) has partition_
\[\mathbf{p}=[p_{1},p_{2},...,p_{N}]_{{}_{C}},\]
_where \(p=(p_{1},...,p_{N})\) is the shape of the Z-diagram \(P\) of type \((n_{k};n_{1},...,n_{k-1})\) so that_
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{s-1},m_{s-1},2m_{s},m_{s+1},m_{s+1},..., m_{k},m_{k}).\]
_Here \((m_{1},...,m_{k})\) is the arrangement of the sequence \((n_{1},n_{2},..,n_{k})\) in descending order and \(m_{s}=n_{k}\)._
Proof.: Similarly from the Z-diagram \(P\) of type \((n_{k};n_{1},...,n_{k-1})\), by using H-algorithm, we can get the special partition \(\mathbf{p}\) corresponding to \(\mathcal{O}_{w}=\mathcal{O}_{w_{0}^{\mathfrak{p}}}\).
When \(n_{k}=0\), \(P\) is a Young diagram with shape \(p=(p_{1},...,p_{N})\) such that
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{k-1},m_{k-1}).\]
Note that this \(p\) contains only even parts, which is already a special partition of type \(C\). Thus we have \(\mathbf{p}=[p_{1},p_{2},...,p_{N}]\).
When \(n_{k}>0\), \(P\) is a Young diagram with shape \(p=(p_{1},...,p_{N})\) such that
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{s-1},m_{s-1},2m_{s},m_{s+1},m_{s+1},..., m_{k},m_{k}),\]
where \(m_{s}=n_{k}\). After the H-algorithm on \(p\), similar to type \(B\) we have \(\mathbf{p}=[p_{1},p_{2},...,p_{N}]_{{}_{C}}\) since the last \(m_{1}-2m_{s}\) parts are even parts and they will not change after H-algorithm.
Combining the above two cases, we finish the proof.
**Theorem 6.9**.: _Suppose \(\mathfrak{g}=\mathfrak{so}(2n,\mathbb{C})\). Let \(\mathfrak{p}\) be a standard parabolic subalgebra of type \((n_{1},n_{2},...,n_{k})\). When \(n_{k}\neq 1\), the Richardson orbit \(\mathcal{O}\) associated to \(\mathfrak{p}\) has partition_
\[\mathbf{p}=[p_{1},p_{2},...,p_{N}]_{{}_{D}},\]
_where \(p=(p_{1},...,p_{N})\) is the shape of the Z-diagram \(P\) of type \((n_{k};n_{1},...,n_{k-1})\) so that_
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{s-1},m_{s-1},2m_{s},m_{s+1},m_{s+1},..., m_{k},m_{k}).\]
_Here \((m_{1},...,m_{k})\) is the arrangement of the sequence \((n_{1},n_{2},..,n_{k})\) in descending order and \(m_{s}=n_{k}\)._
_When \(n_{k}=1\), the Richardson orbit \(\mathcal{O}\) associated to \(\mathfrak{p}\) has partition_
\[\bar{\mathbf{p}}=[p^{\prime}_{1},p^{\prime}_{2},...,p^{\prime}_{N}]_{{}_{D}},\]
_where \(\bar{p}=(p^{\prime}_{1},...,p^{\prime}_{N})\) is the shape of the Z-diagram \(\bar{P}\) of type \((0;n_{1},...,n_{k-2},n_{k-1}+1)\) so that_
\[\bar{p}^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{k-1},m_{k-1}).\]
_Here \((m_{1},...,m_{k-1})\) is the arrangement of the sequence \((n_{1},n_{2},..,n_{k-2},n_{k-1}+1)\) in descending order._
Proof.: Similarly from the Z-diagram \(P\) of type \((n_{k};n_{1},...,n_{k-1})\), by using H-algorithm, we can get the special partition \(\mathbf{p}\) corresponding to \(\mathcal{O}_{w}=\mathcal{O}_{w_{0}^{\mathfrak{p}}}\). In this case, \(\mathcal{O}_{w}\) has the same even diagram with the Z-diagram \(P\).
Since the case of \(n_{k}=1\) can be reduced to the case of \(n_{k}=0\), we will not talk about it here.
When \(n_{k}=0\), \(P\) is a Young diagram with shape \(p=(p_{1},...,p_{N})\) such that
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{k-1},m_{k-1}).\]
Note that this \(p\) contains only even parts. After the H-algorithm on \(p\), similar to type \(B\) we have \(\mathbf{p}=[p_{1},p_{2},...,p_{N}]_{{}_{D}}=p_{{}_{D}}\).
When \(n_{k}>1\), \(P\) is a Young diagram with shape \(p=(p_{1},...,p_{N})\) such that
\[p^{t}=(m_{1},m_{1},m_{2},m_{2},...,m_{s-1},m_{s-1},2m_{s},m_{s+1},m_{s+1},..., m_{k},m_{k}),\]
where \(m_{s}=n_{k}\). After the H-algorithm on \(p\), we have
\[\mathbf{p}=H(p)=[p_{1},p_{2},...,p_{N}]_{{}_{D}}=p_{{}_{D}}\]
since the first \(2m_{s}\) parts are odd parts and they will not change after the H-algorithm.
Combining the above two cases, we finish the proof.
**Remark 6.10**.: _When the associated Richardson orbit is a very even orbit, its numeral can be determined by [13, Theorem 7.3.3(ii)] since \(n_{k}=0\) or \(1\) in this case._
**Example 6.11**.: Let \(\mathfrak{g}=\mathfrak{so}(22,\mathbb{C})\). Suppose \(\Delta\setminus I=\{\alpha_{1},\alpha_{8}\}\). Then the corresponding parabolic subalgebra \(\mathfrak{p}_{I}\) is a standard parabolic subalgebra of type \((1,7,3)\). By Theorem 1.5, a simple integral highest weight module \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular if and only if \(P(\lambda^{-})\) has the same even boxes with the following Z-diagram \(P\) of type \((3;1,7)\):
\[\begin{array}{|c|c|c|}\hline B&A&B\\ \hline B&A\\ \hline B&A\\ \hline B&A\\ \hline B&A\\ \hline B&\\ \hline\end{array}.\]
The shape of \(P(\lambda^{-})^{\mathrm{ev}}\) is \(p(\lambda^{-})^{\mathrm{ev}}=p^{\mathrm{ev}}=(3,1,2,1,2,1,1)\), where \(p\) is the shape of the Z-diagram \(P\).
From this Z-diagram \(P\), we can get a Richardson orbit \(\mathcal{O}\) with partition \(\mathbf{p}=[p_{1},p_{2},...,p_{N}]_{{}_{D}}=[5,3^{5},2]_{{}_{D}}=[5,3^{5},1,1]\).
For example, suppose \(\lambda=(-20,-5,-6,-7,-8,-9,-10,-11,3,2,1)\in\mathfrak{h}^{*}\). Then \(L(\lambda)\) in \(\mathcal{O}^{\mathfrak{p}}\) is secular. We can check that \(P(\lambda^{-})\) has the same even boxes with the Z-diagram \(P\) of type \((3;1,7)\).
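Similarly, the partition of Example 6.11 follows from Theorem 6.9 by taking the shape of the Z-diagram of type \((3;1,7)\) and applying the \(D\)-collapse; the following lines confirm this numerically.

```python
from collections import Counter

def d_collapse(parts):
    """D-collapse (Definition 2.10): even parts must occur with even multiplicity."""
    parts = list(parts)
    while True:
        counts = Counter(parts)
        bad = [q for q in counts if q > 0 and q % 2 == 0 and counts[q] % 2 == 1]
        if not bad:
            return parts
        q = max(bad)
        i = max(k for k, v in enumerate(parts) if v == q)
        parts[i] = q - 1
        parts.append(0)
        j = next(k for k in range(i + 1, len(parts)) if parts[k] < q - 1)
        parts[j] += 1
        parts = [v for v in parts if v > 0]

cols = sorted([2 * 3] + [1, 1, 7, 7], reverse=True)                      # columns of the Z-diagram (3; 1, 7)
shape = [sum(1 for c in cols if c >= i) for i in range(1, cols[0] + 1)]  # its row shape
assert shape == [5, 3, 3, 3, 3, 3, 2]
assert d_collapse(shape) == [5, 3, 3, 3, 3, 3, 1, 1]                     # the partition of Example 6.11
```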
### Acknowledgments
We would like to thank Volodymyr Mazorchuk for very helpful discussions about the secular highest weight modules in the case of type \(A\). The first author was supported in part by NSFC Grant No. 12171344 and the National Key R & D Program of China (No. 2018YFA0701700 and No. 2018YFA0701701).
|
2304.09043 | Continuous-Time Range-Only Pose Estimation | Range-only (RO) localization involves determining the position of a mobile
robot by measuring the distance to specific anchors. RO localization is
challenging since the measurements are low-dimensional and a single range
sensor does not have enough information to estimate the full pose of the robot.
As such, range sensors are typically coupled with other sensing modalities such
as wheel encoders or inertial measurement units (IMUs) to estimate the full
pose. In this work, we propose a continuous-time Gaussian process (GP)- based
trajectory estimation method to estimate the full pose of a robot using only
range measurements from multiple range sensors. Results from simulation and
real experiments show that our proposed method, using off-the-shelf range
sensors, is able to achieve comparable performance and in some cases outperform
alternative state-of-the-art sensor-fusion methods that use additional sensing
modalities. | Abhishek Goudar, Timothy D. Barfoot, Angela P. Schoellig | 2023-04-18T15:04:58Z | http://arxiv.org/abs/2304.09043v2 | # Continuous-Time Range-Only Pose Estimation
###### Abstract
Range-only (RO) localization involves determining the position of a mobile robot by measuring the distance to specific anchors. RO localization is challenging since the measurements are low-dimensional and a single range sensor does not have enough information to estimate the full pose of the robot. As such, range sensors are typically coupled with other sensing modalities such as wheel encoders or inertial measurement units (IMUs) to estimate the full pose. In this work, we propose a continuous-time Gaussian process (GP)-based trajectory estimation method to estimate the full pose of a robot using only range measurements from multiple range sensors. Results from simulation and real experiments show that our proposed method, using off-the-shelf range sensors, is able to achieve comparable performance and in some cases outperform alternative state-of-the-art sensor-fusion methods that use additional sensing modalities.
localization; sensor fusion; range-only; continuous time estimation;
## I Introduction
Accurate localization is essential for the reliable operation of any autonomous system. Generally, different sensing modalities are used for localization in different environments. In outdoor environments, the Global Positioning System (GPS) [1] is one of the preferred methods of localization. In recent years, localization against a map using lidars and cameras has also been widely adopted in self-driving cars. For indoor environments, the sensing modalities include vision, laser, magnetic tapes, and radio wave-based sensors such as WiFi and ultrawideband (UWB) [2].
The underlying principle of radio wave-based positioning technologies such as GPS and UWB is point-to-point range measurements between a transmitter and a receiver. In range-only (RO) localization, a robot with a range sensor such as a radio measures its distance to other radios known as _anchors_. The distance measurements are then combined to determine the position of the robot [1]. RO localization is challenging since range measurements are sparse and hence are typically used only for position estimation. In the case where the full pose of the robot is needed, range sensors are typically used in combination with additional sensors such as wheel encoders [3] or inertial measurement units (IMUs) [4]. A common drawback of such sensor-fusion methods is that sufficient excitation or movement is required before the full pose can be determined unambiguously [5, 6]. For example, in tightly-coupled UWB-IMU systems, excitation of the accelerometer and gyroscope axes is necessary for full-state observability [6]. This can be a limiting factor for autonomous robots that frequently encounter _stop-and-go_ motion patterns in settings such as warehouses and factories. Additionally, sensors such as wheel encoders are susceptible to wheel slippage, which can result in poor performance, especially in slippery and off-road conditions.
In this work, we propose a continuous-time trajectory estimation method that is able to estimate the full pose of a robot using only range measurements from multiple range sensors. Unlike conventional multimodal sensor-fusion algorithms, the proposed method does not require excitation or motion for full pose estimation, although it may still benefit from it. Additionally, the proposed method is not affected by naturally occurring conditions such as wheel slippage and lack of adequate motion.
In summary, the main contributions of this work are _(i)_ a continuous-time approach to 2D and 3D pose estimation using only range measurements from multiple range sensors, and _(ii)_ demonstration of the proposed approach in simulation and real experiments.
Figure 1: Our test platform for 2D trajectory estimation is a custom-built wheeled holonomic robot. It is equipped with two ultrawideband (UWB) radios to estimate the full 2D pose using continuous-time trajectory estimation. The robot also has 3 mecanum wheels with encoders, which are used for comparison with the baseline algorithm.
The paper is organized as follows. We review the related work in Section II. We formulate our problem in Section III and describe our proposed method in Section IV. We present results from evaluation of our approach in simulation and real experiments in Section V. We include a discussion of the results and challenges of our approach in Section VI and conclude the paper in Section VII.
## II Related work
Range-only (RO) localization has a rich history since it is widely used in positioning technologies such as the Global Positioning System (GPS) [1], and more recently in other wireless positioning technologies such as ultrawideband (UWB) [7].
As alluded to previously, the sparse nature of range measurements does not afford full pose estimation. As such, range sensors are typically combined with wheel odometry [8, 3, 9] for the localization of ground robots. Another common approach to pose estimation is to combine range sensors with inertial measurement units (IMUs) [4, 10, 6]. More recently, other sources of pose such as fiducial markers [11] and visual-inertial-odometry have been used as well [12, 13]. Common estimation frameworks used for positioning include parametric filtering methods, [4, 10, 6], nonparametric filtering methods [3], and more recently, optimization-based methods [12] have gained traction. Most of the previous works use a discrete-time formulation for trajectory estimation.
In recent years, there has been growing interest in continuous-time approaches to RO localization. A spline-based approach to the fusion of UWB and IMU data for continuous-time trajectory estimation is proposed in [14]. In [15], polynomial basis functions are used to parameterize the robot trajectory and to derive the conditions necessary for recovering the trajectory. More recently, continuous-time trajectory estimation based on Gaussian process (GP) regression [16] was applied to RO localization [17]. Unlike the previous works, [15, 17] use only range measurements to estimate the robot position over time. The application of range measurements for 2D relative pose estimation between multiple agents using multiple range sensors was recently shown in [18].
An alternative approach to pose estimation using range measurements from multiple range sensors is to combine multilateration [1] with 3D point set registration [19]. A limitation of this approach is that it requires synchronization between the anchors and the range sensors.
In this work, we take the GP regression approach to continuous-time trajectory estimation of [16, 20] and perform full 2D and 3D pose estimation using only range measurements from multiple range sensors. In contrast to previous methods, our method does not require additional sensing modalities nor does it require synchronization between the anchors and the range sensors. To the best of the authors' knowledge, continuous-time 2D and 3D pose estimation using only asynchronous range measurements from multiple range sensors has not appeared in the literature.
## III Problem statement
The setup we consider is that of a robot navigating in an environment where multiple anchors have been installed. The objective of this work is to estimate the pose of the robot using only range measurements made to the anchors. We assume that
* multiple (\(\geq 3\)) non-collocated anchors are installed in the environment,
* the position of the anchors is known,
* each robot is equipped with at least 2 non-collocated range sensors for 2D pose estimation and at least 3 range sensors for 3D pose estimation, and
* the position of the range sensors in the robot body frame is known.
The proposed method does not require range measurements between the anchors and the range sensors to arrive simultaneously in a synchronized manner. Specifically, a single range measurement between a range sensor on the robot and an anchor is necessary at each time step.
## IV Methodology
### _Preliminaries_
We introduce the frame convention and the notation that will be used throughout the paper. We denote the world frame by \(\mathcal{F}_{\mathcal{W}}\) and the robot frame by \(\mathcal{F}_{i}\). We represent the robot pose in the world frame using elements of the special Euclidean _Lie_ group \(\mathbf{T}_{wi}\in SE(n)\), where \(n=2\) for 2D pose and \(n=3\) for 3D pose. We will use 3D poses for exposition with the understanding that the proposed method carries over to 2D poses. A generic pose element \(\mathbf{T}\) is parameterized as \(\mathbf{T}=\{\mathbf{p},\mathbf{R}\}\), where \(\mathbf{p}\in\mathbb{R}^{3\times 1}\) represents the position and \(\mathbf{R}\in SO(3)\), a member of the special orthogonal group, represents the orientation. We use the _right perturbation_ convention of [21] to represent perturbations around the nominal pose.
Figure 2: Factor graph for a range-only (RO) localization setup. In RO localization, a robot equipped with a range sensor, such as a wireless radio, estimates its position by measuring the distance to other wireless radios, known as _anchors_, installed in the environment. The trajectory consists of a set of nodes representing the robot state, \(\mathbf{x}(t)\). The anchor positions, \(\mathbf{p}_{a\#}\), are known, as indicated by the filled circles. Motion prior factors are denoted by \(\mathbf{o}(t)\) and the range measurements by factors \(r_{\#}(t)\), where \(\#\) denotes the anchor id.
Specifically, a generic pose is decomposed into a nominal pose, \(\bar{\mathbf{T}}\in SE(3)\), and a small perturbation \(\mathbf{\xi}\in\mathbb{R}^{6\times 1}\) as
\[\mathbf{T}=\bar{\mathbf{T}}\exp(\mathbf{\xi}^{\wedge}), \tag{1}\]
where the operator, \((\cdot)^{\wedge}\), maps an element of \(\mathbb{R}^{6\times 1}\) to an element of the _Lie algebra_, \(\mathfrak{se}(3)\). The \(\exp(\cdot)\) operator is a _retraction_ operation for \(SE(3)\) and maps an element of the Lie algebra \(\mathfrak{se}(3)\) back to the Lie group, \(SE(3)\).
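To make the right-perturbation convention concrete, the following minimal NumPy sketch applies a perturbation \(\mathbf{\xi}=[\boldsymbol{\rho};\boldsymbol{\phi}]\) to a nominal pose via \(\mathbf{T}=\bar{\mathbf{T}}\exp(\mathbf{\xi}^{\wedge})\) using the standard closed-form \(SE(3)\) exponential map. This is an illustrative sketch only; the translation-before-rotation ordering of \(\mathbf{\xi}\) and all function names are assumptions of the example rather than details taken from the estimator described here.

```python
import numpy as np

def hat(phi):
    """3x3 skew-symmetric matrix of a 3-vector (the ^ operator for so(3))."""
    x, y, z = phi
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def exp_se3(xi):
    """Closed-form exponential map for SE(3); xi = [rho, phi] (assumed ordering)."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    Phi = hat(phi)
    if theta < 1e-9:
        # First-order fallback near zero rotation.
        R, V = np.eye(3) + Phi, np.eye(3)
    else:
        # Rodrigues formula for the rotation and the SO(3) left Jacobian for translation.
        R = (np.eye(3) + np.sin(theta) / theta * Phi
             + (1 - np.cos(theta)) / theta**2 * Phi @ Phi)
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * Phi
             + (theta - np.sin(theta)) / theta**3 * Phi @ Phi)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

def retract(T_bar, xi):
    """Right-perturbation update T = T_bar * exp(xi^), cf. Eq. (1)."""
    return T_bar @ exp_se3(xi)

# Example: perturb the identity pose by a small twist.
T = retract(np.eye(4), np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05]))
```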
As mentioned previously, the setup we consider is of a robot with multiple range sensors navigating in an environment with anchors. Range measurements between the range sensors on the robot and the anchors arrive in an asynchronous manner. Since a single range measurement is not sufficient to constrain the full state, we use motion priors as constraints between subsequent range measurements to constrain the full state. The factor graph for such a setup is shown in Figure 2. The motion priors are added as _binary_ factors, \(\mathbf{o}(t_{i})\), between two consecutive robot states, \(\mathbf{x}(t_{i})\) and \(\mathbf{x}(t_{i+1})\), and the range measurements are added as _unary_ factors, \(r_{j}(t_{i})\). We perform inference on the factor graph using _maximum a posteriori_ (MAP) estimation. Under the Gaussian noise assumption, this is equivalent to solving a nonlinear least-squares problem. Next, we describe the motion model used to generate the motion prior and the range measurement model.
### _Motion model_
We adopt the Gaussian process (GP) regression approach to continuous-time trajectory estimation of [16] and use the white-noise-on-acceleration (WNOA) motion model [20]. Since the pose variables are nonlinear, we use the _local pose variable_ formulation of [20] to define a linear time-invariant (LTI) motion model on the local pose variables, which are subsequently stitched together to generate the whole trajectory. The motivation for choosing such a motion prior is that the resulting system matrices are sparse and can be solved very efficiently. For completeness, we present here an overview of the motion prior generation for the right-perturbation scheme. A more detailed description using the left-perturbation scheme can be found in [21].
The robot state at any time \(t\) is given by \(\mathbf{x}(t)=\{\mathbf{T}(t),\mathbf{\varpi}(t)\}\in SE(3)\times\mathbb{R}^{6 \times 1}\), where, as before, \(\mathbf{T}(t)\) is the robot pose in frame \(\mathcal{F}_{\mathcal{W}}\) and \(\mathbf{\varpi}\) is the generalized body-centric velocity of the robot. We drop subscripts denoting the frames to reduce clutter. We define the local pose variables as perturbations around the nominal pose as
\[\mathbf{T}(t)=\mathbf{T}(t_{k})\exp(\mathbf{\xi}_{k}^{\wedge}(t)), \tag{2}\]
where \(\mathbf{\xi}_{k}\in\mathbb{R}^{6\times 1}\) is the local pose variable. We define a motion model on the local pose variables using the following LTI stochastic differential equation (SDE) [20, 21]:
\[\frac{d}{dt}\begin{bmatrix}\mathbf{\xi}_{k}(t)\\ \mathbf{\dot{\xi}}_{k}(t)\end{bmatrix}=\begin{bmatrix}\mathbf{0}&\mathbf{I}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}\underbrace{\begin{bmatrix}\mathbf{\xi}_{k}(t)\\ \mathbf{\dot{\xi}}_{k}(t)\end{bmatrix}}_{\mathbf{\gamma}_{k}(t)}+\begin{bmatrix} \mathbf{0}\\ \mathbf{I}\end{bmatrix}\mathbf{w}_{k}(t),\]
where \(\mathbf{I}\) is the identity matrix of appropriate dimensions, \(\mathbf{w}_{k}(t)\sim\mathcal{GP}(\mathbf{0},\mathbf{Q}(t-t^{\prime}))\) is a zero-mean GP with power spectral density matrix, \(\mathbf{Q}\). The local pose velocity, \(\mathbf{\dot{\xi}}_{k}(t)\), is related to the generalized body-centric velocity as \(\mathbf{\dot{\xi}}_{k}(t)=\mathbf{\mathcal{J}}_{r}^{-1}(\mathbf{\xi}_{k}(t))\mathbf{\varpi}(t )^{\vee}\), where \(\mathbf{\mathcal{J}}_{r}\) is the right jacobian of \(SE(3)\)[22], and the operator \((\cdot)^{\vee}\) maps an element of the Lie algebra \(\mathfrak{se}(3)\) to \(\mathbb{R}^{6\times 1}\). The above SDE can be integrated in closed form to obtain a sparse GP prior. The mean of the prior between two time instants is
\[\mathbf{\gamma}_{k-1}(t_{k})=\mathbf{\Phi}(t_{k},t_{k-1})\mathbf{\gamma}_{k-1}(t_{k-1}),\]
and the corresponding covariance is
\[\tilde{\mathbf{P}}_{k}(t_{k})=\mathbf{\Phi}(t_{k},t_{k-1})\tilde{\mathbf{P}}_{k}( t_{k-1})\mathbf{\Phi}(t_{k},t_{k-1})^{T}+\mathbf{Q}(t_{k}-t_{k-1}),\]
where the system transition matrix is
\[\mathbf{\Phi}(t_{k},t_{k-1})=\begin{bmatrix}\mathbf{I}&\mathbf{I}\Delta t_{k:k-1} \\ \mathbf{0}&\mathbf{I}\end{bmatrix},\]
and the noise between two time steps is
\[\mathbf{Q}(t_{k}-t_{k-1})=\begin{bmatrix}\frac{1}{3}\Delta t_{k:k-1}^{3}\mathbf{Q}&\frac{1}{2}\Delta t_{k:k-1}^{2}\mathbf{Q}\\ \frac{1}{2}\Delta t_{k:k-1}^{2}\mathbf{Q}&\Delta t_{k:k-1}\mathbf{Q}\end{bmatrix},\]
with \(\Delta t_{k:k-1}=t_{k}-t_{k-1}\).
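As a small illustration of the prior above, the helper below builds \(\mathbf{\Phi}(t_{k},t_{k-1})\) and \(\mathbf{Q}(t_{k}-t_{k-1})\) for a given time step from a chosen \(6\times 6\) power spectral density matrix (denoted `Qc` in the code to distinguish it from the discrete-time covariance). The function name and the example value of the power spectral density are assumptions of this sketch.

```python
import numpy as np

def wnoa_prior(dt, Qc):
    """Transition matrix and process-noise covariance of the WNOA prior over a step dt.

    Qc is the 6x6 power spectral density of the white noise on acceleration.
    """
    I = np.eye(6)
    Phi = np.block([[I, dt * I],
                    [np.zeros((6, 6)), I]])            # constant-velocity transition
    Q = np.block([[dt**3 / 3 * Qc, dt**2 / 2 * Qc],
                  [dt**2 / 2 * Qc, dt * Qc]])          # integrated process noise
    return Phi, Q

# Example: 20 Hz motion prior with an isotropic power spectral density (illustrative value).
Phi, Q = wnoa_prior(dt=0.05, Qc=np.diag([0.1] * 6))
```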
The error terms corresponding to the motion prior required for MAP estimation are given by
\[\mathbf{e}_{p}=\begin{bmatrix}\left(\mathbf{\varpi}(t_{k-1})\Delta t_{k:k-1}- \ln\left(\mathbf{T}(t_{k-1})^{-1}\mathbf{T}(t_{k})\right)\right)^{\vee},\\ \mathbf{\varpi}(t_{k-1})^{\vee}-\mathbf{\mathcal{J}}_{r}^{-1}(\ln(\mathbf{T}(t_{k-1}) ^{-1}\mathbf{T}(t_{k}))^{\vee})\mathbf{\varpi}(t_{k})^{\vee}\end{bmatrix},\]
where \(\ln(\cdot)\) is the inverse retraction operation and converts a member of the Lie group to its Lie algebra.
### _Range measurement model_
The range measurement at any time \(t\) between the robot and anchor \(j\) is given by
\[r_{j}(t)=\|\mathbf{p}_{a_{j}}-\mathbf{R}(t)\mathbf{p}_{u}-\mathbf{p}(t)\|_{2}+\eta_{r}(t), \tag{3}\]
where \(\|\cdot\|_{2}\) is the \(\ell^{2}\) norm, \(\mathbf{T}(t)=\{\mathbf{p}(t),\mathbf{R}(t)\}\) is the robot pose at time \(t\), \(\mathbf{p}_{a_{j}}\in\mathbb{R}^{3\times 1}\) is the position of anchor \(j\) in world frame, and \(\mathbf{p}_{u}\) is the position of the range sensor w.r.t robot body frame \(\mathcal{F}_{i}\), a.k.a _lever arm_, and \(\eta_{r}(t)\sim\mathcal{N}(0,\sigma_{r}^{2})\) is an additive white Gaussian noise of variance, \(\sigma_{r}^{2}\).
From the measurement model (3), we can see that the only term influencing the orientation of the robot is the lever arm: a larger lever arm provides better orientation estimation. This is especially true in the presence of noisy range measurements. Additionally, for 3D pose estimation, multiple (\(\geq 3\)) noncollinear range sensors are needed in order to excite all three axes of orientation. The error term
for MAP estimation corresponding to the range measurement model is
\[e_{r}=r_{j}(t)-\|\mathbf{p}_{a_{j}}-\mathbf{R}(t)\mathbf{p}_{u}-\mathbf{p}(t)\|_{2}.\]
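A minimal sketch of the range factor follows, evaluating the predicted range of (3) and the residual \(e_{r}\) for a single anchor; the variable names are illustrative assumptions of the example.

```python
import numpy as np

def predicted_range(p_anchor, R_wb, p_wb, p_lever):
    """Expected range between an anchor and a body-mounted range sensor, cf. Eq. (3)."""
    # The sensor's world-frame position is p_wb + R_wb @ p_lever, following Eq. (3).
    return np.linalg.norm(p_anchor - R_wb @ p_lever - p_wb)

def range_residual(r_measured, p_anchor, R_wb, p_wb, p_lever):
    """Error term e_r used in the MAP problem for one range measurement."""
    return r_measured - predicted_range(p_anchor, R_wb, p_wb, p_lever)
```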
### _Inference_
To keep the computational cost low, we use a _fixed-lag smoother_ (FLS) to combine a window of range measurements and motion priors to estimate the robot trajectory. The size of the window is parameterized by time duration, \(\delta t_{\text{fs}}\). A complete description of MAP estimation done in each fixed window can be found in [21]. States older than \(\delta t_{\text{fs}}\) are marginalized out. We use the GTSAM [23] library to implement the FLS.
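To illustrate the nonlinear least-squares structure that the smoother solves, the toy sketch below runs Gauss-Newton on position-only multilateration. It deliberately omits orientation, the motion prior, and marginalization, so it is a much-reduced stand-in for the estimator used in this work; anchor positions and iteration count are assumptions of the example.

```python
import numpy as np

def solve_position(anchors, ranges, x0, iters=10):
    """Toy Gauss-Newton for position-only multilateration (no orientation, no prior)."""
    x = x0.astype(float)
    for _ in range(iters):
        diffs = x - anchors                      # (m, 3) vectors from anchors to estimate
        preds = np.linalg.norm(diffs, axis=1)    # predicted ranges
        e = ranges - preds                       # residuals
        J = -diffs / preds[:, None]              # Jacobian of the residuals w.r.t. x
        dx = np.linalg.solve(J.T @ J, -J.T @ e)  # normal equations
        x = x + dx
    return x

anchors = np.array([[0., 0., 0.], [5., 0., 0.], [0., 5., 0.], [0., 0., 3.]])
truth = np.array([2., 1., 1.5])
ranges = np.linalg.norm(truth - anchors, axis=1)   # noise-free ranges for the toy example
print(solve_position(anchors, ranges, x0=np.array([1., 1., 1.])))
```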
## V Experiments
In this section, we present results from simulations and real experiments to demonstrate RO pose estimation under different settings. In simulation, we evaluate the effect of the range measurement noise and the lever arm length on estimation accuracy. We evaluate the proposed approach for 2D and 3D trajectory estimation in real experiments.
### _Simulations_
In simulations, we evaluate the sensitivity of RO localization to the magnitude of noise in range measurements and the length of the lever arm. Our simulation environment consists of 8 anchors and a quadrotor with 3 noncollinear range sensors. The quadrotor is controlled using ground truth and is commanded a straight-line trajectory. The length of the lever arm is equal for the three range sensors. The lever arm lengths are varied from \(\|\mathbf{p}_{u}^{i}\|_{2}=0.014\,\mathrm{m}\) to \(\|\mathbf{p}_{u}^{i}\|_{2}=2.8\,\mathrm{m}\). For a given length of the lever arm, the range measurements are corrupted with Gaussian noise of increasing variance (\(\sigma_{r}^{2}=0\,\mathrm{m^{2}}\) to \(\sigma_{r}^{2}=0.01\,\mathrm{m^{2}}\)). Position and orientation root-mean-square error (RMSE) from multiple simulations are shown in Figure 3. The plots show that orientation estimation is more susceptible to noise in range measurements, especially for smaller lever arms. This is expected because the resolution of any angle relies on the physical distance between the range sensors. However, noise in range measurements can negate the effect of this physical separation. In contrast, the estimation of position is relatively more robust to the noise in range measurements as it does not require range sensors to be physically separated. However, increasing measurement noise results in a higher position RMSE.
### _Real experiments_
#### Setup
We use the DW1000-based [24] ultrawideband (UWB) radios from Bitcraze as both anchors and range sensors on the robot. The test space consists of an arena of dimensions \(7\,\mathrm{m}\times 8\,\mathrm{m}\times 3.5\,\mathrm{m}\) with 8 UWB anchors installed in the corners of the space. The UWB radios are operated in two-way ranging (TWR) mode. The test space is also equipped with a Vicon motion capture system, which is used as a source of ground truth pose.
#### 2D Localization
For 2D localization, we use a custom-built wheeled holonomic robot, shown in Figure 1, as our test platform. The robot is equipped with two UWB radios and three wheel encoders. The length of the lever arm is \(\|\mathbf{p}_{u}^{i}\|_{2}=0.095\,\mathrm{m}\). We compare our method with sensor fusion of range measurements and wheel odometry data [25]. In this case, data from a single UWB radio is combined with velocity measurements from the wheel encoders using the same inference method as before. We refer to this baseline algorithm as UWB+ODOM.
Figure 4: Position and orientation RMSE box plots from real experiments for our proposed method and the baseline (UWB+ODOM). The proposed method uses only range measurements whereas the baseline uses a combination of wheel encoder data and range measurements. The proposed method achieves better average position RMSE with lower deviation compared to the baseline. A similar trend can be observed in the orientation RMSE. Larger deviations of the baseline are due to wheel slippage.
Figure 3: Results from simulations demonstrating the effect of magnitude of range measurement noise and lever arm length on orientation (top) and position (bottom) root-mean-square error (RMSE). Lever arm lengths are indicated by dashed vertical lines. For a given length of the lever arm, the range measurements are corrupted with Gaussian noise of increasing covariance (denoted by \(\sigma_{r}^{2}\)). The plots show that orientation estimation is more susceptible to noise in range measurements for smaller lever arms, whereas position estimation is relatively less sensitive.
We performed multiple experiments where the robot was driven manually along different trajectories in the test space and the sensor data was recorded onboard for offline evaluation of the two methods. UWB range data was processed to remove large outliers and constant biases for both algorithms.
Box plots of the position and the orientation RMSE for the two methods from 6 experiments are shown in Figure 4. The proposed method achieves better average position RMSE with lower deviation compared to the baseline. A similar trend can be observed in the orientation RMSE. Larger deviations of the baseline method are due to frequent wheel slippage, which results in erroneous velocity measurements. The average RMSE values for the two methods are provided in Table I. Trajectory plots from one such experiment for the two methods are shown in Figure 5. The corresponding error plots with \(3\sigma\) covariance bounds for our proposed method are shown in Figure 6. The plots show that the estimated uncertainty bounds the observed error reliably.
The baseline achieves better tracking performance for trajectories involving rapid turns. This can be attributed to the lower update rate and sparsity of range measurements. Specifically, the wheel encoders provide linear and angular velocities at \(20\,\mathrm{Hz}\). In contrast, range measurements provide a single distance measurement at \(17\,\mathrm{Hz}\). With an increased range update rate, the proposed method should be able to track more aggressive trajectories.
### _3D Localization_
Our test platform for 3D localization is a sensor wand equipped with 3 UWB radios and an IMU. The UWB radios are mounted in a noncollinear manner as shown in Figure 7. We compare our method to the tightly-coupled fusion of UWB and IMU data [6]. Specifically, we combine measurements from a single UWB radio and an IMU using a fixed-lag smoother. We refer to this baseline as UWB+IMU.
_Dynamic trajectories:_ We performed multiple experiments where the sensor wand was moved manually along different trajectories to excite different axes of the IMU. The sensor data was recorded for offline evaluation of the two approaches.
Box plots for axes-wise and cumulative position and orientation RMSE for the two methods from 5 experiments are shown in Figure 8.
Fig. 5: Trajectory estimation results from real experiments for our proposed method and the baseline (UWB+ODOM). The proposed method achieves similar tracking performance in position (left) and yaw angle (right) as the baseline without using wheel encoder data.
Fig. 6: Error plots for position (\(x\) and \(y\)) and yaw angle with the corresponding \(3\sigma\) covariance envelopes for our proposed method from a real experiment. The estimation is unbiased and the estimated uncertainty bounds the observed error reliably.
Fig. 7: Our setup for 3D trajectory estimation is a sensor wand equipped with three ultrawideband (UWB) radios. The wand is also equipped with an inertial measurement unit (IMU), which is used for the baseline algorithm. The maximum length of the lever arm is \(\|\mathbf{p}_{\mathrm{u}}^{z}\|_{2}=0.72\,\mathrm{m}\).
The proposed method achieves a better position RMSE compared to the baseline. However, the baseline achieves a lower orientation RMSE compared to our method. This is expected as _(i)_ the IMU has angular rate measurements along _each_ body axis, and _(ii)_ the gravity vector is an accurate source of roll and pitch angle. However, the yaw angle RMSE of our proposed method is better compared to the baseline. This is because, in a tightly coupled UWB-IMU system, excitation of all the IMU axes is needed before the full pose becomes observable [6]. In contrast, the proposed method does not require explicit excitation but does benefit from it. The average RMSE values for the two methods are provided in Table II. Error plots along with the corresponding \(3\sigma\) covariance envelopes from one of the experiments are shown in Figure 9. The plots show that the observed error is bounded by the estimated uncertainty reliably.
_Sensor dropout:_ An advantage of the proposed continuous-time trajectory estimation method is that it can handle sensor dropouts. Specifically, in the absence of any sensor data, the motion prior is able to constrain the state. However, the baseline method is sensitive to IMU data dropout.
For this setup, we use batch trajectory estimation instead of the FLS. We simulated sensor dropout by removing 5 seconds of measurements from the recorded data. The proposed method estimates the trajectory reliably in this scenario as shown by the error plots in Figure 10. The effect of the sensor dropout is captured by the increased uncertainty around the state during the dropout period.
## VI Discussion
The results show that reliable 2D and 3D pose estimation can be achieved with the proposed method, using only range measurements from multiple range sensors. However, the combination of UWB and IMU outperforms the proposed method for dynamic trajectories.
Figure 8: Individual and cumulative position and orientation RMSE from 3D trajectory estimation experiments for our proposed method and the baseline (UWB+IMU). The proposed method achieves lower yaw angle RMSE as the baseline method requires excitation for yaw to be estimated reliably. The proposed method has a higher roll and pitch angle RMSE due to the poor geometry of the test space along the z-axis and the range sensor separation on the sensor wand.
Figure 9: Error plots for the estimated position and orientation (with the corresponding \(3\sigma\) covariance envelopes) for our proposed method from a real experiment. The estimated uncertainty bounds the observed error.
There are several factors that determine the efficacy of the proposed method in different settings. One of the factors is the bias in the measurements [26]. In this work, the data was preprocessed to remove any constant biases. Any remnant biases can affect the estimation accuracy and hence need to be estimated online. Another factor that affects the estimation accuracy at high speeds is the update rate of the range sensors, which can be addressed using high-bandwidth high-rate sensors. A third factor that influences the estimation performance is the geometry of the installed anchors. Nonetheless, the results are promising considering that the precision of the UWB radios used in the experiment is \(\pm 0.10\,\mathrm{m}\). With higher-precision radio frequency technologies such as millimeter-wave radar, we expect to achieve higher accuracy.
## VII Conclusion
In this work, we presented a continuous-time trajectory estimation method for 2D and 3D pose estimation using only range measurements from multiple range sensors. Through simulation and real experiments, we showed that pose estimation can be done reliably using only range measurements. Additionally, the results show that the proposed method, using off-the-shelf sensors, can achieve comparable performance and in some cases outperform conventional sensor fusion methods that require additional sensors.
There are many avenues for future work. One direction is to use different range-based measurement models such as time-difference-of-arrival [24], which is more scalable. Another future direction is to evaluate different motion models such as the white-noise-on-jerk [27] motion model. Extension of the current method to continuous-time range-only multi-agent relative localization is another prospective future direction.
|
2302.04831 | Cooperative Open-ended Learning Framework for Zero-shot Coordination | Zero-shot coordination in cooperative artificial intelligence (AI) remains a
significant challenge, which means effectively coordinating with a wide range
of unseen partners. Previous algorithms have attempted to address this
challenge by optimizing fixed objectives within a population to improve
strategy or behaviour diversity. However, these approaches can result in a loss
of learning and an inability to cooperate with certain strategies within the
population, known as cooperative incompatibility. To address this issue, we
propose the Cooperative Open-ended LEarning (COLE) framework, which constructs
open-ended objectives in cooperative games with two players from the
perspective of graph theory to assess and identify the cooperative ability of
each strategy. We further specify the framework and propose a practical
algorithm that leverages knowledge from game theory and graph theory.
Furthermore, an analysis of the learning process of the algorithm shows that it
can efficiently overcome cooperative incompatibility. The experimental results
in the Overcooked game environment demonstrate that our method outperforms
current state-of-the-art methods when coordinating with different-level
partners. Our demo is available at https://sites.google.com/view/cole-2023. | Yang Li, Shao Zhang, Jichen Sun, Yali Du, Ying Wen, Xinbing Wang, Wei Pan | 2023-02-09T18:37:04Z | http://arxiv.org/abs/2302.04831v4 | # Cooperative Open-ended Learning Framework for Zero-shot Coordination
###### Abstract
Zero-shot coordination in cooperative artificial intelligence (AI) remains a significant challenge, which means effectively coordinating with a wide range of unseen partners. Previous algorithms have attempted to address this challenge by optimizing fixed objectives within a population to improve strategy or behavior diversity. However, these approaches can result in a loss of learning and an inability to cooperate with certain strategies within the population, known as cooperative incompatibility. To address this issue, we propose the **C**ooperative **O**pen-ended **LE**arning (**COLE**) framework, which constructs open-ended objectives in cooperative games with two players from the perspective of graph theory to assess and identify the cooperative ability of each strategy. We further specify the framework and propose a practical algorithm that leverages knowledge from game theory and graph theory. Furthermore, an analysis of the learning process of the algorithm shows that it can efficiently overcome cooperative incompatibility. The experimental results in the Overcooked game environment demonstrate that our method outperforms current state-of-the-art methods when coordinating with different-level partners. Our code and demo are available at [https://sites.google.com/view/cole-2023/](https://sites.google.com/view/cole-2023/).
Machine Learning, Zero-shot Coordination
## 1 Introduction
Zero-shot coordination (ZSC) is a major challenge of cooperative AI to train agents that have the ability to coordinate with a wide range of unseen partners (Legg and Hutter, 2007; Hu et al., 2020). The traditional method of self-play (SP) (Tesauro, 1994) involves iterative improvement of strategies by playing against oneself. While SP can converge to an equilibrium of the game (Fudenberg and Levine, 1998), the strategies often form specific behaviors and conventions to achieve higher payoffs (Hu et al., 2020). As a result, a fully converged SP strategy may not be adaptable to coordinating with unseen strategies (Lerer and Peysakhovich, 2018; Hu et al., 2020).
To overcome the limitations of SP, most ZSC methods focus on promoting strategic or behavioral diversity by introducing population-based training (PBT) to improve strategies' adaptive ability (Carroll et al., 2019; Canaan et al., 2022; Zhao et al., 2021; Lupu et al., 2021). PBT aims to improve cooperative outcomes with other strategies in the population to promote zero-shot coordination with unseen strategies. This is achieved by maintaining a set of strategies to break the conventions of SP (Tesauro, 1994) and optimizing the rewards for each pair in the population. Most state-of-the-art (SOTA) methods attempt to pre-train a diverse population (Strouse et al., 2021; Lupu et al., 2021) or introduce hand-crafted methods (Canaan et al., 2022; Zhao et al., 2021), which are used to master cooperative games by optimizing fixed objectives within the population. These methods have shown to be efficacious in addressing intricate cooperative tasks such as Overcooked (Carroll et al., 2019) and Hanabi (Bauza, 2010).
However, when optimizing a fixed population-level objective, such as expected rewards within population (Strouse et al., 2021; Lupu et al., 2021; Zhao et al., 2021), the coordination ability of strategies within the population may not be improved. Specifically, while overall performance may improve, the coordination ability within the population may not be promoted in a simultaneous manner. This phenomenon, which we term "_cooperative incompatibility_", highlights the importance of considering the trade-offs between overall performance and coordination ability when attempting to optimize a fixed population-level objective.
In addressing the problem of cooperative incompatibility, we reformulate cooperative tasks as Graphic-Form Games (GFGs). In GFGs, strategies are characterized as nodes, with the weight of the edges between nodes representing the mean cooperative payoffs of the two associated strategies. Additionally, by utilizing sub-graphs of GFGs referred to as preference Graphic-Form Games (P-GFGs), we are able to further profile each node's upper bound cooperative payoff within the graph, enabling us to evaluate cooperative
incompatibility and identify strategies that fail to collaborate. Furthermore, we propose the Cooperative Open-ended LEarning (**COLE**) framework, which iteratively generates a new strategy that approximates the best response to the empirical gamescapes of P-GFGs. We have proved that the COLE framework can converge to the optimal strategy with a Q-sublinear rate when using in-degree centrality as the preference evaluation metric.
Building on the COLE framework, we implement a practical algorithm, COLE\({}_{\mathrm{SV}}\), to address the phenomenon of cooperative incompatibility by combining the **S**hapley **V**alue solution (Shapley, 1971) with our GFG. COLE\({}_{\mathrm{SV}}\) comprises a simulator, a solver, and a trainer, specifically designed to master cooperative tasks with two players. The solver builds on the intuitive Shapley Value solution concept to evaluate the adaptive ability of strategies and calculate the cooperative incompatibility distribution. The trainer aims to approximate the best responses to the cooperative incompatibility distribution mixture in the most recent population. To evaluate the performance of COLE\({}_{\mathrm{SV}}\), we conducted experiments in Overcooked, a cooperative task environment (Carroll et al., 2019). We evaluated the adaptive ability of COLE\({}_{\mathrm{SV}}\) by testing its performance against partners of different skill levels. The middle-level partner is a commonly used behavior cloning model (Carroll et al., 2019), and the expert partners are strategies of current methods, i.e., SP, PBT, FCP, and MEP. The results of the experiments showed that COLE\({}_{\mathrm{SV}}\) outperforms the recent SOTA methods in both evaluation protocols. Additionally, the analysis of GFGs and P-GFGs during the learning process of COLE\({}_{\mathrm{SV}}\) revealed that the framework efficiently overcomes cooperative incompatibility. The contributions in this paper can be summarized as follows.
* We introduce the concept of Graphic-Form Games (GFGs) and Preference Graphic-Form Games (P-GFGs) to intuitively reformulate cooperative tasks, which allows for a more efficient evaluation and identification of cooperative incompatibility during learning.
* We develop the concept of graphic-form gamescapes to help understand the objective and present the COLE framework to iteratively approximate the best responses preferred by most others.
* We prove that the algorithm will converge to the optimal strategy, and the convergence rate will be Q-sublinear when using in-degree preference centrality. Empirical experiments in the game Overcooked verify the proposed algorithm's effectiveness compared to SOTA methods.
## 2 Related Works
**Zero-shot coordination.** The goal of zero-shot coordination (ZSC) is to train a strategy that can coordinate effectively with unseen partners (Hu et al., 2020). Self-play (Tesauro, 1994; Carroll et al., 2019) is a traditional method of training a cooperative strategy, which involves iterative improvement of strategies by playing against oneself, but develops conventions between players and does not cooperate with other unseen strategies (Lerer and Peysakhovich, 2018; Hu et al., 2020). Other-play (Hu et al., 2020) is proposed to break such conventions by adding permutations to one of the strategies. However, this approach may be reduced to self-play if the game or environment does not have symmetries or has unknown symmetries. Another approach is population-based training (PBT) (Jaderberg et al., 2017; Carroll et al., 2019), which trains strategies by interacting with each other in a population. However, PBT does not explicitly maintain diversity and thus fails to coordinate with unseen partners(Strouse et al., 2021).
To achieve the goal of ZSC, recent research has focused on training robust strategies that use diverse populations of strategies (Strouse et al., 2021; Lupu et al., 2021; Zhao et al., 2021). Fictitious co-play (FCP) (Strouse et al., 2021) obtains a population of periodically saved checkpoints during self-play training with different seeds and then trains the best response to the pre-trained population. TrajeDi (Lupu et al., 2021) also maintains a pre-trained self-play population but encourages distinct behavior among the strategies. The maximum entropy population (MEP) (Zhao et al., 2021) method proposes population entropy rewards to enhance diversity during pre-training. It employs prioritized sampling to select challenging-to-collaborate partners to improve generalization to previously unseen policies. Furthermore, methods such as MAZE (Xue et al., 2022) and CG-MAS (Mahajan et al., 2022) have been proposed to improve generalization ability through coevolution and combinatorial generalization. In this paper, we propose a COLE framework that could dynamically identify strategies that fail to coordinate due to cooperative incompatibility and continually poses and optimizes objectives to overcome this challenge and improve adaptive capabilities.
**Open-ended learning.** Another related area of research is open-ended learning, which aims to continually discover and approach objectives (Srivastava et al., 2012; Team et al., 2021; Meier and Mujika, 2022). In MARL, most open-ended learning methods focus on zero-sum games, primarily posing adaptive objectives to expand the frontiers of strategies (Lanctot et al., 2017; Balduzzi et al., 2019; McAleer et al., 2020; Yang et al., 2021; Liu et al., 2021; McAleer et al., 2022). In the specific context of ZSC, the MAZE method (Xue et al., 2022) utilizes open-ended learning by maintaining two populations of strategies and partners and training them collaboratively throughout multiple generations. In each generation, MAZE pairs strategies and partners from the two populations and updates them together by optimizing a weighted sum of rewards and diversity.
This method co-evolves the two populations of strategies and partners based on naive evaluations such as best or worst performance with strategies in partners. Our proposed method, COLE framework, combines GFGs and P-GFGs in open-ended learning to evaluate and identify the cooperative ability of strategies to solve cooperative incompatibility efficiently with theoretical guarantee.
## 3 Preliminaries
**Normal-form Game:** A two-player normal-form game is defined as a tuple \((N,\mathcal{A},\mathbf{w})\), where \(N=\{1,2\}\) is a set of two players, indexed by \(i\), \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\) is the joint action space, and \(\mathbf{w}=(w_{1},w_{2})\) with \(w_{i}:\mathcal{A}\rightarrow\mathbb{R}\) is a reward function for the player \(i\). In a two-player common payoff game, two-player rewards are the same, meaning \(w_{1}(a_{1},a_{2})=w_{2}(a_{1},a_{2})\) for \(a_{1},a_{2}\in\mathcal{A}\).
**Empirical Game-Theoretic Analysis (EGTA), Empirical Games, and Empirical Gamescapes.** EGTA is the study of finding meta-strategies based on experience with prior strategies (Walsh et al., 2002; Tuyls et al., 2018). An empirical game is built by discovering strategies and meta-reasoning about exploring the strategy space (Lanctot et al., 2017). Furthermore, empirical gamescapes (EGS) are introduced to represent strategies in functional-form games geometrically (Balduzzi et al., 2019). Given a population \(\mathcal{N}\) of \(n\) strategies, the empirical gamescape is often defined as \(\mathcal{G}:=\{\text{convex mixture of rows of }\mathcal{M}\}\), where \(\mathcal{M}\) is the empirical payoff table recording the expected outcomes for each joint strategy.
**Shapley Value.** The Shapley Value (Shapley, 1971) is one of the important solution concepts for coalition games (Chalkiadakis et al., 2011; Peleg and Sudholter, 2007). It aims to fairly distribute the collective value of the team, such as its rewards or costs, across individuals according to each player's contribution. Given a coalition game \((\mathcal{N},v)\) with a strategy set \(\mathcal{N}\) and characteristic function \(v\), the Shapley Value of a player \(i\in\mathcal{N}\) can be obtained by
\[SV(i)=\frac{1}{n!}\sum_{\pi\in\Pi_{\mathcal{N}}}\left[v(P_{i}^{\pi}\cup\{i\})-v(P_{i}^{\pi})\right], \tag{1}\]
where \(\pi\) is a one-to-one permutation mapping from \(\mathcal{N}\) to itself in the permutation set \(\Pi_{\mathcal{N}}\), and \(\pi(i)\) is the position of player \(i\in\mathcal{N}\) in permutation \(\pi\). \(P_{i}^{\pi}=\{j\in\mathcal{N}|\pi(j)<\pi(i)\}\) is the set of all predecessors of \(i\) in \(\pi\).
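As a concrete illustration of (1), the short sketch below enumerates all permutations to compute exact Shapley Values for a toy three-player characteristic function; the example game is an assumption for illustration only.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley Values by enumerating all |N|! permutations, cf. Eq. (1)."""
    sv = {i: 0.0 for i in players}
    perms = list(permutations(players))
    for pi in perms:
        coalition = frozenset()
        for i in pi:
            # Marginal contribution of i to its predecessors in this permutation.
            sv[i] += v(coalition | {i}) - v(coalition)
            coalition = coalition | {i}
    return {i: s / len(perms) for i, s in sv.items()}

# Toy characteristic function: the coalition has value 1 only if players 1 and 2 are both in it.
v = lambda C: 1.0 if {1, 2} <= set(C) else 0.0
print(shapley_values([1, 2, 3], v))  # players 1 and 2 each receive 0.5; player 3 receives 0
```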
## 4 Cooperative Open-Ended Learning
In this section, we first introduce graphic-form games to intuitively reformulate cooperative games, then create an open-ended learning framework to solve cooperative incompatibility and further improve zero-shot adaptive ability.
### Graphic-Form Games (GFGs)
To conquer cooperative incompatibility, it is important to evaluate it and identify the failed-to-collaborate strategies. Therefore, we propose graphic-form games (GFGs), a natural development of empirical games (Balduzzi et al., 2019), to reformulate normal-form cooperative games from the perspective of game theory and graph theory. The definition of a GFG is given below.
**Definition 4.1** (Graphic-Form Game).: Given a set of parameterized strategies \(\mathcal{N}=\{1,2,\cdots,n\}\), a two-player graphic-form game (GFG) is a tuple \(\mathcal{G}=(\mathcal{N},\mathbf{E},\mathbf{w})\), which could be represented as a directed weighted graph. \(\mathcal{N},\mathbf{E},\mathbf{w}\) are the set of nodes, edges, and weights, respectively. Given an edge \((i,j)\), \(\mathbf{w}(i,j)\) represents the expected results of \(i\) playing with \(j\). The graphic representation of GFG is called a game graph.
The payoff matrix of \(\mathcal{G}\) is denoted as \(\mathcal{M}\), where \(\mathcal{M}(i,j)=\mathbf{w}(i,j)\), \(\forall i,j\in\mathcal{N}\). Our goal is to raise the upper bound of the cooperative outcomes that the other strategies in the population can achieve, which implies that the new strategy should be preferred by the other strategies.
Moreover, we propose preference graphic-form games (P-GFGs) as an efficient tool to analyze the current learning state, which can profile the degree of preference for each node in GFGs.
Figure 1: The Game Graph, (sub-) preference graph and corresponding preference centrality matrix. The (sub-) preference graphs are for all four iterations in the training process, and the corresponding preference in-degree centrality matrix is based on them. As can be observed in the \(\mathcal{G}_{3}^{\prime}\) and \(\mathcal{G}_{4}^{\prime}\), the newly updated strategies fail to be preferred by others and have centrality values of 1, despite an increase in the mean of rewards with all others. In _(b)_, we illustrate an ideal learning process in which a newly generated strategy can achieve higher outcomes with all previous strategies.
Specifically, a P-GFG is a subgraph of a GFG in which each node retains only the maximum-weight out-edge among all its out-edges, excluding its self-loop. Given a GFG \((\mathcal{N},\mathbf{E},\mathbf{w})\), the P-GFG is defined as \(\mathcal{G}^{\prime}=\{\mathcal{N},\mathbf{E}^{\prime},\mathbf{w}\}\), where \(\mathbf{E}^{\prime}=\{(i,j)\mid j=\arg\max_{j^{\prime}\in\mathcal{N}\backslash\{i\}}\mathbf{w}(i,j^{\prime}),\ i\in\mathcal{N}\}\) is the set of edges. The graphic representation of a P-GFG is called a preference graph.
To deeply investigate the learning process, we further introduce the _sub-preference graphs_ based on P-GFGs, which aim to reformulate previous learning states and analyze the learning behavior of the algorithm. Suppose that there is a set of sequentially generated strategies \(\mathcal{N}_{n}=\{1,2,\cdots,n\}\), where the index also represents the number of iterations for simplicity. For each previous iteration \(i<n\), the sub-preference graph is denoted as \(\{\mathcal{N}_{i},\mathbf{E}_{i}^{\prime},\mathbf{w}_{i}\}\), where \(\mathcal{N}_{i}=\{1,2,\cdots,i\}\) is the set of strategies in iteration \(i\), and \(\mathbf{E}_{i}^{\prime}\) and \(\mathbf{w}_{i}\) are the corresponding edges and weights.
The semantics of the preference graph is that a strategy (node) \(i\) prefers to play with the node its out-edge points to, since that partner yields its highest outcome. In other words, the more in-edges a node has, the stronger its cooperative ability. Ideally, if one strategy can adapt well to all others, all the other strategies in the preference graph will point to this strategy. To evaluate the adaptive ability of each node, the centrality concept is introduced into the preference graph to evaluate how much a node is preferred.
**Definition 4.2** (Preference Centrality).: Given a P-GFG \(\{\mathcal{N},E^{\prime},\mathbf{w}\}\), preference centrality of \(i\in\mathcal{N}\) is defined as,
\[\eta(i)=1-\mathrm{norm}(d_{i}),\]
where \(d_{i}\) is a graph centrality metric to evaluate how the node is preferred, and \(\mathrm{norm}:=\mathbb{R}\rightarrow[0,1]\) is a normalization function.
Note that \(d\) is a centrality measure that evaluates how much a node is preferred. A typical choice of \(d\) is in-degree centrality, which counts how many edges point to the node.
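These constructions are straightforward to compute. The sketch below builds the preference graph from a payoff matrix by keeping each node's maximum-weight out-edge and evaluates the preference centrality of Definition 4.2 with in-degree as \(d\); normalizing by \(n-1\) is one possible choice of the normalization function and, like the example payoffs, is an assumption of this illustration.

```python
import networkx as nx

def preference_graph(M):
    """Build the P-GFG from a payoff matrix M by keeping each node's max-weight out-edge."""
    n = len(M)
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    for i in range(n):
        # Best partner of node i, ignoring the self-loop.
        j = max((k for k in range(n) if k != i), key=lambda k: M[i][k])
        G.add_edge(i, j, weight=M[i][j])
    return G

def preference_centrality(G):
    """eta(i) = 1 - normalized in-degree (Definition 4.2, with in-degree as d)."""
    n = G.number_of_nodes()
    return {i: 1.0 - G.in_degree(i) / (n - 1) for i in G.nodes}

M = [[3, 5, 2],
     [5, 4, 1],
     [2, 1, 6]]
G = preference_graph(M)
print(preference_centrality(G))   # nodes preferred by more strategies get lower eta
```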
Fig. 1 is an example of a common-payoff game, showing the game graph, (sub-)preference graphs, and the preference centrality matrix for four sequentially generated strategies. Note that in the corresponding sub-preference graphs, the updated strategies fail to improve the outcomes of others after the second iteration, and the preference centrality matrix shows the same results. The example exhibits cooperative incompatibility, which manifests as a value of \(\eta\) that stays at 1 in the matrix, meaning no node prefers to collaborate with the updated strategies. Ideally, all other strategies should prefer the latest strategy (Fig. 1 (b)), which would indicate a monotonic improvement of cooperative ability.
Moreover, the analysis of the MEP algorithm, as shown in Fig. 2, reveals cooperative incompatibility in the learning process in the Overcooked environment (Carroll et al., 2019). In the preference in-degree centrality matrix, a strategy is preferred by more strategies if its color is darker. In the learning process of MEP, although the mean rewards keep improving (as shown in the upper-right of Fig. 2), serious cooperative incompatibility problems occur after a period of training, where more strategies prefer to play with certain earlier strategies (shown in darker color) rather than with the new strategies to obtain higher rewards.
### Cooperative Open-Ended Learning Framework
To tackle cooperative incompatibility by understanding the objective, we develop empirical gamescapes (Balduzzi et al., 2019) for GFGs, which geometrically represent strategies in graphic-form games. Given a GFG \(\{\mathcal{N},\mathbf{E},\mathbf{w}\}\), the empirical gamescapes (EGS) is defined as
\[\mathcal{\bar{G}}:=\left\{\text{convex mixture of rows of }\mathcal{M}\right\}. \tag{2}\]
However, learning directly with the EGS to cooperate with these already well-collaborating strategies is inefficient for improving adaptive ability. To conquer cooperative incompatibility, the natural idea is to learn with a mixture given by the cooperative incompatible distribution on the most recent population \(\mathcal{N}\). Given a population \(\mathcal{N}\), we present a _cooperative incompatible solver_ to assess how strategies collaborate, especially with those strategies that are difficult to collaborate with. The solver derives the cooperative incompatible distribution \(\phi\), where strategies that do not coordinate with others have higher probabilities.
We also optimize the cooperative incompatible mixture together with the individual objective, i.e., the cumulative self-play rewards, to improve the adaptive ability with expert partners. To simplify, we name it the individual and cooperative incompatible mixture (IPI mixture). We use an approximate oracle to approach the best response over the IPI mixture. Given strategy \(s_{n}\), the oracle returns a new strategy \(s_{n+1}\): \(s_{n+1}=\mathrm{oracle}(s_{n},\mathcal{J}(s_{n},\phi))\), with \(\eta(s_{n+1})=0\), if
Figure 2: The payoff matrix of each strategy during training and the corresponding preference centrality matrix of the MEP algorithm in the Overcooked. The darker the color in the payoff matrix, the higher the rewards. The darker the color in the preference centrality matrix, the lower the centrality value, and the more other strategies prefer it.
possible. \(\mathcal{J}\) is the objective function as follows,
\[\mathcal{J}(s_{n},\phi)=\mathbb{E}_{p\sim\phi}\mathbf{w}(s_{n},p)+\alpha\mathbf{ w}(s_{n},s_{n}), \tag{3}\]
where \(\alpha\) is the balance hyperparameter. The objective consists of the cooperative compatible objective and the individual objective. The cooperative compatible objective aims to train the best response to those failed-to-collaborate strategies, and the individual objective aims to improve the adaptive ability with expert partners. We call the best response the best-preferred strategy if \(\eta(s_{n+1})=0\).
However, arriving at the best-preferred strategy with \(\eta(s_{n+1})=0\) is hard or even impossible. Therefore, we seek to approximate the best-preferred strategies by relaxing the best strategy to the strategy whose preference centrality ranks top \(k\). The approximate oracle could be rewritten as \(s_{n+1}=\mathrm{oracle}(s_{n},\mathcal{J}(s_{n},\phi))\), with \(\mathcal{R}(\eta(s_{n+1}))>k\).
We extend the approximated oracle to open-ended learning and propose the COLE framework (Fig. 3). The COLE framework iteratively updates new strategies that approximate the best-preferred strategies for the cooperative incompatible mixture and the individual objective. The simulator completes the payoff matrix with the newly generated strategy and others in the population. The solver, which consists of a game graph builder and a cooperative-incompatible solver, derives the cooperative incompatible distribution. The trainer uses the oracle to approximate the best-preferred strategy for the cooperative incompatible mixture and individual objective and outputs a newly generated strategy, which is added to the population for the next generation.
Although we relax the best-preferred strategy to the strategy in the top \(k\) centrality in the constraint, COLE framework still converges to a local best-preferred strategy with zero preference centrality. Formally, the local best-preferred strategy convergence theorem is given as follows.
**Theorem 4.3**.: _Let \(s_{0}\in\mathcal{S}\) be the initial strategy and \(s_{i}=\mathrm{oracle}(s_{i-1})\) for \(i\in\mathbb{N}\). Under the assumption that \(\lim_{n\to\infty}\mathcal{J}(s_{n},\phi)\geq\mathcal{J}(s_{n-1},\phi)\) holds, we can say that the sequence \(\{s_{i}\}\) for \(i\in\mathbb{N}\) converges to a local optimal strategy \(s^{*}\), i.e., the local best-preferred strategy._
Proof.: See Appendix B.
Besides, if we choose in-degree centrality as the preference centrality function, the convergence rate of COLE framework is Q-sublinear.
**Corollary 4.4**.: _Let \(\eta:\mathcal{G}^{\prime}\to\mathbb{R}^{n}\) be a function that maps a P-GFG to its in-degree centrality, the convergence rate of the sequence \(\{s_{i}\}\) is Q-sublinear concerning \(\eta\)._
Proof.: See Appendix C.
## 5 Practical Algorithm
To address common-payoff games with two players, we implemented COLE\({}_{\mathrm{SV}}\), where SV refers to _Shapley Value_, based on the COLE framework, which can overcome cooperative incompatibility and improve zero-shot coordination capabilities, focusing on the solver and trainer components. As shown in Fig. 3, at each generation, COLE\({}_{\mathrm{SV}}\) inputs a population \(\mathcal{N}\) and generates an approximate best-preferred strategy added to \(\mathcal{N}\) to expand the population. The simulator calculates the payoff matrix \(\mathcal{M}\) for the input population \(\mathcal{N}\). Each element \(\mathcal{M}(i,j)\) for \(i,j\in\mathcal{N}\) represents the cumulative rewards of the players \(i\) and \(j\) at both starting positions. The solver evaluates and identifies failed-to-collaborate strategies by calculating the cooperative incompatible distribution. To effectively evaluate the cooperative ability of each strategy with all others, we incorporate weighted PageRank (WPG) (Xing & Ghorbani, 2004) from graph theory into the Shapley Value to evaluate adaptability, particularly with failed-to-collaborate strategies. The trainer then approximates the best-preferred strategy over the recent population.
### Solver: Graphic Shapley Value
To approximate the best-preferred strategies over the recent population and overcome cooperative incompatibility, we need to calculate the cooperative incompatible distribution as the mixture. In this paper, we combine the Shapley
Figure 3: An overview of one generation in COLE framework: The solver derives the cooperative incompatible distribution \(\phi\) using a cooperative incompatibility solver, which can be any algorithm that evaluates cooperative contribution. The trainer then approximates the relaxed best response by optimizing individual and cooperative compatible objectives. The oracleβs training data is generated using partners selected based on the cooperative incompatibility distribution and the agentβs strategy. Finally, the approximated strategy \(s_{n+1}\) is added to the population, and the next generation begins.
Value (Shapley, 1971) solution, an efficient single solution concept for cooperative games to assign the obtained team value across individuals, with our GFG to evaluate and identify the strategies that did not cooperate. To apply the Shapley Value, we define an additional characteristic function to evaluate the value of the coalition. Formally, given a coalition \(C\subseteq\mathcal{N}\), we have the following: \(v(C)=\mathbb{E}_{i\sim C,j\sim C}\sigma(i)\sigma(j)\mathbf{w}(i,j),\) where \(\sigma\) is a mapping function that evaluates how badly a node performs on its game graph. We use the characteristic function to evaluate the coalition value of how it could cooperate with those hard-to-collaborate strategies.
We take the inverse of WPG (Xing & Ghorbani, 2004) on the game graph as the metric \(\sigma\). WPG is proposed to assess the popularity of a node in a complex network. The formula of WPG is given as follows: \(\hat{\sigma}(u)=(1-d)+d\sum_{v\in B(u)}\hat{\sigma}(v)\frac{I_{u}}{\sum_{p\in R(v)}I_{p}}\frac{O_{u}}{\sum_{p\in R(v)}O_{p}},\) where \(d\) is the damping factor set to \(0.85\), \(B(u)\) is the set of nodes that point to \(u\), \(R(v)\) denotes the nodes to which \(v\) is linked, and \(I,O\) are the in-degree and out-degree of a node, respectively. Therefore, the metric \(\sigma\) evaluates how unpopular a node is and is equal to the inverse of the WPG value \(\hat{\sigma}\).
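As an illustrative sketch, the WPG score \(\hat{\sigma}\) can be obtained by fixed-point iteration of the formula above, and the characteristic function \(v\) then combines \(\sigma=1/\hat{\sigma}\) with the payoffs \(\mathbf{w}\); the data structures (edge list, payoff dictionary), the iteration count, and the assumption that every node has at least one out-edge are simplifications of this example.

```python
def weighted_pagerank(nodes, edges, d=0.85, iters=50):
    """Fixed-point iteration of the WPG formula; edges is a list of directed (v, u) pairs."""
    succ = {v: [u for (vv, u) in edges if vv == v] for v in nodes}   # R(v)
    pred = {u: [v for (v, uu) in edges if uu == u] for u in nodes}   # B(u)
    in_deg = {u: len(pred[u]) for u in nodes}
    out_deg = {u: len(succ[u]) for u in nodes}
    sigma_hat = {u: 1.0 for u in nodes}
    for _ in range(iters):
        sigma_hat = {
            u: (1 - d) + d * sum(
                sigma_hat[v]
                * in_deg[u] / max(sum(in_deg[p] for p in succ[v]), 1)
                * out_deg[u] / max(sum(out_deg[p] for p in succ[v]), 1)
                for v in pred[u])
            for u in nodes
        }
    return sigma_hat

def v_coalition(C, sigma, w):
    """Characteristic function v(C): expected sigma-weighted payoff within coalition C.

    sigma[i] = 1 / sigma_hat[i]; w is a payoff dictionary keyed by ordered pairs (i, j).
    """
    pairs = [(i, j) for i in C for j in C]
    return sum(sigma[i] * sigma[j] * w[(i, j)] for i, j in pairs) / len(pairs)
```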
Then we calculate the Shapley Value of each node by plugging this characteristic function into Equation 1; we call the result the graphic Shapley Value. We utilize Monte Carlo permutation sampling (Castro et al., 2009) to approximate the Shapley Value, which reduces the computational complexity from exponential to linear time. After inverting the graphic Shapley Values, we obtain the cooperative incompatible distribution \(\phi\), where strategies that fail to collaborate with others have higher probabilities. We provide the Graphic Shapley Value algorithm in Appendix D.
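The Monte Carlo permutation sampling mentioned above can be sketched in a few lines: instead of enumerating all \(n!\) permutations as in the exact computation shown earlier, a fixed number of random permutations is sampled and the marginal contributions are averaged. The sample size and function names are assumptions of this illustration.

```python
import random

def mc_shapley(players, v, num_samples=200, seed=0):
    """Monte Carlo estimate of Shapley Values via sampled permutations (players is a list)."""
    rng = random.Random(seed)
    sv = {i: 0.0 for i in players}
    for _ in range(num_samples):
        pi = players[:]
        rng.shuffle(pi)
        coalition = set()
        for i in pi:
            # Marginal contribution of i to its predecessors in the sampled permutation.
            sv[i] += v(coalition | {i}) - v(coalition)
            coalition.add(i)
    return {i: s / num_samples for i, s in sv.items()}
```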
### Trainer: Approximating Best-preferred Strategy
The trainer takes the cooperative incompatible distribution \(\phi\) as input and samples its teammates to learn to approach the best-preferred strategy on the IPI mixture.
Recall the oracle for \(s_{n}\): \(s_{n+1}=\mathrm{oracle}(s_{n},\mathcal{J}(s_{n},\phi)),\) with \(\mathcal{R}(\eta(s_{n+1}))>k\). COLE\({}_{\mathrm{SV}}\) aims to optimize the best-preferred strategy over the IPI mixture. \(\mathcal{J}(s_{n},\phi)\) is the joint objective that consists of the individual and cooperative compatible objectives. The individual objective aims to improve self-play performance and promote the adaptive ability with expert partners, formulated as follows: \(\mathcal{J}_{i}(s_{n})=\mathbf{w}(s_{n},s_{n}),\) where \(s_{n}\) is the ego strategy to be optimized in generation \(n\).
The cooperative compatible objective aims to improve cooperative outcomes with those failed-to-collaborate strategies: \(\mathcal{J}_{c}=\mathbb{E}_{p\sim\phi}\mathbf{w}(s_{n},p),\) i.e., the expected reward of \(s_{n}\) with partners drawn from the cooperative incompatible distribution. \(\mathbf{w}\) estimates and records the mean cumulative rewards over multiple trajectories and starting positions. The expectation can be approximated as: \(\mathcal{J}_{c}=\sum_{p\sim\phi}^{b}\phi(p)\mathbf{w}(s_{n},p),\) where \(b\) is the number of sampling times.
To balance exploitation and exploration as learning continues, we present the Sampled Upper Confidence Bound for Game Graph (SUCG), which combines the Upper Confidence Bound (UCB) and the GFG to steer sampling towards strategies with higher probabilities or newly added strategies. Additionally, we view the SUCG value as the probability of sampling teammates, instead of taking the maximum item as in typical UCB algorithms. Specifically, in the game graph, we keep track of the number of times each node has been visited. Therefore, the probability of each node, denoted as \(\hat{\phi}\), considers both the Shapley Value and the visit counts. The SUCG for any node \(u\) in \(\mathcal{N}\) can be calculated as follows:
\[\hat{\phi}(u)=\phi(u)+c\frac{\sqrt{\sum_{i\in\mathcal{N}}\mathbf{N}(i)}}{1+ \mathbf{N}(u)}, \tag{4}\]
where \(c\) is a hyperparameter that controls the degree of exploration and \(\mathbf{N}(i)\) is the visit count of node \(i\). SUCG efficiently prevents \(\text{COLE}_{\text{SV}}\) from generating data with only a few fixed uncooperative strategies, which could lead to a loss of adaptive ability.
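A small sketch of how Eq. (4) could drive partner sampling is given below; the dictionary-based representation of \(\phi\) and the visit counts, and the renormalisation into sampling probabilities, are our assumptions for illustration only.

```python
import math
import random

# Sketch of SUCG-based teammate sampling: add the exploration bonus of Eq. (4)
# to phi, renormalise, sample partners, and update the visit counts.
def sucg_sample(phi, visits, c=1.0, k=1):
    total_visits = sum(visits.values())
    scores = {u: phi[u] + c * math.sqrt(total_visits) / (1 + visits[u]) for u in phi}
    z = sum(scores.values())
    nodes = list(phi)
    partners = random.choices(nodes, weights=[scores[u] / z for u in nodes], k=k)
    for p in partners:
        visits[p] += 1            # bookkeeping for the next sampling round
    return partners
```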
We summarize \(\text{COLE}_{\text{SV}}\) in Algorithm 1. Moreover, to verify the influence of different ratios of the two objectives, we denote \(\text{COLE}_{\text{SV}}\) with different ratios as 0:4, 1:3, 2:2, and 3:1. Specifically, \(\text{COLE}_{\text{SV}}\) with \(a:b\) represents different partner sampling ratios for the combined objective, where \(a\) is the number of times data is generated using self-play for the individual objective, and \(b\) is the number of sampling times in \(\mathcal{J}_{c}\). For example, \(\text{COLE}_{\text{SV}}\) 1:3 trains by using self-play once and sampling from the cooperative incompatible distribution as partners three times to generate data and update the objectives.
## 6 Experiments
### Environment and Experimental Setting
In this paper, we conduct a series of experiments in the Overcooked environment (Carroll et al., 2019; Charakorn et al., 2020; Knott et al., 2021). The details of the Overcooked environment can be found in Appendix E. We construct evaluations with different ratios between the individual and cooperative compatible objectives, such as 0:4, 1:3, 2:2, and 3:1. These studies demonstrate the effectiveness of optimizing both the individual and cooperative compatible objectives. We also compare our method with other methods, including self-play (Tesauro, 1994; Carroll et al., 2019), PBT (Jaderberg et al., 2017; Carroll et al., 2019), FCP (Strouse et al., 2021), and MEP (Zhao et al., 2021), all of which use PPO (Schulman et al., 2017) as the RL algorithm. To thoroughly assess ZSC ability, we evaluate the algorithms with unseen middle-level and expert partners. We use the human proxy model \(H_{proxy}\) proposed by Carroll et al. (2019) as the middle-level partner and the models trained with the baselines and \(\text{COLE}_{\text{SV}}\) as expert partners. The mean of the rewards is recorded as the performance of each method in collaborating with expert teammates. In the case study, we analyze the learning process of \(\text{COLE}_{\text{SV}}\), which shows that our method overcomes cooperative incompatibility. Furthermore, we visualize the trajectories with different ratios and play with expert teammates to analyze how the ratios affect the learned strategies. Appendix F and Appendix G give details of the implementation of \(\text{COLE}_{\text{SV}}\) and the baselines.
### Combining Objectives' Effectiveness Evaluation
This section evaluates the effectiveness of the two objectives under different ratios: 0:4, 1:3, 2:2, and 3:1. We divided each training batch into four parts, with the ratio indicating the proportion of data generated by self-play versus data generated by playing with strategies from the cooperative incompatible distribution. We omitted the 4:0 ratio as it would result in the framework degenerating into self-play. Fig. 4 shows the mean rewards of episodes over 400 time steps of gameplay when paired with the unseen middle-level partner \(H_{proxy}\) (Carroll et al., 2019). We found that \(\text{COLE}_{\text{SV}}\) with ratios 0:4 and 1:3 achieved better performance than the other ratios. In particular, \(\text{COLE}_{\text{SV}}\) with a ratio of 1:3 outperformed the other methods in the Cramped Room, Coordination Ring, and Counter Circuit layouts. On the Forced Coordination layout, which is particularly challenging for cooperation due to the separated regions, all four ratios performed similarly on average across different starting positions. Interestingly, \(\text{COLE}_{\text{SV}}\) with only the cooperative compatible objective (ratio 0:4) performed better on the Asymmetric Advantages and Forced Coordination layouts when paired with the middle-level partner. We discuss this phenomenon further in Section 6.3. The effectiveness evaluations indicate that combining the individual and cooperative compatible objectives is crucial to improving performance with unseen partners. Overall, we choose the ratio of 1:3 as the best choice.
### Evaluation with Different Levels of Partners
To thoroughly evaluate the zero-shot cooperative ability of all methods, we adopted two evaluation protocols. The first protocol involves playing with a human model \(H_{proxy}\) trained by behavior cloning. However, because the quality and quantity of the human data used for behavior cloning are limited, the capabilities of the human proxy model can only be classified as middle-level. Therefore, we use an additional evaluation protocol that coordinates with unseen expert partners. We selected the best models of our reproduced baselines and \(\text{COLE}_{\text{SV}}\) 0:4 and 1:3 as expert partners.
Fig. 5 presents the performance of SP, PBT, MEP, and \(\text{COLE}_{\text{SV}}\) with 0:4 and 1:3 when cooperating with middle-level partners. We observed that different starting positions on the left and right in asymmetric layouts resulted in significant performance differences for the baselines. For example, in the Asymmetric Advantages, the cumulative rewards of all baselines in the left position were nearly one-third of those in the right position. On the contrary, \(\text{COLE}_{\text{SV}}\) performed well at the left and right positions.
Figure 4: The result of the combining objectivesβ effectiveness evaluation. Mean episode rewards over 400 timesteps trajectories for \(\text{COLE}_{\text{SV}}\) s with different objective ratios 0:4, 1:3, 2:2, and 3:1, paired with the unseen middle-level partner \(H_{proxy}\). The gray bars behind present the rewards of self-play.
Figure 5: Performance with middle-level partners. The performance of \(\text{COLE}_{\text{SV}}\) with middle-level partners is presented in terms of mean episode rewards over 400 timesteps trajectories for different objective ratios of 0:4 and 1:3, when paired with the unseen middle-level partner \(H_{proxy}\). The results include the mean and standard error over five different random seeds. The gray bars indicate the rewards obtained when playing with themselves; the hashed bars indicate the performance when starting positions are switched.
As shown in Fig. 5, \(\text{COLE}_{\text{SV}}\) outperforms the other methods in all five layouts when paired with the middle-level human proxy model. Interestingly, \(\text{COLE}_{\text{SV}}\) 0:4, with only the cooperative compatible objective, achieves better performance than \(\text{COLE}_{\text{SV}}\) 1:3 on some layouts, such as Asymmetric Advantages. However, the self-play rewards of \(\text{COLE}_{\text{SV}}\) 0:4 are much lower than those of \(\text{COLE}_{\text{SV}}\) 1:3 and even the other baselines. Furthermore, the performance of \(\text{COLE}_{\text{SV}}\) 0:4 with unseen experts, as shown in Table 1, is sometimes lower than that of the baselines. We visualize the trajectories from the expert-level evaluation and provide further analysis to explain this situation in Appendix H.
Table 1 presents the outcomes of each method when cooperating with expert partners. Each column in the table represents a different expert group, including the four baselines and one \(\text{COLE}_{\text{SV}}\) with a ratio of 0:4 or 1:3. The last column, labeled "COLEs," reports the mean rewards of the corresponding \(\text{COLE}_{\text{SV}}\) when working with the other baselines. The table displays the mean cumulative rewards of each method when working with all other models in the expert group. The results indicate that \(\text{COLE}_{\text{SV}}\) 1:3 outperforms the baselines and \(\text{COLE}_{\text{SV}}\) 0:4, except in the Asymmetric Advantages layout. In Asymmetric Advantages, \(\text{COLE}_{\text{SV}}\) 0:4 only achieved a four-point victory over \(\text{COLE}_{\text{SV}}\) 1:3, which can be considered insignificant given the margin of error. In the other four layouts, the rewards obtained by \(\text{COLE}_{\text{SV}}\) 1:3 while working with expert partners are significantly higher than those of \(\text{COLE}_{\text{SV}}\) 0:4 and the baselines.
Our results suggest that \(\text{COLE}_{\text{SV}}\) 1:3 has a stronger adaptive ability with different levels of partners. Furthermore, individual objectives are crucial in zero-shot coordination with expert partners. In conclusion, \(\text{COLE}_{\text{SV}}\) 1:3 is more robust and flexible in real-world scenarios when working with partners of different levels.
### Effectively Conquer Cooperative Incompatibility
In our analysis of the learning process of \(\text{COLE}_{\text{SV}}\) 1:3 in the Overcooked environment, as shown in Fig. 6, we observe that the method effectively overcomes the problem of cooperative incompatibility. The figure on the left in Fig. 6 shows the payoff matrix of 50 uniformly sampled checkpoints during training, with the upper left corner representing the starting point of training. Darker red elements in the payoff matrix indicate higher rewards. The figure on the right displays the centrality matrix of preferences, which is calculated by analyzing the learning process. Unlike the payoff matrix, the darker elements in the centrality matrix indicate lower values, indicating that more strategies prefer them in the population. As shown in the figure, the darker areas cluster around the diagonal of the preference centrality matrix, indicating that most of the others prefer the updated strategy of each generation. Thus, we can conclude that our proposed \(\text{COLE}_{\text{SV}}\) effectively overcomes the problem of cooperative incompatibility.
## 7 Conclusion
In this paper, we propose graphic-form games and preference graphic-form games to intuitively reformulate cooperative games, which can efficiently evaluate and identify cooperative incompatibility. Furthermore, we develop empirical gamescapes for GFG to understand the objectives and present the COLE framework to iteratively approximate the best response preferred by most others over the most recent population. Theoretically, we prove that the COLE framework converges to the optimal strategy preferred by all others. Furthermore, if we choose the in-degree centrality as the preference centrality function, the convergence rate is Q-sublinear. Empirically, our experiments in the Overcooked environment show that our algorithm \(\text{COLE}_{\text{SV}}\) outperformed state-of-the-art methods and that \(\text{COLE}_{\text{SV}}\) efficiently overcame cooperative incompatibility. We include limitations and future work in Appendix I.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Layout**} & \multirow{2}{*}{**Ratio**} & \multicolumn{4}{c}{**Baselines**} & \multirow{2}{*}{**COLEs**} \\ \cline{3-6} & & **SP** & **PBT** & **FCP** & **MEP** & \\ \hline \multirow{2}{*}{**Cramped Rm.**} & 0:4 & 153.00 & 198.50 & 199.83 & 178.83 & 169.76 \\ & 1:3 & 165.67 & 209.83 & 207.17 & 196.83 & **212.80** \\ \hline \multirow{2}{*}{**Asym. Adv.**} & 0:4 & 108.17 & 164.83 & 175.50 & 179.83 & **182.80** \\ & 1:3 & 108.17 & 161.50 & 172.17 & 179.83 & 178.80 \\ \hline \multirow{2}{*}{**Coord. Ring**} & 0:4 & 132.00 & 106.83 & 142.67 & 130.67 & 118.08 \\ & 1:3 & 133.33 & 158.83 & 144.00 & 124.67 & **166.32** \\ \hline \multirow{2}{*}{**Forced Coord.**} & 0:4 & 58.33 & 61.33 & 50.50 & 79.33 & 46.40 \\ & 1:3 & 61.50 & 70.33 & 62.33 & 38.00 & **86.40** \\ \hline \multirow{2}{*}{**Counter Circ.**} & 0:4 & 44.17 & 48.33 & 60.33 & 21.33 & 90.72 \\ & 1:3 & 65.67 & 64.00 & 46.50 & 76.67 & **195.84** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance with expert partners. Mean episode rewards over 1 min trajectories for baselines and \(\text{COLE}_{\text{SV}}\) with ratio 0:4, 1:3. Each column represents a different expert group, in which the result is the mean reward for each model playing with all others.
Figure 6: The learning process analysis of \(\text{COLE}_{\text{SV}}\) 1:3. The darker-colored element on the left represents higher rewards, while the darker-colored element on the right represents lower centrality. The clustering of darker-colored areas around the diagonal on the right indicates that the new strategy adopted in each generation is preferred by most strategies, thus overcoming the cooperative incompatibility. |
2304.12305 | Downscaling Epidemiological Time Series Data for Improving Forecasting
Accuracy: An Algorithmic Approach | Data scarcity and discontinuity are common occurrences in the healthcare and
epidemiological dataset and often need help in forming an educative decision
and forecasting the upcoming scenario. Often, these data are stored as
monthly/yearly aggregate where the prevalent forecasting tools like
Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive
Integrated Moving Average (SARIMA), and TBATS often fail to provide
satisfactory results. Artificial data synthesis methods have been proven to be
a powerful tool for tackling these challenges. The paper aims to propose a
downscaling data algorithm based on the underlying distribution. Our findings
show that the synthesized data is in agreement with the original data in terms
of trend, seasonality, and residuals, and the synthesized data provides a
stable foothold for the forecasting tools to generate a much more accurate
forecast of the situation. | Mahadee Al Mobin, Md. Kamrujjaman | 2023-03-26T01:09:48Z | http://arxiv.org/abs/2304.12305v1 | Downscaling Epidemiological Time Series Data for Improving Forecasting Accuracy: An Algorithmic Approach
###### Abstract
Data scarcity and discontinuity are common occurrences in the healthcare and epidemiological dataset and often need help in forming an educative decision and forecasting the upcoming scenario. Often, these data are stored as monthly/yearly aggregate where the prevalent forecasting tools like Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive Integrated Moving Average (SARIMA), and TBATS often fail to provide satisfactory results. Artificial data synthesis methods have been proven to be a powerful tool for tackling these challenges. The paper aims to propose a downscaling data algorithm based on the underlying distribution. Our findings show that the synthesized data is in agreement with the original data in terms of trend, seasonality, and residuals, and the synthesized data provides a stable foothold for the forecasting tools to generate a much more accurate forecast of the situation.
Epidemiology, Downscaling Algorithm, Temporal Downscaling, ARIMA, Fourier-ARIMA, Dengue.
## 1 Introduction
Any process that involves deriving high-resolution data from low-resolution variables is referred to as downscaling. This method relies on dynamical or statistical approaches and is extensively utilized in the fields of meteorology, climatology, and remote sensing [1, 2]. Significant exploration of downscaling methods has been done in the fields of geology and climatology to enhance the output of existing models like the General Circulation Model (GCM) [3, 4, 5, 6, 7, 8], Regional Climate Model (RCM) [9], Integrated Grid Modeling System (IGMS) [10], and System Advisor Model (SAM) [10], and to make them usable for forecasts over geographically significant regions and times. Several methods have been used to downscale these data, such as BCC/RCG-Weather Generators (BCC/RCG-WG) [11, 12, 13], the Statistics Downscaling Model (SDSM) [11, 14, 15, 16, 17, 18, 19], and Bayesian Model Averaging (BMA) [20]. Machine learning methods have also been used, like the Genetic Algorithm (GA) [9], K Nearest Neighbourhood Resampling (KNNR) [9], and the Support Vector Machine (SVM) [11, 21, 22, 23]. Except for the machine learning algorithms, which are finding applications in new domains, the rest of the methods are tailored to suit the outputs of the models mentioned earlier.
This class of methods has recently been applied in the deaggregation of spatial epidemiological data [24]. But significant work has yet to be done for the temporal downscaling of epidemiological
data. Often, the temporal downscaling techniques used are classical interpolation techniques that do not do justice to aggregated data. This phenomenon can be well illustrated with an example. Consider the case of the monthly Dengue infection data of 2017 from Figure 15, which has been downscaled using linear interpolation by considering the aggregated value as the value of the end date of a month in Figure 16. In this case, if we consider the monthly aggregate of the downscaled data, it does not match the original aggregate. Such downscaled data, which differs from the original data in these statistical measures, will result in decisions and knowledge that can be far from the truth.
This paper proposes a novel algorithm, the Mahadee-Kamrujjaman Downscaling (MKD) algorithm, based on a Bayesian approach, which can regenerate downscaled temporal time series of varying lengths from aggregated data while preserving most of the statistical characteristics and the aggregated sum of the original data.
The paper is organized as follows. Section 2 describes the data used for the paper and its sources. Section 3 discusses the methodology at length with the proposed MKD algorithm. Section 4 compares the synthesized data with the actual data of two different epidemiological cases (Dengue and COVID-19) in Bangladesh, shows how the MKD algorithm can generate statistically accurate approximations of the actual data with very little input in both cases, and discusses the benchmark metrics used for evaluating the output. Section 5 shows the improvement in forecasting accuracy obtained by using synthesized data instead of aggregated data with a statistical forecasting toolbox in the dengue scenario of Bangladesh, using the last 12 years of monthly aggregated data, along with the forecasting model selection procedures and residuals. Finally, section 6 concludes with an overview of the paper, its contribution to the existing literature, and the scope for improvements and fields of application of the MKD algorithm.
## 2 Data
The dengue data from Bangladesh used in this paper are from January 2010 to July 2022 and are collected from DGHS [25], and IEDCR [26]. The COVID-19 data of Bangladesh are from 8 March 2020 to December 2020 and are collected from the WHO data repository [27].
## 3 Methodology
The MKD algorithm can be segmented into three sequential parts, as exhibited in Figure 1. Initially, the algorithm considers a prior distribution to generate synthetic downscaled data. The MKD algorithm considers the aggregated data as the prior distribution of the downscaled data. For example, if we have the monthly epidemiological data of dengue for the year 2017, then to attain the prior distribution for the downscaled data, we divide the data by 30. The fact is well illustrated in Figures 15 and 16 attached in Appendix A. Figure 15 depicts the monthly distribution of the DENV (Dengue Virus) infection in Bangladesh for the year 2017, and Figure 16 represents the prior distribution obtained by the method described above.
Based on the prior distribution, the initial statistical properties of the synthetic data are obtained, except for the standard deviation (\(\sigma\)). As \(\sigma\) is scaling independent, the scaling method used to obtain the prior distribution from the monthly aggregate keeps \(\sigma\) identical to that of the monthly aggregate. To overcome this problem, we consider,
\[\sigma_{0}=\frac{\sigma_{prior\,distribution}}{30} \tag{3.1}\]
where \(\sigma_{0}\) is the standard deviation considered for the distribution to be fitted to generate the downscaled data by the algorithm and \(\sigma_{prior\,distribution}\) is the standard deviation of the obtained prior distribution. Later on, in section 4, we will see that the initial assumption of the standard deviation considered in (3.1) is a good approximation for the downscaled data.
### Initial Data Generation
The _"Initial Data Generator"_ phase feeds on the aggregated data, length of the aggregate interval, and \(\sigma_{0}\) to give an initial downscaled data based on a "Distribution Generator". Based on the prior distribution, a proper statistical probability distribution (PD) is to be considered to be fitted to generate the data. The "Distribution Generator" aims to fit the selected PD to the prior distribution based on the statistical properties obtained for the initial phase. The challenge not only in this scenario but also in every step of the algorithm is to ensure that the synthetic data produced in every step is non-negative integers, as we are dealing with epidemiological data. Thus specific measures have been deployed to tackle these challenges which are:
* To ensure non-negativity, consider the transformation: \[\hat{\mathbf{y}}=\mathbf{y}+|\min(\mathbf{y})|\]
Figure 1: Flow diagram of the MKD algorithm.
* To ensure that the data points are integers irrespective of the selected PD, we round the data to the nearest integer and subtract one from randomly selected data points in each aggregated unit such that the synthesized data has the same sum as the aggregated unit
By imposing these measures, the "Distribution Generator" generates a synthetic distribution for each aggregated unit. Looping over the entire aggregated timeline then generates the initial distribution of the downscaled data with respect to the aggregated data. This initial distribution is a suitable approximation to the actual data but can be improved with further refinement. The synthetic data aggregates exactly back to the data from which it was generated.
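A minimal Python sketch of such a "Distribution Generator" is shown below, assuming a normal underlying distribution; the function name, the 30-day unit length, and the sum-adjustment loop are illustrative choices rather than the authors' implementation.

```python
import numpy as np

# Sketch of the "Distribution Generator": draw one aggregated unit's worth of
# daily values from a normal prior, enforce non-negative integers, and adjust
# so that the values sum exactly to the given aggregate.
def distribution_generator(aggregate, sigma, n_days=30, rng=None):
    rng = rng or np.random.default_rng()
    mean = aggregate / n_days                  # value of the prior distribution
    y = rng.normal(mean, sigma, n_days)
    if y.min() < 0:                            # shift to ensure non-negativity
        y = y + abs(y.min())
    y = np.rint(y).astype(int)
    diff = int(y.sum()) - int(aggregate)       # restore the aggregated sum
    while diff != 0:
        idx = rng.integers(n_days)
        if diff > 0 and y[idx] > 0:
            y[idx] -= 1
            diff -= 1
        elif diff < 0:
            y[idx] += 1
            diff += 1
    return y
```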
### Overthrow Correction
This step is often necessary for time series data with abrupt changes in gradient, or when the initial approximation contains abnormally large overthrows, since the approximations are probabilistic. For data with abrupt gradient changes, the initial approximation is often left with a staircase-like structure, as exhibited in Figure 2. The problem can be corrected using the overthrow correction measure, which is demonstrated in Figure 3.
The overthrow correction step takes a tolerance \(\delta\), an iteration limit n, and a radius of an open interval r. The step first determines overthrows using the tolerance between two neighboring points, i.e., if \(y_{i}-y_{i-1}>\delta\) or \(y_{i}-y_{i+1}>\delta\), then \(y_{i}\) is an overthrow. After identifying an overthrow, we consider an open interval of radius r around the overthrow point and execute the distribution generator on that open interval. This redistributes the samples within the open interval, diminishing the overthrow to some extent. This process is iterated n times over the entire time series to ensure satisfactory results. The strength of the overthrow correction step is dictated by the two parameters \(\delta\) and n: it is directly proportional to n and inversely proportional to \(\delta\). Selecting correct parameter values can ensure a good approximation of the real-life scenario.
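Building on the distribution generator sketched above, the overthrow correction could look as follows; the boundary handling and parameter defaults are our assumptions.

```python
import numpy as np

# Sketch of overthrow correction: detect points that jump by more than delta
# relative to a neighbour and re-run the distribution generator on an open
# interval of radius r around each such point, repeated n_iter times.
def overthrow_correction(y, delta, r=3, n_iter=100, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    y = np.array(y, dtype=int)
    for _ in range(n_iter):
        for i in range(1, len(y) - 1):
            if y[i] - y[i - 1] > delta or y[i] - y[i + 1] > delta:   # overthrow
                lo, hi = max(0, i - r), min(len(y), i + r + 1)
                window_sum = int(y[lo:hi].sum())
                y[lo:hi] = distribution_generator(window_sum, sigma,
                                                  n_days=hi - lo, rng=rng)
    return y
```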
Figure 2: Initial approximation without overthrow correction exhibits a staircase like property due to higher gradient change of the prior distribution.
### Volume Correction
Because of its local correction property, the overthrow correction disrupts the property of the synthesized time series that its aggregated sum equals the given aggregated distribution. The scenario is best illustrated in Table 1. This problem is addressed in this step. To keep the aggregated sum equal to the original data, we consider each aggregated unit and adjust its sum accordingly, adding/subtracting 1 at randomly chosen indices until the sum matches as required.
Figure 3: Initial approximation with overthrow correction exhibits a much proper approximation of the real case scenario preserving its original trend.
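Continuing the same sketch, the volume correction can be written as below; the fixed 30-day unit length and the random index choice are illustrative assumptions.

```python
import numpy as np

# Sketch of volume correction: nudge each aggregated unit by +/-1 at random
# indices until its sum again matches the original aggregate.
def volume_correction(y, aggregates, unit_len=30, rng=None):
    rng = rng or np.random.default_rng()
    y = np.array(y, dtype=int)
    for m, target in enumerate(aggregates):
        lo, hi = m * unit_len, (m + 1) * unit_len
        diff = int(y[lo:hi].sum()) - int(target)
        while diff != 0:
            idx = rng.integers(lo, hi)
            if diff > 0 and y[idx] > 0:
                y[idx] -= 1
                diff -= 1
            elif diff < 0:
                y[idx] += 1
                diff += 1
    return y
```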
### The Mahadee-Kamrujjaman Downscaling (MKD) Algorithm
The algorithm calls for a unique name of its own; from now on, we shall address it as the Mahadee-Kamrujjaman Downscaling (MKD) algorithm. The structural parts of the algorithm have been discussed at length in the first three subsections of this methodology section. The pseudocode of the MKD algorithm is as follows:
```
Require: Aggregated value vector, \(\mathbf{v}\)
         Overthrow tolerance, \(\delta\)
         Iteration limit, n
         Radius of the open interval, r
         Standard deviation, \(\sigma\)
Ensure: Downscaled time series, \(\bar{v}\)
for elem in \(\mathbf{v}\) do
    \(\bar{v}\) = Distribution Generator(elem, \(\sigma\))
end for
for i from 1 to n do
    find the coordinates of the overthrow points
    for elem in overthrow points do
        open interval centering elem of radius r = Distribution Generator(sum of the elements of the open interval, \(\sigma\))
    end for
end for
```
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Month** & **Actual** & **Initial** & **Overthrow** & **Volume** \\ & & **Distribution** & **Correction** & **Correction** \\ \hline January & 38 & 38 & 42 & 38 \\ \hline February & 18 & 18 & 13 & 18 \\ \hline March & 17 & 17 & 18 & 17 \\ \hline April & 58 & 58 & 61 & 58 \\ \hline May & 193 & 193 & 300 & 193 \\ \hline June & 1884 & 1884 & 2500 & 1884 \\ \hline July & 16253 & 16253 & 17617 & 16253 \\ \hline August & 53636 & 53636 & 49581 & 53636 \\ \hline September & 16856 & 16856 & 18259 & 16856 \\ \hline October & 8143 & 8143 & 8419 & 8143 \\ \hline November & 4011 & 4011 & 4094 & 4011 \\ \hline December & 1247 & 1247 & 1450 & 1247 \\ \hline
**Total** & **102354** & **102354** & **102354** & **102354** \\ \hline \end{tabular}
\end{table}
Table 1: The table exhibits the comparison of the number of cases each month for executing the MKD algorithm on the Dengue 2019 data of Bangladesh with the actual data. Here we can see the total number of infected individuals in each algorithm step is the same. In the case of the monthly sum, we see some anomaly in the overthrow correction case, which has been fixed in the volume correction step.
**for** elem **in** \(\mathbf{v}\) **do**
  **if** \(v_{i}\neq\) sum of the equivalent aggregate in \(\bar{v}\) **then**
    d = \(v_{i}\) - sum of the equivalent aggregate in \(\bar{v}\)
    **while** \(d\neq 0\) **do**
      **if** \(d>0\) **then**
        \(\bar{v}_{randomly\,picked\,index}=\bar{v}_{randomly\,picked\,index}+1\)
        \(d-=1\)
      **else**
        \(\bar{v}_{randomly\,picked\,index}=\bar{v}_{randomly\,picked\,index}-1\)
        \(d+=1\)
      **end if**
    **end while**
  **end if**
**end for**
**Algorithm 2**: Distribution Generator
The MKD algorithm is heavily dependent on random number generation, which is prone to producing non-reproducible results. Thus, seeding the random number generator is highly recommended to ensure reproducible results.
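As an illustration of seeding and of chaining the three sketched steps, the following uses the 2022 monthly Dengue aggregates from Table 2; the parameter values mirror Section 4.2.1 and are assumptions for demonstration only.

```python
import numpy as np

# End-to-end usage of the sketches above with a fixed seed for reproducibility.
rng = np.random.default_rng(2022)
monthly = [126, 20, 20, 23, 163, 737, 1491]        # Jan-Jul 2022 aggregates (Table 2)
sigma0 = np.std(monthly, ddof=1) / 30              # as in Eq. (3.1)
daily = np.concatenate([distribution_generator(v, sigma0, rng=rng) for v in monthly])
daily = overthrow_correction(daily,                # approximating "0.6 x range"
                             delta=0.6 * (daily.max() - daily.min()),
                             sigma=sigma0, rng=rng)
daily = volume_correction(daily, monthly, rng=rng)
assert daily.reshape(len(monthly), 30).sum(axis=1).tolist() == monthly
```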
The novelty of the MKD algorithm lies in its use of the prior distribution as initialization and of the underlying distribution to generate synthesized downscaled data that is non-negative and conserves the aggregated value of the given data.
## 4 Comparison of the Synthesized Data with the Real Data
To determine the accuracy of the MKD algorithm, we test it against real-world data. Here, we have taken the 2020 COVID-19 data on infected individuals in Bangladesh and the 2022 (January to July) Dengue data on infected individuals in Bangladesh. The aforementioned data are daily counts of newly infected individuals across the country. We aim to convert these data to monthly aggregates and feed the aggregated data to the algorithm to generate downscaled daily data; hence, we can compare the accuracy of the synthetic daily data with respect to the actual daily data. To determine the accuracy of the approximation, we will use two error measures, and we will perform component analysis on the real and synthetic data to see whether the synthetic data can well approximate the underlying properties of the real data. For the component decomposition, we will use the additive model mentioned in (4.1),
\[y_{i}=Trend+Seasonality+Residual \tag{4.1}\]
as the procured data has some zero values for which the multiplicative model mentioned in (4.2)
\[y_{i}=Trend\times Seasonality\times Residual \tag{4.2}\]
is not suitable in this scenario.
### Error Measures for Benchmark
To compare the result with the real world data we shall use two error terms that describes the overall error of the approximation. These are as follows:
* **Root Mean Square Error:** The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a commonly used measure of the discrepancies between the values (sample or population values) predicted by a model or estimator and the actual values. RMSD is the square root of the second sample moment of the discrepancies between anticipated and observed values, or the quadratic mean of these differences. When the computations are executed over the data set used for estimate, these deviations are known as residuals, and when they are computed out-of-sample, they are known as errors (or prediction errors). The RMSD aggregates the magnitudes of the mistakes in predictions for various pieces of data into a single metric of predictive ability. RMSD is a measure of accuracy used to assess the predicting losses of various models for a specific dataset and not between datasets because it is scale-dependent [28].
RMSD is always positive, and a value of 0 would suggest a perfect fit to the data, which is nearly never attained in practice. A smaller RMSD is often preferable to a greater one. However, because the metric is dependent on the magnitude of the numbers used, comparisons between various kinds of data are invalid.
The root square of the mean of squared mistakes is the RMSD. The influence of each inaccuracy on RMSD is proportional to the magnitude of the squared error; therefore, larger errors have an outsized effect on RMSD. As a result, the RMSD is extremely sensitive to outliers [29, 30].
Instead of the Root Mean Square Deviation, the Mean Absolute Error (MAE) has been suggested as a useful statistical tool by a number of scholars. The MAE has certain advantages over the RMSD when it comes to interpretability. The mean absolute error, abbreviated as MAE, is the average of the absolute values of the mistakes. The square root of the average of squared errors is more difficult to grasp than the MAE, which simplifies things considerably. In addition, the magnitude of a mistake has an effect on the MAE in direct proportion to its absolute value, whereas the RMSD does not follow this pattern at all [29].
RMSE can be defined using the following formula:
\[\text{RMSE}=\sqrt{\frac{\sum_{i=1}^{N}(x_{i}-\hat{x}_{i})^{2}}{N}}\]
where, \(x_{i}\) is the actual data and \(\hat{x}_{i}\) is the predicted data.
* **Mean Absolute Error:** In statistics, the term "mean absolute error" (MAE) refers to a measurement of the errors that occur when matched observations expressing the same event are compared. Comparisons of what was predicted vs what was actually observed, subsequent time versus beginning time, and one technique of measurement versus an alternate technique of measurement are all examples of Y versus X. The mean absolute error (MAE) is determined by taking the sum of all absolute errors and dividing it by the total number of samples: \[\text{MAE}{=}\frac{\sum_{i=1}^{N}|x_{i}-\hat{x}_{i}|}{N}\] Therefore, it is an arithmetic average of the absolute errors, which may be represented as \(|e_{i}|=|x_{i}-\hat{x}_{i}|\), where \(\hat{x}_{i}\) represents the forecast and \(x_{i}\) represents the actual value. It is important to keep in mind that different formulations could use relative frequencies as weight factors. The scale that is being used to measure the data is also used for the mean absolute error. Because this is what's known as a scale-dependent accuracy measure, it can't be used to compare series that have different scales because the comparisons wouldn't be valid [31]. In time series analysis, the mean absolute error is a frequent way to quantify the accuracy of forecasts [28], occasionally leading to confusion with the more traditional definition of mean absolute deviation. More generally speaking, the same confusion exists.
One of the many methods that may be used to compare forecasts with the results that actually transpired is called the mean absolute error. The mean absolute scaled error, also known as MASE, and the mean squared error are two options that have a solid track record. The mean signed difference is one metric that does put emphasis on this, as opposed to the other measures, which all summarize performance in a fashion that disregards the direction
of whether the forecast was made too high or too low.
When it comes to fitting a prediction model with a chosen performance metric, the equivalent for mean absolute error is mean absolute deviations, and the least squares approach is related to the mean squared error.
Although some academics report and interpret it that way, mean absolute error (MAE) and root-mean-square error (RMSE) are not the same concept. The MAE is conceptually simpler and also easier to perceive than the RMSE. It is just the average absolute vertical or horizontal distance between each point in a scatter plot and the Y=X line. In contrast, the RMSE is a measure of error that is more difficult to interpret. To phrase this another way, MAE refers to the average absolute difference that exists between X and Y. In addition, the contribution that each error makes to the MAE is weighted according to the absolute value of the error. This is in contrast to the RMSE, which involves squaring the differences; as a result, a few large differences will have a higher impact on the RMSE than they will have on the MAE [29].
Since many of the data points in both the actual and synthesized cases are populated with 0, the Mean Absolute Percentage Error (MAPE) and the Scaled Mean Absolute Percentage Error (sMAPE) are undefined in this scenario.
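For reference, a small sketch of the two benchmark error measures used below (function names are illustrative):

```python
import numpy as np

# Sketch of the two benchmark error measures.
def rmse(actual, synthetic):
    actual, synthetic = np.asarray(actual, float), np.asarray(synthetic, float)
    return float(np.sqrt(np.mean((actual - synthetic) ** 2)))

def mae(actual, synthetic):
    actual, synthetic = np.asarray(actual, float), np.asarray(synthetic, float)
    return float(np.mean(np.abs(actual - synthetic)))
```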
### Dengue
#### 4.2.1 Preprocessing and Result
For this simulation, we took Bangladesh's 2022 daily Dengue infection data from January to July. To feed these data into the MKD algorithm, we convert the daily data to monthly aggregates, as illustrated in Figure 4.
We feed in this data considering,
* Initial standard deviation, \(\sigma_{0}=\frac{\sigma_{prior\,distribution}}{30}=\frac{556.6431703}{30}=18.55477234\).
* Overthrow tolerance, \(\delta=0.6\times\) (Range of the initial distribution).
* Iteration limit, \(n=100\).
* Radius of open interval, \(r=3\).
* Underlying distribution to be normal.
and generate the synthesized data. Figure 6 illustrates the synthesized data, which can be said to be a good approximation of the actual data, given the aggregated prior distribution.
#### 4.2.2 Error Metrics and Statistical Measures
The calculated error measures are:
* \(MAE=6.60664\), which implies that the average error between the actual and synthesized data is \(6.60664\).
* \(RMSE=12.64499\) which implies that the standard deviation of the residuals/errors is \(12.64499\). The fact is well illustrated in Figure 22.
The error metric shows satisfactory results. The following table validates if the synthesized data honours the aggregated sum of the prior distribution.
The total number of cases in each scenario has been maintained equally. As discussed earlier, we can see that the initial distribution holds the monthly sum consistently, which gets disrupted in the overthrow correction phase and later corrected in the volume correction phase.
We shall now explore the basic statistical properties of the synthetic data with respect to the actual data.
It is to be noted that the mean of the synthesized data equates to that of the original data, although it was not plugged into the MKD algorithm in any manner. As previously discussed, \(\sigma_{0}\) is a good approximation to the original \(\sigma\). All the remaining measures are reasonably close, but the maximum differs considerably. The maximum is hard to anticipate from the aggregated data; hence it is an avenue that demands further exploration.
#### 4.2.3 Component Decomposition and Comparison
We now want to do component decomposition of both the actual and synthetic data based on the model mentioned in (4.1). Component decomposition is in no way a benchmark for accuracy; however, since the MKD algorithm aims to improve the outcome of forecasting techniques, which are highly influenced by the components within a time series, comparing these components can answer the question of whether the component-based characteristics of the original time series are present within the synthesized data.
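A sketch of this additive decomposition using statsmodels is shown below; the file name, column names, and the weekly period are placeholders/assumptions, not the exact script used for the figures.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Sketch of the additive decomposition of Eq. (4.1) for a daily series.
series = pd.read_csv("dengue_daily_2022.csv",        # placeholder path
                     parse_dates=["date"], index_col="date")["cases"]
result = seasonal_decompose(series, model="additive", period=7)   # weekly seasonality
trend, seasonal, residual = result.trend, result.seasonal, result.resid
```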
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Month** & **Actual** & **Initial** & **Overthrow** & **Volume** \\ & & **Distribution** & **Correction** & **Correction** \\ \hline January & 126 & 126 & 119 & 126 \\ \hline February & 20 & 20 & 27 & 20 \\ \hline March & 20 & 20 & 20 & 20 \\ \hline April & 23 & 23 & 32 & 23 \\ \hline May & 163 & 163 & 206 & 163 \\ \hline June & 737 & 737 & 733 & 737 \\ \hline July & 1491 & 1491 & 1443 & 1491 \\ \hline
**Total** & **2580** & **2580** & **2580** & **2580** \\ \hline \end{tabular}
\end{table}
Table 2: This table illustrates that the synthetic data agrees with the monthly sum of the actual data
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Measures** & **Observed** & **Synthesized** \\ \hline Mean & 12.22748815 & 12.22748815 \\ \hline Standard Deviation & 20.28993189 & 18.49672823 \\ \hline Minimum & 0 & 0 \\ \hline Lower Quartile (Q1) & 0 & 0 \\ \hline Median & 2 & 1 \\ \hline Upper Quartile (Q3) & 17 & 19 \\ \hline Maximum & 99 & 72 \\ \hline \end{tabular}
\end{table}
Table 3: This table illustrates the comparison of the basic statistical measures of the synthesized data with respect to the actual data.
In the case of the trend component (Figures 17 and 18 in Appendix A), both the actual and the synthesized data show similar results, and the trend of the actual data has been well approximated by the trend of the synthesized data.
In the case of the seasonality component (Figures 19 and 20 in Appendix A), both the actual and the synthesized data show major weekly and minor sub-weekly seasonality. The synthesized data's seasonality approximates the actual data's seasonality well.
In the case of the residual component (Figures 21 and 22 in Appendix A), both the actual and the synthesized data show similar results. Although the residual of the synthetic data may look a bit noisy at first glance, upon closer inspection it is evident that it deviates less from the standard value than the residual of the actual data. The actual data's residual has been well approximated by the synthesized data's residual.
The key takeaway from the above discussion is that the MKD algorithm can generate an excellent approximation of the dengue data from the monthly aggregated data based on some statistical properties of the prior distribution. We shall also test the MKD algorithm's efficacy in another epidemiological scenario in the following section.
### COVID-19
#### 4.3.1 Preprocessing and Result
For this simulation, we took Bangladesh's 2020 daily COVID-19 infection data from March to December. To feed these data into the MKD algorithm, we convert the daily data to monthly aggregates, as illustrated in Figure 7.
Figure 7: Monthly aggregate of 2020 COVID-19 infected data of Bangladesh from March to December.
We feed in this data considering,
* Initial standard deviation, \(\sigma_{0}=\frac{\sigma_{prior\,distribution}}{30}=\frac{32021.87439}{30}=1067.395813\).
* Overthrow tolerance, \(\delta=0.2\times\) (Range of the initial distribution).
* Iteration limit, \(n=100\).
* Radius of open interval, \(r=3\).
* Underlying distribution to be normal.
and generate the synthesized data. Figure 9 illustrates the synthesized data, which can be said to be a good approximation of the actual data, given the aggregated prior distribution.
#### 4.3.2 Error Metrics and Statistical Measures
The calculated error measures are:
* \(MAE=257.41806\), which implies that the average error between the actual and synthesized data is \(257.41806\), which is reasonable considering the mean of the data is \(1717.424749\).
* \(RMSE=346.6241\), which implies that the standard deviation of the residuals/errors is \(346.6241\). The fact is well illustrated in Figure 28.
It is to be noted that the error terms of this scenario must not be compared with the error terms of the previous case, as they are of different scales. Compared to the scale of the data, the error metrics show satisfactory results. The following table validates whether the synthesized data honours the aggregated sum of the prior distribution.
The total number of cases in each scenario has been maintained equally. As discussed earlier, we can see that the initial distribution holds the monthly sum consistently, which gets a little disrupted in the overthrow correction phase and is later on corrected in the volume correction phase.
We shall now explore the basic statistical properties of the synthetic data with respect to the actual data.
It is to be noted that the mean of the synthesized data equates to that of the original data, although it was not plugged into the MKD algorithm in any manner. As previously discussed, \(\sigma_{0}\) is a good approximation to the original \(\sigma\). All the remaining measures are reasonably close, but the maximum differs considerably. The maximum is hard to anticipate from the aggregated data; hence it is an avenue that demands further exploration.
#### 4.3.3 Component Decomposition and Comparison
We now want to do component decomposition of both the actual and synthetic data based on the model mentioned in (4.1). Component decomposition is in no way a benchmark for accuracy; however, since the MKD algorithm aims to improve the outcome of forecasting techniques, which are highly influenced by the components within a time series, comparing these components can answer the question of whether the component-based characteristics of the original time series are
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Month** & **Actual** & **Initial** & **Overthrow** & **Volume** \\ & & **Distribution** & **Correction** & **Correction** \\ \hline March & 51 & 51 & 51 & 51 \\ \hline April & 7616 & 7616 & 9226 & 7616 \\ \hline May & 39486 & 39486 & 41261 & 39486 \\ \hline June & 98330 & 98330 & 94075 & 98330 \\ \hline July & 92178 & 92178 & 92115 & 92178 \\ \hline August & 75335 & 75335 & 75605 & 75335 \\ \hline September & 50483 & 50483 & 50766 & 50483 \\ \hline October & 44205 & 44205 & 45126 & 44205 \\ \hline November & 57248 & 57248 & 55805 & 57248 \\ \hline December & 48578 & 48578 & 49480 & 48578 \\ \hline
**Total** & **513510** & **513510** & **513510** & **513510** \\ \hline \end{tabular}
\end{table}
Table 4: This table illustrates that the synthetic data agrees with the monthly sum of the actual data.
\begin{table}
\begin{tabular}{|c|r|r|} \hline
**Measures** & **Observed** & **Synthesized** \\ \hline Count & 299 & 299 \\ \hline Mean & 1717.424749 & 1717.424749 \\ \hline Standard Deviation & 1044.457258 & 1007.554237 \\ \hline Minimum & 0 & 0 \\ \hline Lower Quartile (Q1) & 1115.5 & 1225 \\ \hline Median & 1666 & 1696 \\ \hline Upper Quartile (Q3) & 2521.5 & 2481.5 \\ \hline Maximum & 4019 & 3735 \\ \hline \end{tabular}
\end{table}
Table 5: This table illustrates the comparison of the basic statistical measures of the synthesized data with respect to the actual data.
present in the synthesized data.
In the case of the trend component (Figures 23 and 24 in Appendix A), both the actual and the synthesized data show similar results, and the trend of the actual data has been well approximated by the trend of the synthesized data.
In the case of the seasonality component (Figures 25 and 26 in Appendix A), both the actual and the synthesized data show major weekly seasonality. The seasonality of the synthesized data approximates the seasonality of the actual data well.
In the case of the residual component (Figures 27 and 28 in Appendix A), both the actual and the synthesized data show similar results. Although the residual of the synthetic data may look a bit noisy at first glance, upon closer inspection it is evident that it deviates less from the standard value than the residual of the actual data. The residual of the synthesized data has well approximated the residual of the actual data.
The key takeaway from the aforementioned discussion is that the algorithm can generate an excellent approximation of the COVID-19 data from the monthly aggregated data based on some statistical properties of the prior distribution. We shall also test the MKD algorithm's efficacy in a forecasting scenario in the following section.
## 5 Improvements in Forecasting Accuracy
In this section, we shall forecast the Dengue infection cases in Bangladesh using statistical forecasting tools. Statistical modelling is one of the helpful approaches that may be utilized for forecasting dengue outbreaks [32, 33]. Previous epidemiologic research on infectious diseases carried out in China [34], India [35], Thailand [36], the West Indies [37], Colombia [38], and Australia [39] made substantial use of time series techniques. A number of earlier studies looked at the Autoregressive Integrated Moving Average (ARIMA) model as a potential tool for forecasting [40, 41, 42, 43, 44]. In addition, ARIMA models have seen widespread use for dengue forecasting [45, 46, 43, 47]. When establishing statistical forecasting models, these are frequently paired with Seasonal Autoregressive Integrated Moving Average (SARIMA) models, which have proven suitable for assessing time series data with ordinary or seasonal patterns [35, 37, 39, 48, 49]. Developing a dengue incidence forecasting model based on knowledge from previous outbreaks and environmental variables could thus be an extremely helpful tool for anticipating the severity and frequency of potential epidemics.
ARIMA is a well-known model in statistics that is predominantly used to forecast and analyze time series data [50]. Auto Regression of order p can be defined as
\[Y_{t}=e_{t}+\sum_{i=1}^{p}\alpha_{i}Y_{t-i} \tag{5.1}\]
where \(e_{t}\) are white noises of mean 0 and variance \(\sigma_{e}^{2}\).
The Moving Average (MA) of order q is defined as,
\[\hat{Y}_{t}=e_{t}+\sum_{i=1}^{q}\beta_{i}e_{t-i} \tag{5.2}\]
The ARMA model is formed by combining (5.1) and (5.2). Hence, an ARMA model of order (\(p\), \(q\)) is defined as
\[\hat{Y}_{t}=e_{t}+\sum_{i=1}^{p}\alpha_{i}Y_{t-i}+\sum_{i=1}^{q}\beta_{i}e_{t-i} \tag{5.3}\]
where p and q are the corresponding orders of the AR and MA parts. The extension of the ARMA model to non-stationary time series is the Box-Jenkins model, also known as the ARIMA model, which integrates AR and MA with the successive difference operator \(\triangledown^{d}\). Hence, an ARIMA model of order (\(p\), \(d\), \(q\)) is defined as
\[\hat{Z}_{t}=e_{t}+\sum_{i=1}^{p}\alpha_{i}Z_{t-i}+\sum_{i=1}^{q}\beta_{i}e_{t-i} \tag{5.4}\]
where p, q has the previously mentioned definition, and \(d\) is the order of nonseasonal successive difference required to make the time series stationary i.e.
* If d=1 then \(Z_{t}=\triangledown Y_{t}=Y_{t}-Y_{t-1}\)
* If d=2 then \(Z_{t}=\triangledown^{2}Y_{t}=(Y_{t}-Y_{t-1})-(Y_{t-1}-Y_{t-2})=Y_{t}-2Y_{t-1} +Y_{t-2}\)
* If d=3 then \(Z_{t}=\triangledown^{3}Y_{t}=(Y_{t}-2Y_{t-1}+Y_{t-2})-(Y_{t-1}-2Y_{t-2}+Y_{t- 3})=Y_{t}-3Y_{t-1}+3Y_{t-2}-Y_{t-3}\)
* and so on.
The idea of modelling seasonality using Fourier coefficients, known as the Fourier ARIMA model, was introduced in [51].
\[Z_{t}=\delta_{0}+\sum_{i=1}^{p}\alpha_{i}Z_{t-i}+\sum_{j=1}^{q}\beta_{j}e_{t-j}+\sum_{k=1}^{r}\left[a_{k}\sin\left(\omega_{k}t\right)+b_{k}\cos\left(\omega_{k}t\right)\right]Z_{t-m}+e_{t} \tag{5.5}\]
where, \(\delta_{0}\) is the constant term and \(\omega_{k}\) is the periodicity of the data.
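A hedged sketch of fitting such a Fourier-ARIMA model with statsmodels is given below; `daily` is assumed to be the synthesized daily series, the ARIMA order follows the model selected later for the synthesized data, and the Box-Cox pre-transformation is omitted for brevity.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Sketch of Eq. (5.5): one sine/cosine pair with yearly periodicity (365.25)
# is supplied as exogenous regressors to an ARIMA model.
h = 153                                            # Aug-Dec forecast horizon in days
t = np.arange(len(daily) + h)
fourier = pd.DataFrame({"s1-365": np.sin(2 * np.pi * t / 365.25),
                        "c1-365": np.cos(2 * np.pi * t / 365.25)})
model = SARIMAX(daily, exog=fourier[:len(daily)], order=(7, 0, 7), trend="c")
fit = model.fit(disp=False)
forecast = fit.forecast(steps=h, exog=fourier[len(daily):])
```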
We aim to forecast the monthly data and the synthesized daily data using the aforementioned forecasting techniques and compare the accuracy of the forecasts based on error measures. We use the SARIMA and Fourier-ARIMA models to forecast the monthly and synthesized data, respectively. The model in each case is chosen based on the lowest values of Akaike's Information Criterion (AIC), the corrected Akaike's Information Criterion (AICc), and the Bayesian Information Criterion (BIC).
### Model Selection Method
Box-Jenkins method is a generalized model selection pathway which works for time series irrespective of its stationarity or seasonality. The method is illustrated in Figure 10.
### Error Measures
The error measure used for comparison is the Mean Absolute Scaled Error (MASE), which is defined as
\[MASE=\frac{\frac{1}{n}\sum_{i=1}^{n}\left|Y_{i}-\hat{Y}_{i}\right|}{\frac{1}{T-m} \sum_{t=m+1}^{T}\left|Y_{t}-Y_{t-m}\right|}\]
We use this metric as it is scale-independent and hence suitable for comparison. We could also have taken MAPE as a metric, but MAPE is undefined in such cases, as the data is populated with zero values. We also use RMSE and MAE to gauge the error in the forecast.
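A small sketch of the MASE computation is given below; the seasonal naive denominator with seasonality m and the function name are illustrative assumptions.

```python
import numpy as np

# Sketch of MASE: forecast errors scaled by the in-sample seasonal naive error.
def mase(actual, forecast, train, m=12):
    actual, forecast, train = (np.asarray(x, float) for x in (actual, forecast, train))
    scale = np.mean(np.abs(train[m:] - train[:-m]))   # seasonal naive benchmark
    return float(np.mean(np.abs(actual - forecast)) / scale)
```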
Figure 10: Flow chart of the Box-Jenkins method.
### Forecast on the Aggregated Data
The actual data is the monthly Dengue infection data of Bangladesh from 2010 to July 2022. Following the Box-Jenkins method, we first check the stationarity of the data using the Augmented Dickey-Fuller (ADF) test. The ADF test returns a value of -4.7906 with p-value = 0.01, which implies that the data is stationary.
We run multiple SARIMA models and calculate their AIC, AICc, and BIC; the best model is chosen based on the minimum values of these criteria. We present the top 5 results in Table 6.
Here, the best model to use is SARIMA (1, 0, 0)(0, 1, 1)\({}_{12}\). We fit the given model, which gives us the following coefficients:
To check the goodness of fit of the model, we use the Ljung-Box test, which returns a p-value = 0.9998 \(>\) 0.05, i.e., we fail to reject the null hypothesis: _"The model does not show lack of fit / the residuals are not autocorrelated / the residuals are random white noise."_
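A hedged sketch of this monthly workflow (ADF test, fitting the selected SARIMA model, Ljung-Box check, and forecasting) with statsmodels is given below; the CSV path and column names are placeholders, not the authors' script.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

# Sketch of the monthly workflow: stationarity check, SARIMA(1,0,0)(0,1,1)[12]
# fit, residual diagnostics, and a 5-month (August-December) forecast.
monthly = pd.read_csv("dengue_monthly_2010_2022.csv",     # placeholder path
                      parse_dates=["month"], index_col="month")["cases"]
adf_stat, p_value, *_ = adfuller(monthly)
fit = SARIMAX(monthly, order=(1, 0, 0),
              seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(fit.aic, fit.bic)
print(acorr_ljungbox(fit.resid, lags=[12]))
forecast = fit.forecast(steps=5)
```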
With everything in place, we forecast the infection counts for the rest of 2022, i.e., from August to December. The forecast is illustrated in Figure 11.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline Model & AIC & AICc & BIC \\ \hline SARIMA(1, 0, 0)(0, 1, 1)\({}_{12}\) & 2603.57 & 2603.76 & 2612.22 \\ \hline SARIMA(1, 0, 0)(0, 1, 2)\({}_{12}\) & 2506.58 & 2506.91 & 2517.96 \\ \hline SARIMA(1, 0, 0)(1, 1, 1)\({}_{12}\) & 2507.12 & 2507.45 & 2518.49 \\ \hline SARIMA(1, 0, 1)(0, 1, 1)\({}_{12}\) & 2510.07 & 2510.39 & 2521.44 \\ \hline SARIMA(2, 0, 0)(0, 1, 1)\({}_{12}\) & 2510.13 & 2510.45 & 2521.5 \\ \hline \end{tabular}
\end{table}
Table 6: Selection of Best model based on criteria.
\begin{table}
\begin{tabular}{c|c c c} & ar1 & ar2 & sma1 \\ \hline & 0.6000 & -0.0919 & -0.8324 \\ S.E. & 0.0879 & 0.0877 & 0.0948 \\ \end{tabular}
\end{table}
Table 7: Coefficients of SARIMA (1, 0, 0)(0, 1, 1)\({}_{12}\) model to fit and forecast actual monthly data of Dengue infection in Bangladesh from 2010 to July, 2022. Here, ar implies autoregressive, ma implies moving average, SMA implies seasonal moving average, and the trailing number enumerates their coefficient ordering. SE implies the standard error of the mean.
To validate the goodness of fit, we can analyze the model residuals, illustrated in Figure 12. Here, the top graph shows the residuals over the timeline of the original data. The bottom-left graph presents the Autocorrelation Function (ACF) with respect to the lag of the data; almost all the values are within the significance level. The bottom-right figure shows the distribution of the model's residuals, which implies that the residuals are normally distributed with zero mean.
Figure 11: The figure illustrates the forecast generated by SARIMA (1, 0, 0)(0, 1, 1)\({}_{12}\) from actual aggregated data.
To calculate the accuracy of the given forecast, we calculate the aforementioned error measures.
The error measures are acceptable given the magnitude of the data, but there is room for improvement, as shall be demonstrated in the following subsection.
### Forecast on the Synthesized Data
The synthesized data is the daily Dengue infection data of Bangladesh from 2010 to July 2022. Following the Box-Jenkins method, we first check the stationarity of the data using the Augmented Dickey-Fuller (ADF) test. The ADF test returns a value of -6.6531 with p-value = 0.01, which implies that the data is stationary.
We run multiple Fourier ARIMA models and calculate their AIC, AICc, and BIC. The best model is chosen based on the minimum values of these criteria. We present the top 5 results in Table 9. Here, in each case, we used one pair of Fourier terms, where each pair comprises a sine and a cosine term as defined in (5.5), and the periodicity of the Fourier terms is taken to be 365.25. Prior to this, we applied a Box-Cox transformation with \(\lambda=0.49\).
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline Data & RMSE & MAE & MASE \\ \hline Monthly & 4092.712 & 753.6765 & 0.409654 \\ \hline \end{tabular}
\end{table}
Table 8: Error measures for the forecast of the SARIMA (1, 0, 0)(0, 1, 1)\({}_{12}\) of the actual aggregated data.
Figure 12: Residual of the SARIMA (1, 0, 0)(0, 1, 1)\({}_{12}\).
Here, the best model to use is ARIMA(7,0,7). We fit the given model, which gives us the following coefficients:
To check the goodness of fit of the model, we use the Ljung-Box test, which returns a p-value = 0.07749 \(>\) 0.05, i.e., we fail to reject the null hypothesis: _"The model does not show lack of fit / the residuals are not autocorrelated / the residuals are random white noise"_.
With everything in place, we forecast the infection counts for the rest of 2022, i.e., from August to December. The forecast is illustrated in Figure 13.
\begin{table}
\begin{tabular}{|c|c c c c c c c c|} \hline & ar1 & ar2 & ar3 & ar4 & ar5 & ar6 & ar7 & ma1 \\ \hline & -0.5273 & 0.3109 & 1.2946 & 1.0562 & 0.2775 & -0.6222 & -0.7940 & 0.8055 \\ S.E. & 0.0513 & 0.0310 & 0.0419 & 0.0755 & 0.0323 & 0.0353 & 0.0488 & 0.0471 \\ \hline & ma2 & ma3 & ma4 & ma5 & ma6 & ma7 & intercept & s1-365 \\ \hline & 0.0718 & -1.0032 & -1.1256 & -0.5327 & 0.3051 & 0.6454 & 3.3498 & 6.6197 \\ S.E. & 0.0340 & 0.0365 & 0.0602 & 0.0321 & 0.0356 & 0.0303 & 1.4789 & 1.6859 \\ \hline & c1-365 & & & & & & & \\ \hline & -0.7430 & & & & & & & \\ S.E. & 1.6857 & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 10: Coefficients of ARIMA(7,0,7) model to fit and forecast actual monthly data of Dengue infection in Bangladesh from 2010 to July 2022. Here, ar implies auto-regressive, ma implies the moving average, s and c represent the coefficient of the sine and cosine of Fourier term, intercept implies the constant term, and the trailing number enumerates their coefficient ordering. SE implies the standard error of the mean.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Model & AIC & AICc & BIC \\ \hline \hline ARIMA(7,0,7) & 21711.25 & 21711.4 & 21827.03 \\ \hline ARIMA(5,0,0) & 21819.04 & 21819.08 & 21876.94 \\ \hline ARIMA(3,0,0) & 22147.88 & 22147.9 & 22192.91 \\ \hline ARIMA(2,0,0) & 22527.02 & 22527.04 & 22565.61 \\ \hline ARIMA(1,0,0) & 23476.24 & 23476.25 & 23508.4 \\ \hline ARIMA(0,0,0) & 33245.98 & 33271.71 & 33271.71 \\ \hline \end{tabular}
\end{table}
Table 9: Selection of Best model based on criteria.
To validate the goodness of fit, we analyze the model residuals, illustrated in Figure 14. The top panel shows the residuals over the timeline of the original data. The bottom-left panel shows the Autocorrelation Function (ACF) with respect to the lag of the data; almost all values are within the significance level. The bottom-right panel shows the distribution of the model's residuals, which implies that the residuals are distributed normally with zero mean.
Figure 13: The figure illustrates the forecast generated by ARIMA(7,0,7) from actual aggregated data.
To assess the accuracy of this forecast, we again compute the aforementioned error measures.
The error measures are acceptable given the magnitude of the data. Compared with the error measures for the actual data in Table 8, Table 11 shows a clear improvement: the MASE values of the two tables indicate roughly a fourfold improvement in forecast accuracy when the synthetic data are used instead of the actual data.
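For reference, the three error measures can be computed as in the short sketch below, assuming the standard non-seasonal MASE definition of Hyndman and Koehler [28], which scales the out-of-sample MAE by the in-sample MAE of the naive one-step forecast; the arrays are hypothetical placeholders:

```python
# RMSE, MAE and MASE for a forecast, given training data, test data and predictions.
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))

def mae(y, yhat):
    return np.mean(np.abs(np.asarray(y) - np.asarray(yhat)))

def mase(y_train, y_test, yhat):
    scale = np.mean(np.abs(np.diff(y_train)))   # in-sample MAE of the naive one-step forecast
    return mae(y_test, yhat) / scale

y_train = np.array([10.0, 12.0, 9.0, 15.0, 20.0])          # hypothetical history
y_test, y_hat = np.array([18.0, 22.0]), np.array([17.0, 25.0])
print(rmse(y_test, y_hat), mae(y_test, y_hat), mase(y_train, y_test, y_hat))
```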
## 6 Conclusion
Downscaling algorithms have predominantly been used in geology to refine the outputs of the prevalent models in that field. Very few applications exist in epidemiology, and most of them concern spatial downscaling. This paper contributes a parametric, probabilistic, one-dimensional downscaling algorithm for aggregated epidemiological data that allows the existing forecasting toolbox to generate better forecasts than the aggregated data alone. The MKD algorithm is by no means restricted by construction to epidemiological data; it is merely tuned to work with non-negative integer data. Relaxing the particular conditioning imposed on the initial data synthesizer can generalize the model to other practical, real data. This opens up a horizon for
Figure 14: Residual of the ARIMA (7, 0, 7).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Data & RMSE & MAE & MASE \\ \hline Daily & 18.71255 & 6.593062 & 0.1115845 \\ \hline \end{tabular}
\end{table}
Table 11: Error measures for the forecast of the ARIMA (7, 0, 7) of the synthetic daily data.
applying the MKD algorithm to the subject of temporal downscaling. Beyond this, the MKD algorithm can serve as a potent deconvolution algorithm to undo Gaussian blurring and generate high-resolution images while keeping the mean pixel value of the image unchanged. There are still avenues to be explored using this algorithm. Further work is needed to show how the MKD algorithm's output affects the outcome of predictive machine learning models such as Long Short-Term Memory (LSTM) networks and Prophet.
## Acknowledgments
The research by M. Kamrujjaman was partially supported by the University Grants Commission (UGC), the Ministry of Science and Technology, and the Bose Center for Advanced Study and Research in Natural Sciences, University of Dhaka.
## Conflict of interest
The authors declare no conflict of interest.
## Data sharing
Data can be provided on a properly justified request.
## Ethical approval
No consent is required to publish this manuscript.
## Author contributions
Conceptualization, MK and MAM; methodology, MAM and MK; software, MAM and MK; validation, MK; formal analysis, MAM; investigation, MK; resources, MK; data curation, MAM; original draft preparation, MAM; review and editing, MK; supervision, MK. All authors have read and agreed to the published version of the manuscript.
## References
* [1] Jian Peng, Alexander Loew, Olivier Merlin, and Niko EC Verhoest. A review of spatial downscaling of satellite remotely sensed soil moisture. _Reviews of Geophysics_, 55(2):341-366, 2017.
* [2] J Ribalaygua, L Torres, J Portoles, R Monjo, E Gaitan, and MR Pino. Description and validation of a two-step analogue/regression downscaling method. _Theoretical and Applied Climatology_, 114(1):253-269, 2013.
* [3] Seon-Ho Kim, Jeong-Bae Kim, and Deg-Hyo Bae. Optimizing parameters for the downscaling of daily precipitation in normal and drought periods in south korea. _Water_, 14(7):1108, 2022.
* [4] Deg-Hyo Bae, Toshio Koike, Jehangir Ashraf Awan, Moon-Hwan Lee, and Kyung-Hwan Sohn. Climate change impact assessment on water resources and susceptible zones identification in the asian monsoon region. _Water Resources Management_, 29(14):5377-5393, 2015.
* [5] Moon-Hwan Lee, Eun-Soon Im, and Deg-Hyo Bae. Impact of the spatial variability of daily precipitation on hydrological projections: A comparison of gcm-and rcm-driven cases in the han river basin, korea. _Hydrological Processes_, 33(16):2240-2257, 2019.
* [6] Jeong-Bae Kim, Eun-Soon Im, and Deg-Hyo Bae. Intensified hydroclimatic regime in korean basins under 1.5 and 2\({}^{\circ}\) c global warming. _International Journal of climatology_, 40(4):1965-1978, 2020.
* [7] Subhrendu Gangopadhyay, Martyn Clark, and Balaji Rajagopalan. Statistical downscaling using k-nearest neighbors. _Water Resources Research_, 41(2), 2005.
* [8] Hayley J Fowler, Stephen Blenkinsop, and Claudia Tebaldi. Linking climate change modelling to impacts studies: recent advances in downscaling techniques for hydrological modelling. _International Journal of Climatology: A Journal of the Royal Meteorological Society_, 27(12):1547-1578, 2007.
* [9] Taesam Lee and Changsam Jeong. Nonparametric statistical temporal downscaling of daily precipitation to hourly precipitation and implications for climate change scenarios. _Journal of Hydrology_, 510:182-196, 2014.
* [10] Grant Buster, Michael Rossol, Galen Maclaurin, Yu Xie, and Manajit Sengupta. A physical downscaling algorithm for the generation of high-resolution spatiotemporal solar irradiance data. _Solar Energy_, 216:508-517, 2021.
* [11] Jiaming Liu, Di Yuan, Liping Zhang, Xia Zou, and Xingyuan Song. Comparison of three statistical downscaling methods and ensemble downscaling method based on bayesian model averaging in upper hanjiang river basin, china. _Advances in Meteorology_, 2016, 2016.
* [12] Liao Yaoming, Zhang Qiang, and Chen Deliang. Stochastic modeling of daily precipitation in china. _Journal of Geographical Sciences_, 14(4):417-426, 2004.
* [13] Yaoming Liao. Change of parameters of bcc/rcg-wg for daily non-precipitation variables in china: 1951-1978 and 1979-2007. _Journal of Geographical Sciences_, 23(4):579-594, 2013.
* [14] Yonas B Dibike and Paulin Coulibaly. Hydrologic impact of climate change in the saguenay watershed: comparison of downscaling methods and hydrologic models. _Journal of hydrology_, 307(1-4):145-163, 2005.
* [15] Mohammad Sajjad Khan, Paulin Coulibaly, and Yonas Dibike. Uncertainty analysis of statistical downscaling methods. _Journal of Hydrology_, 319(1-4):357-382, 2006.
* [16] Robert L Wilby, Christian W Dawson, and Elaine M Barrow. Sdsm--a decision support tool for the assessment of regional climate change impacts. _Environmental Modelling & Software_, 17(2):145-157, 2002.
* [17] Colin Harpham and Robert L Wilby. Multi-site downscaling of heavy daily precipitation occurrence and amounts. _Journal of Hydrology_, 312(1-4):235-255, 2005.
* [18] Robert L Wilby and Michael D Dettinger. Streamflow changes in the sierra Nevada, california, simulated using a statistically downscaled general circulation model scenario of climate change. In _Linking climate change to land surface change_, pages 99-121. Springer, 2000.
* [19] Fredrik Wetterhall, Sven Halldin, and C-Y Xu. Seasonality properties of four statistical-downscaling methods in central sweden. _Theoretical and Applied Climatology_, 87(1):123-137, 2007.
* [20] Adrian E Raftery and Yingye Zheng. Discussion: Performance of bayesian model averaging. _Journal of the American Statistical Association_, 98(464):931-938, 2003.
* [21] Shivam Tripathi, VV Srinivas, and Ravi S Nanjundiah. Downscaling of precipitation for climate change scenarios: a support vector machine approach. _Journal of hydrology_, 330(3-4):621-640, 2006.
* [22] Xinying Yu and Shie-Yui Liong. Forecasting of hydrologic time series with ridge regression in feature space. _Journal of Hydrology_, 332(3-4):290-302, 2007.
* [23] Subimal Ghosh and Pradeep P Mujumdar. Statistical downscaling of gcm simulations to streamflow using relevance vector machine. _Advances in water resources_, 31(1):132-146, 2008.
* [24] Timothy C Matisziw, Tony H Grubesic, and Hu Wei. Downscaling spatial structure for the analysis of epidemiological data. _Computers, Environment and Urban Systems_, 32(1):81-93, 2008.
* [25] DGHS. Denv press relseases, 2022.
* [26] IEDCR. Dengue surveillance report, September 2021.
* [27] WHO. Covid-19 dashboard, 2022.
* [28] Rob J Hyndman and Anne B Koehler. Another look at measures of forecast accuracy. _International journal of forecasting_, 22(4):679-688, 2006.
* [29] Robert Gilmore Pontius, Olufunmilayo Thontteh, and Hao Chen. Components of information for multiple resolution comparison between maps that share a real variable. _Environmental and ecological statistics_, 15(2):111-142, 2008.
* [30] Cort J Willmott and Kenji Matsuura. On the use of dimensioned measures of error to evaluate the performance of spatial interpolators. _International Journal of Geographical Information Science_, 20(1):89-102, 2006.
* [31] Rob J Hyndman and George Athanasopoulos. _Forecasting: principles and practice_. OTexts, 2018.
* [32] Nor Azura Husin, Naomie Salim, et al. Modeling of dengue outbreak prediction in malaysia: a comparison of neural network and nonlinear regression model. In _2008 International Symposium on Information Technology_, volume 3, pages 1-4. IEEE, 2008.
* [33] Li Ping Wong, Sharina Mahavera Mohamad Shakir, Narges Atefi, and Szalay AbuBakar. Factors affecting dengue prevention practices: nationwide survey of the malaysian public. _PloS one_, 10(4):e0122890, 2015.
* [34] Liang Lu, Hualiang Lin, Linwei Tian, Weizhong Yang, Jimin Sun, and Qiyong Liu. Time series analysis of dengue fever and weather in guangzhou, china. _BMC Public Health_, 9(1):1-5, 2009.
* [35] Sunil Bhatnagar, Vivek Lal, Shiv D Gupta, Om P Gupta, et al. Forecasting incidence of dengue in rajasthan, using time series analyses. _Indian journal of public health_, 56(4):281, 2012.
* [36] S Wongkoon, M Pollar, M Jaroensutasinee, and K Jaroensutasinee. Predicting dhf incidence in northern thailand using time series analysis technique. _International Journal of Medical and Health Sciences_, 1(8):484-488, 2007.
* [37] Myriam Gharbi, Philippe Quenel, Joel Gustave, Sylvie Cassadou, Guy La Ruche, Laurent Girdary, and Laurence Marrama. Time series analysis of dengue incidence in guadeloupe, french west indies: forecasting models using climate variables as predictors. _BMC infectious diseases_, 11(1):1-13, 2011.
* [38] Claudia Torres, Samier Barguil, Miguel Melgarejo, and Andres Olarte. Fuzzy model identification of dengue epidemic in colombia based on multiresolution analysis. _Artificial intelligence in medicine_, 60(1):41-51, 2014.
* [39] Wenbiao Hu, Archie Clements, Gail Williams, and Shilu Tong. Dengue fever and el nino/southern oscillation in queensland, australia: a time series predictive model. _Occupational and environmental medicine_, 67(5):307-311, 2010.
* [40] Faruq Abdulla and Md Moyazzem Hossain. Forecasting of wheat production in kushtia district & bangladesh by arima model: An application of box-jenkin's method. _Journal of Statistics Applications & Probability_, 4(3):465, 2015.
* [41] Md Moyazzem Hossain and Faruq Abdulla. Forecasting potato production in bangladesh by arima model. _Journal of Advanced Statistics_, 1(4):191-198, 2016.
* [42] MM Hossain and F Abdulla. Jute production in bangladesh: a time series analysis. _Journal of Mathematics and Statistics_, 11(3):93-98, 2015.
* [43] Md Hossain, Faruq Abdulla, et al. Forecasting the tea production of bangladesh: Application of arima model. 2015.
* [44] M Hossian and F Abdulla. A time series analysis for the pineapple production in bangladesh. _Jahangirnagar University Journal of Science_, 38(2):49-59, 2015.
* [45] Arul Earnest, Say Beng Tan, Annelies Wilder-Smith, and David Machin. Comparing statistical models to predict dengue fever notifications. 2012:1-6, 2012.
* [46] Matthew D Eastin, Eric Delmelle, Irene Casas, Joshua Wexler, and Cameron Self. Intra-and interseasonal autoregressive prediction of dengue outbreaks using local weather and regional climate for a tropical environment in colombia. _The American journal of tropical medicine and hygiene_, 91(3):598, 2014.
* [47] Pei-Chih Wu, How-Ran Guo, Shih-Chun Lung, Chuan-Yao Lin, and Huey-Jen Su. Weather as an effective predictor for occurrence of dengue fever in taiwan. _Acta tropica_, 103(1):50-57, 2007.
* [48] Paula M Luz, Beatriz VM Mendes, Claudia T Codeco, Claudio J Struchiner, Alison P Galvani, et al. Time series analysis of dengue incidence in rio de janeiro, brazil. 2008.
* [49] Edson Zangiacomi Martinez, Elisangela Aparecida Soares da Silva, and Amaury Lelis Dal Fabbro. A sarima forecasting model to predict the number of cases of dengue in campinas, state of sao paulo, brazil. _Revista da Sociedade Brasileira de Medicina Tropical_, 44:436-440, 2011.
* [50] Jason Brownlee. _Introduction to time series forecasting with python: how to prepare data and develop models to predict the future_. Machine Learning Mastery, 2017.
* [51] Dilip Nachane and Jose G Clavel. Forecasting interest rates: a comparative assessment of some second-generation nonlinear models. _Journal of Applied Statistics_, 35(5):493-514, 2008.
* [52] Iberedem Iwok and G Udoh. A comparative study between the arima-fourier model and the wavelet model 1. _AMERICAN JOURNAAL OF SCIENTIFIC AND INDUSTRIAL RESEARCH_, 7:137-144, 12 2016.
## Appendix A Additional Figures
|
2304.05679 | Fully Conservative Difference Schemes for the Rotation-Two-Component
Camassa-Holm System with Smooth/Nonsmooth Initial Data | The rotation-two-component Camassa--Holm system, which possesses strongly
nonlinear coupled terms and high-order differential terms, tends to have
continuous nonsmooth solitary wave solutions, such as peakons, stumpons,
composite waves and even chaotic waves. In this paper an accurate semi-discrete
conservative difference scheme for the system is derived by taking advantage of
its Hamiltonian invariants. We show that the semi-discrete numerical scheme
preserves at least three discrete conservative laws: mass, momentum and energy.
Furthermore, a fully discrete finite difference scheme is proposed without
destroying anyone of the conservative laws. Combining a nonlinear iteration
process and an efficient threshold strategy, the accuracy of the numerical
scheme can be guaranteed. Meanwhile, the difference scheme can capture the
formation and propagation of solitary wave solutions with satisfying long time
behavior under the smooth/nonsmooth initial data. The numerical results reveal
a new type of asymmetric wave breaking phenomenon under the nonzero rotational
parameter. | Tong Yan, Jiwei Zhang, Qifeng Zhang | 2023-04-12T08:06:10Z | http://arxiv.org/abs/2304.05679v1 | Fully Conservative Difference Schemes for the Rotation-Two-Component Camassa-Holm System with Smooth/Nonsmooth Initial Data1
###### Abstract
The rotation-two-component Camassa-Holm system, which possesses strongly nonlinear coupled terms and high-order differential terms, tends to have continuous nonsmooth solitary wave solutions, such as peakons, stumpons, composite waves and even chaotic waves. In this paper an accurate semi-discrete conservative difference scheme for the system is derived by taking advantage of its Hamiltonian invariants. We show that the semi-discrete numerical scheme preserves at least three discrete conservation laws: mass, momentum and energy. Furthermore, a fully discrete finite difference scheme is proposed without destroying any of the conservation laws. Combining a nonlinear iteration process and an efficient threshold strategy, the accuracy of the numerical scheme can be guaranteed. Meanwhile, the difference scheme can capture the formation and propagation of solitary wave solutions with satisfactory long-time behavior under smooth/nonsmooth initial data. The numerical results reveal a new type of asymmetric wave breaking phenomenon under the nonzero rotational parameter.
R2CH system, semi-discrete scheme, discrete conservation law, peakon solution
## 1 Introduction
As one of the typical representatives of the nonlinearly dispersive partial differential equations, the Camassa-Holm (CH) equation
\[m_{t}+um_{x}+2mu_{x}=0,\quad\text{with}\quad m=u-u_{xx} \tag{1.1}\]
was proposed to simulate the evolution of shallow water waves [1]. Here \(m\) represents the momentum related to the fluid velocity \(u\). Unlike the famous KdV equation, the CH equation allows traveling wave solutions in the explicit form of
\[u(x,t)=c^{\prime}\mathrm{e}^{-|x-ct|}\]
with a sharp peak (peakons), where \(c^{\prime}\) is the amplitude and \(c\) is the wave velocity. The CH equation has at least the following remarkable properties: complete integrability, infinitely many conservation laws and a bi-Hamiltonian structure [5, 10]. In this paper, we focus on the generalized two-component case of the CH equation (1.1) given by
\[m_{t}+\sigma(um_{x}+2mu_{x})=-3(1-\sigma)uu_{x}+Au_{x}-\mu u_{xxx}-(1-2\Omega A )\rho\rho_{x}+2\Omega\rho(\rho u)_{x},\quad t\in(0,T], \tag{1.2}\]
\[\rho_{t}+(\rho u)_{x}=0,\quad t\in(0,T] \tag{1.3}\]
with the periodic boundary condition for \(x\in R\). The system (1.2)-(1.3) is called the rotation-two-component Camassa-Holm (R2CH) system.
The R2CH system was first proposed by Fan et al. [7] in 2016. It not only inherits most of the properties of the solutions of the Camassa-Holm equation, but also adds a new variable \(\rho\) to describe the height of the water waves. Moreover, it introduces a constant rotational speed \(\Omega\) (\(\Omega\in[0,1/4)\)) to characterize the effect of the Earth's rotation on shallow water waves. Therefore, the R2CH system can depict the propagation of shallow water waves more accurately.
Recalling \(m=u-u_{xx}\), the R2CH system (1.2)-(1.3) can be rewritten as
\[u_{t}-u_{xxt}-Au_{x}+3uu_{x}=\sigma(2u_{x}u_{xx}+uu_{xxx})-\mu u_{xxx}-(1-2 \Omega A)\rho\rho_{x}+2\Omega\rho(\rho u)_{x},\quad t\in(0,T], \tag{1.4}\]
\[\rho_{t}+(\rho u)_{x}=0,\quad t\in(0,T]. \tag{1.5}\]
Several significant theoretical works have been developed since its first appearance. For example, blow-up scenarios of strong solutions are discussed in [14, 20], blow-up criteria and wave-breaking phenomena are established in [13], solitary waves with singularities, like peakons and cuspons, are introduced in [2], peakon weak solutions in the distribution sense are considered in [6], the persistence properties of the system in weighted \(L^{p}\)-spaces are investigated in [17], and the effect of the Coriolis force on traveling waves is studied in [21].
As is known, the system (1.4)-(1.5) has at least three conservation laws
\[I_{1}(u,\rho)= \int_{R}\rho\mathrm{d}x, \tag{1.6}\] \[I_{2}(u,\rho)= \int_{R}(u+\Omega\rho^{2})\mathrm{d}x,\] (1.7) \[E(u,\rho)= \frac{1}{2}\int_{R}(u^{2}+u_{x}^{2}+(1-2\Omega A)\rho^{2})\mathrm{ d}x. \tag{1.8}\]
In addition, for the special case of \(\sigma=1\) and \(\Omega=0\), the system (1.4)-(1.5) has another, third-order conservation law, which is described by
\[H=\int_{R}(u^{3}+uu_{x}^{2}-Au^{2}-\mu u_{x}^{2}+u\rho^{2})\mathrm{d}x,\]
see also [3, 23]. The detailed derivation is postponed to the appendix.
While theoretical studies have been carried out in many respects, numerical works for the R2CH system (1.4)-(1.5) are still few in the literature and in urgent need of development. At present, the only available numerical scheme is a finite difference discretization based on a bilinear operator, see e.g. [19]. However, the numerical simulation in [19] only covers smooth initial values with conditional stability. In the case of nonsmooth initial data, the system is not only difficult to analyze theoretically, but its nonsmooth solutions are also difficult to capture numerically. In order to develop more accurate numerical algorithms, this paper extends the excellent ideas in [16] to solve the system (1.4)-(1.5).
Li and Vu-Quoc pointed out in [11]: _"in some areas, the ability to preserve some invariant properties of the original differential equation is a criterion to judge the success of a numerical simulation."_ Hence, one of our principal targets in deriving the numerical scheme is to preserve the intrinsic conservative structure of the system. Other targets include the study of novel wave-breaking phenomena during collision and evolution under nonsmooth initial data in long-time simulations.
The study of the high-order discrete conservation law is a very challenging subject. As Liu and Xing pointed out in [12]: _"it appears a rather difficult task to preserve all conservation laws."_ In this paper, we make a tentative attempt to simulate a third-order discrete quantity \(H^{n}\) in the last example in Section 5.3. It seems that \(H^{n}\) is very close to the conserved quantity \(H\). However, a rigorous theoretical interpretation is still lacking to determine whether the quantity \(H^{n}\) is a discrete conservation law.
The main contributions of the paper are concretely summarized as follows.
* The present difference scheme possesses at least three discrete conservation laws, enabling it to accurately capture the behaviors of solutions to the R2CH system under smooth/nonsmooth initial conditions.
* The difference scheme demonstrates, with nice resolution and for the first time, several new phenomena, including short-wave breaking and the interaction of long-wave propagation.
* The numerical accuracy of the difference scheme can be guaranteed and oscillations of nonsmooth solutions can also be effectively eliminated by a threshold technique.
* The difference scheme improves the numerical results in the literature and has potential impacts for predicting the propagation of solutions of other shallow water wave equations.
The rest of the paper is arranged as follows. In Section 2, a semi-discrete finite difference scheme is derived, which preserves the semi-discrete mass, momentum and energy. After that, a fully discrete finite difference scheme is established in Section 3, which preserves all the conservation laws in the discrete counterpart. Algorithm implementation in combination with a fixed-point iteration method is carried out in Section 4. Numerical examples including the dam-break problem, single peakon, multipeakon and peakon anti-peakon interaction problems demonstrate the discrete conservation laws and the nice evolution of the solutions in Section 5, followed by a brief conclusion in Section 6.
## 2 Semi-discrete scheme
To establish a conservative finite difference scheme for solving (1.2)-(1.3) with the periodic boundary condition on the computational domain \([0,L]\), we first discretize the system in space and derive a spatial semi-discrete scheme, which is shown to preserve conservation laws. Let \(h=L/M\) be the spatial stepsize for a given integer \(M\) and define the spatial grid points by \(x_{i}=ih\), \(i=0,1,\cdots,M\). Denote \(u_{i}(t)=u(x_{i},t)\) and \(m_{i}(t)=m(x_{i},t)\). On this basis, we propose the following semi-discrete conservative finite difference scheme:
\[\frac{\mathrm{d}}{\mathrm{d}t}m_{i}(t)+\frac{\sigma}{2h}\big{[} (m_{i+1}(t)u_{i+1}(t)-m_{i-1}(t)u_{i-1}(t))+m_{i}(t)(u_{i+1}(t)-u_{i-1}(t))\big{]}\] \[= -\frac{3(1-\sigma)}{4h}\big{[}u_{i+1}^{2}(t)-u_{i-1}^{2}(t) \big{]}+\frac{A}{2h}\big{[}u_{i+1}(t)-u_{i-1}(t)\big{]}\] \[-\frac{\mu}{2h^{3}}\big{[}u_{i+2}(t)-2u_{i+1}(t)+2u_{i-1}(t)-u_{i- 2}(t)\big{]}-\frac{(1-2\Omega A)}{4h}\big{[}\rho_{i+1}^{2}(t)-\rho_{i-1}^{2}(t )\big{]}\] \[+\frac{\Omega}{2h}\rho_{i}(t)\big{[}(u_{i+1}(t)+u_{i}(t))(\rho_{i +1}(t)+\rho_{i}(t))-(u_{i-1}(t)+u_{i}(t))(\rho_{i-1}(t)+\rho_{i}(t))\big{]}, \tag{2.1}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\rho_{i}(t)+\frac{1}{4h}\big{[}(u_{ i+1}(t)+u_{i}(t))(\rho_{i+1}(t)+\rho_{i}(t))-(u_{i}(t)+u_{i-1}(t))(\rho_{i}(t)+ \rho_{i-1}(t))\big{]}= 0,\] (2.2) \[m_{i}(t)=u_{i}(t)-\frac{1}{2h}\big{[}v_{i+1}(t)-v_{i-1}(t)\big{]}, \tag{2.3}\]
where \(v_{i}(t)=\frac{1}{2h}\big{[}u_{i+1}(t)-u_{i-1}(t)\big{]}\) and \(i=1,2,\cdots,M\).
With the periodicity of the boundary conditions for discrete mesh functions \(u_{i}\) and \(\rho_{i}\), we have
\[u_{i}=u_{i+M},\quad\rho_{i}=\rho_{i+M},\quad i=0,1,\cdots,M-1.\]
Moreover, the periodicity of \(u\) ensures the periodicity of the intermediate variable \(m\). Throughout the whole paper, we omit the spatial stepsize \(h\) in the semi-discrete and discrete conservation laws. On this basis, we define the semi-discrete mass, momentum, and
energy as follows
\[I_{1}(t) = \sum_{i=1}^{M}\rho_{i}(t), \tag{2.4}\] \[I_{2}(t) = \sum_{i=1}^{M}\left[u_{i}(t)+\Omega\rho_{i}^{2}(t)\right],\] (2.5) \[E(t) = \frac{1}{2}\sum_{i=1}^{M}\left[u_{i}^{2}(t)+\left(\frac{u_{i+1}(t )-u_{i-1}(t)}{2h}\right)^{2}+(1-2\Omega A)\rho_{i}^{2}(t)\right]. \tag{2.6}\]
Therefore, we show that the semi-discrete scheme (2.1)-(2.3) is conservative in the following sense.
**Theorem 2.1**.: _Consider the semi-discrete scheme (2.1)-(2.3) of the R2CH system (1.2)-(1.3). Then the semi-discrete mass, momentum, and the energy with \(\sigma=1\) are conservative for the R2CH system (1.2)-(1.3) in the following sense_
\[\frac{\mathrm{d}}{\mathrm{d}t}I_{1}(t)=0,\quad\frac{\mathrm{d}}{\mathrm{d}t}I _{2}(t)=0,\quad\frac{\mathrm{d}}{\mathrm{d}t}E(t)=0.\]
Proof.: **(I)**. In combination of (2.2) and (2.4), we have
\[\frac{\mathrm{d}}{\mathrm{d}t}I_{1}(t)=\sum_{i=1}^{M}\frac{\mathrm{d}}{ \mathrm{d}t}\rho_{i}(t)=\sum_{i=1}^{M}\frac{(u_{i-1}+u_{i})(\rho_{i-1}+\rho_{i })-(u_{i+1}+u_{i})(\rho_{i+1}+\rho_{i})}{4h}=0,\]
where the periodicity of the boundary conditions for \(u\) and \(\rho\) is used.
**(II)**. Similarly, we have
\[\sum_{i=1}^{M}\frac{m_{i+1}(t)u_{i+1}(t)-m_{i-1}(t)u_{i-1}(t)}{2h }=0,\] \[\sum_{i=1}^{M}m_{i}(t)\frac{u_{i+1}(t)-u_{i-1}(t)}{2h}=0,\] \[\sum_{i=1}^{M}\frac{u_{i+1}^{2}(t)-u_{i-1}^{2}(t)}{4h}=0,\] \[\sum_{i=1}^{M}\frac{u_{i+1}(t)-u_{i-1}(t)}{2h}=0,\] \[\sum_{i=1}^{M}\frac{u_{i+2}(t)-2u_{i+1}(t)+2u_{i-1}(t)-u_{i-2}(t )}{2h^{3}}=0,\] \[\sum_{i=1}^{M}\frac{\rho_{i+1}^{2}(t)-\rho_{i-1}^{2}(t)}{4h}=0.\]
Thus, summing over \(i\) from \(1\) to \(M\) on both sides of (2.1), we have
\[\sum_{i=1}^{M}\frac{\mathrm{d}}{\mathrm{d}t}m_{i}(t)=2\Omega\sum_{i=1}^{M}\rho_{i }(t)\frac{(u_{i+1}(t)+u_{i}(t))(\rho_{i+1}(t)+\rho_{i}(t))-(u_{i-1}(t)+u_{i}(t)) (\rho_{i-1}(t)+\rho_{i}(t))}{4h}. \tag{2.7}\]
Taking the derivative on both sides of (2.3) with respect to \(t\) and summing over \(i\) from \(1\) to \(M\), we have
\[\sum_{i=1}^{M}\frac{\mathrm{d}}{\mathrm{d}t}m_{i}(t)=\sum_{i=1}^{M}\left(\frac {\mathrm{d}}{\mathrm{d}t}u_{i}(t)-\frac{v_{i+1}^{\prime}(t)-v_{i-1}^{\prime}(t )}{2h}\right)=\sum_{i=1}^{M}\frac{\mathrm{d}}{\mathrm{d}t}u_{i}(t). \tag{2.8}\]
Using (2.2), (2.7) and (2.8), we have
\[\frac{\mathrm{d}}{\mathrm{d}t}I_{2}(t) = \sum_{i=1}^{M}\frac{\mathrm{d}}{\mathrm{d}t}u_{i}(t)+2\Omega\sum_ {i=1}^{M}\rho_{i}(t)\frac{\mathrm{d}}{\mathrm{d}t}\rho_{i}(t)\] \[= 2\Omega\sum_{i=1}^{M}\rho_{i}(t)\frac{(u_{i+1}(t)+u_{i}(t))(\rho _{i+1}(t)+\rho_{i}(t))-(u_{i-1}(t)+u_{i}(t))(\rho_{i-1}(t)+\rho_{i}(t))}{4h}\] \[+2\Omega\sum_{i=1}^{M}\rho_{i}(t)\frac{(u_{i-1}(t)+u_{i}(t))(\rho _{i-1}(t)+\rho_{i}(t))-(u_{i+1}(t)+u_{i}(t))(\rho_{i+1}(t)+\rho_{i}(t))}{4h}\] \[= 0.\]
(**III**). Finally, we show that \(\frac{\mathrm{d}}{\mathrm{d}t}E(t)=0\). By observing (2.3), we have
\[\frac{\mathrm{d}}{\mathrm{d}t}E(t) = \sum_{i=1}^{M}\left[u_{i}(t)u_{i}^{\prime}(t)+v_{i}(t)v_{i}^{ \prime}(t)+(1-2\Omega A)\rho_{i}(t)\rho_{i}^{\prime}(t)\right] \tag{2.9}\] \[= \sum_{i=1}^{M}\left[u_{i}(t)\left(u_{i}^{\prime}(t)-\frac{v_{i+1 }^{\prime}(t)-v_{i-1}^{\prime}(t)}{2h}\right)+(1-2\Omega A)\rho_{i}(t)\rho_{i} ^{\prime}(t)\right]\] \[= \sum_{i=1}^{M}u_{i}(t)m_{i}^{\prime}(t)+(1-2\Omega A)\sum_{i=1}^ {M}\rho_{i}(t)\rho_{i}^{\prime}(t).\]
Noticing that
\[\sum_{i=1}^{M}u_{i}(t)\Big{(}\frac{m_{i+1}(t)u_{i+1}(t)-m_{i-1}(t )u_{i-1}(t)}{2h}+m_{i}(t)\frac{u_{i+1}(t)-u_{i-1}(t)}{2h}\Big{)}\] \[= \sum_{i=1}^{M}\Big{(}\frac{m_{i+1}(t)u_{i+1}(t)u_{i}(t)-m_{i-1}( t)u_{i-1}(t)u_{i}(t)}{2h}+m_{i}(t)u_{i}(t)v_{i}(t)\Big{)}\] \[= \sum_{i=1}^{M}\Big{(}\frac{m_{i}(t)u_{i}(t)u_{i-1}(t)-m_{i}(t)u_ {i}(t)u_{i+1}(t)}{2h}+m_{i}(t)u_{i}(t)v_{i}(t)\Big{)}\] \[= \sum_{i=1}^{M}\Big{(}-m_{i}(t)u_{i}(t)v_{i}(t)+m_{i}(t)u_{i}(t)v _{i}(t)\Big{)}=0. \tag{2.10}\]
Then it follows from (2.1), (2.2) and (2.10) that the right-hand side of (2.9) vanishes:
\[\sum_{i=1}^{M}u_{i}(t)m_{i}^{\prime}(t)+(1-2\Omega A)\sum_{i=1}^{M} \rho_{i}(t)\rho_{i}^{\prime}(t)\] \[= -\frac{\sigma}{2h}\sum_{i=1}^{M}u_{i}(t)\Big{[}\big{(}m_{i+1}(t)u _{i+1}(t)-m_{i-1}(t)u_{i-1}(t)\big{)}+m_{i}(t)\big{(}u_{i+1}(t)-u_{i-1}(t)\big{)} \Big{]}\] \[-\frac{3(1\!-\!\sigma)}{4h}\sum_{i=1}^{M}\Big{[}u_{i}(t)(u_{i+1}^ {2}(t)\!-\!u_{i-1}^{2}(t))\Big{]}\!+\!\frac{A}{2h}\sum_{i=1}^{M}\Big{[}u_{i}(t) (u_{i+1}(t)\!-\!u_{i-1}(t))\Big{]}\] \[-\frac{\mu}{2h^{3}}\sum_{i=1}^{M}u_{i}(t)\Big{[}u_{i+2}(t)-2u_{i+ 1}(t)\!+\!2u_{i-1}(t)\!-\!u_{i-2}(t)\Big{]}\] \[-\frac{(1\!-\!2\Omega A)}{4h}\sum_{i=1}^{M}u_{i}(t)\Big{[}\rho_{ i+1}^{2}(t)\!-\!\rho_{i-1}^{2}(t)\Big{]}\] \[+2\Omega\!\sum_{i=1}^{M}u_{i}(t)\rho_{i}(t)\frac{(u_{i+1}(t)\!+\! u_{i}(t))(\rho_{i+1}(t)\!+\!\rho_{i}(t))\!-\!(u_{i-1}(t)\!+\!u_{i}(t))(\rho_{i-1}(t) \!+\!\rho_{i}(t))}{4h}\] \[+(1\!-\!2\Omega A)\!\sum_{i=1}^{M}\!\rho_{i}(t)\frac{(u_{i-1}(t) \!+\!u_{i}(t))(\rho_{i-1}(t)\!+\!\rho_{i}(t))\!-\!(u_{i+1}(t)\!+\!u_{i}(t))( \rho_{i+1}(t)\!+\!\rho_{i}(t))}{4h}\] \[= 0,\]
which completes the proof.
## 3 Fully discrete difference scheme
In the previous section, a semi-discrete finite difference scheme was established for the R2CH system (1.2)-(1.3). Below, a new temporal discretization is presented to guarantee accurate long-time calculation without destroying the conservation properties of the original system. To this end, let \(\tau=T/N\) be the time stepsize and \(t^{n}=n\tau\) (\(n=0,1,\cdots,N\)) the partition of the time variable for a given integer \(N\). Denoting \(m_{i}^{n}=m(x_{i},t^{n})\), \(u_{i}^{n}=u(x_{i},t^{n})\), \(\rho_{i}^{n}=\rho(x_{i},t^{n})\), then (2.1)-(2.3) can be discretized implicitly by
\[\frac{m_{i}^{*}\!-\!m_{i}^{n}}{\tau/2}+\frac{\sigma}{2h}\big{[}( m_{i+1}^{*}u_{i+1}^{*}\!-\!m_{i-1}^{*}u_{i-1}^{*})+m_{i}^{*}(u_{i+1}^{*}\!-\!u_{i-1 }^{*})\big{]}\] \[= -\frac{3(1\!-\!\sigma)}{4h}\big{[}(u_{i+1}^{*})^{2}\!-\!(u_{i-1}^ {*})^{2}\big{]}+\frac{A}{2h}\big{(}u_{i+1}^{*}\!-\!u_{i-1}^{*}\big{)}\] \[-\frac{\mu}{2h^{3}}\big{(}u_{i+2}^{*}\!-\!2u_{i+1}^{*}\!+\!2u_{i- 1}^{*}\!-\!u_{i-2}^{*}\big{)}-\frac{(1\!-\!2\Omega A)}{4h}\big{[}(\rho_{i+1}^ {*})^{2}\!-\!(\rho_{i-1}^{*})^{2}\big{]}\] \[+\frac{\Omega}{2h}\rho_{i}^{*}\big{[}(u_{i+1}^{*}\!+\!u_{i}^{*})( \rho_{i+1}^{*}\!+\!\rho_{i}^{*})\!-\!(u_{i-1}^{*}\!+\!u_{i}^{*})(\rho_{i-1}^{*} \!+\!\rho_{i}^{*})\big{]}, \tag{3.1}\] \[\frac{\rho_{i}^{*}\!-\!\rho_{i}^{n}}{\tau/2}+\frac{1}{4h}\big{[}( u_{i+1}^{*}\!+\!u_{i}^{*})\big{(}\rho_{i+1}^{*}\!+\!\rho_{i}^{*})\!-\!(u_{i-1}^{*} \!+\!u_{i}^{*})(\rho_{i-1}^{*}\!+\!\rho_{i}^{*})\big{]}\!=\!0, \tag{3.2}\]
\[m_{i}^{n}=u_{i}^{n}-\frac{1}{2h}\big{(}v_{i+1}^{n}-v_{i-1}^{n}\big{)},\quad v_{i}^{n}=\frac{1}{2h}\big{(}u_{i+1}^{n}-u_{i-1}^{n}\big{)}, \tag{3.3}\] \[u_{i}^{n+1}=2u_{i}^{*}-u_{i}^{n},\quad m_{i}^{n+1}=2m_{i}^{*}-m_{i }^{n},\quad\rho_{i}^{n+1}=2\rho_{i}^{*}-\rho_{i}^{n}, \tag{3.4}\]
where \(i=1,\cdots,M\) and \(n=0,1,\cdots,N-1\). Thus, the discrete mass, momentum and energy are defined as
\[I_{1}^{n}=\sum_{i=1}^{M}\rho_{i}^{n}, \tag{3.5}\] \[I_{2}^{n}=\sum_{i=1}^{M}\big{[}u_{i}^{n}+\Omega(\rho_{i}^{n})^{2 }\big{]},\] (3.6) \[E^{n}=\frac{1}{2}\sum_{i=1}^{M}\bigg{[}(u_{i}^{n})^{2}+\Big{(} \frac{u_{i+1}^{n}-u_{i-1}^{n}}{2h}\Big{)}^{2}+(1-2\Omega A)(\rho_{i}^{n})^{2} \bigg{]}. \tag{3.7}\]
The newly established discrete scheme will be shown to preserve the above three conserved quantities for the original system (1.4)-(1.5). Then we have the following theorem.
**Theorem 3.1**.: _Consider the difference scheme (3.1)-(3.4) of the R2CH system (1.2)-(1.3). Then the discrete mass, momentum, and energy with \(\sigma=1\) are conservative for the R2CH system (1.2)-(1.3) in the following sense_
\[I_{1}^{n+1}=I_{1}^{n},\quad I_{2}^{n+1}=I_{2}^{n},\quad E^{n+1}=E^{n}.\]
Proof.: **(I)**. Firstly, we show that \(I_{1}^{n+1}=I_{1}^{n}\). Noticing (3.2) and (3.4), it readily knows that
\[\frac{\rho_{i}^{n+1}-\rho_{i}^{n}}{\tau}+\frac{(u_{i+1}^{*}+u_{i}^{*})(\rho_{ i+1}^{*}+\rho_{i}^{*})-(u_{i-1}^{*}+u_{i}^{*})(\rho_{i-1}^{*}+\rho_{i}^{*})}{4h}=0. \tag{3.8}\]
Summing (3.8) with respect to \(i\) from \(1\) to \(M\) and utilizing the periodicity, we have
\[\sum_{i=1}^{M}\Big{[}\frac{\rho_{i}^{n+1}-\rho_{i}^{n}}{\tau}+ \frac{(u_{i+1}^{*}+u_{i}^{*})(\rho_{i+1}^{*}+\rho_{i}^{*})-(u_{i-1}^{*}+u_{i}^ {*})(\rho_{i-1}^{*}+\rho_{i}^{*})}{4h}\Big{]}\] \[= \sum_{i=1}^{M}\frac{\rho_{i}^{n+1}-\rho_{i}^{n}}{\tau}=\frac{I_{1 }^{n+1}-I_{1}^{n}}{\tau}=0,\]
which implies \(I_{1}^{n+1}=I_{1}^{n}\).
**(II)**. Next we show \(I_{2}^{n+1}=I_{2}^{n}\). In combination of (3.1) and (3.3), and noticing the periodicity, we have
\[\sum_{i=1}^{M}\frac{u_{i}^{n+1}-u_{i}^{n}}{\tau}\] \[= \sum_{i=1}^{M}\frac{m_{i}^{n+1}-m_{i}^{n}}{\tau}+\sum_{i=1}^{M} \frac{(v_{i+1}^{n+1}-v_{i-1}^{n+1})-(v_{i+1}^{n}-v_{i-1}^{n})}{2h\tau}\] \[= \,\Omega\sum_{i=1}^{M}\rho_{i}^{*}\frac{(u_{i+1}^{*}+u_{i}^{*})( \rho_{i+1}^{*}+\rho_{i}^{*})-(u_{i-1}^{*}+u_{i}^{*})(\rho_{i-1}^{*}+\rho_{i}^{* })}{2h}. \tag{3.9}\]
Again using (3.8), we find that
\[\sum_{i=1}^{M}\frac{(\rho_{i}^{n+1})^{2}-(\rho_{i}^{n})^{2}}{\tau}=2 \sum_{i=1}^{M}\frac{\rho_{i}^{n+1}+\rho_{i}^{n}}{2}\cdot\frac{\rho_{i}^{n+1}- \rho_{i}^{n}}{\tau}\] \[= \sum_{i=1}^{M}\rho_{i}^{*}\frac{(u_{i-1}^{*}+u_{i}^{*})(\rho_{i-1} ^{*}+\rho_{i}^{*})-(u_{i+1}^{*}+u_{i}^{*})(\rho_{i+1}^{*}+\rho_{i}^{*})}{2h}. \tag{3.10}\]
Combining (3.9) with (3.10), we have
\[\frac{I_{2}^{n+1}-I_{2}^{n}}{\tau}=\sum_{i=1}^{M}\frac{u_{i}^{n+1}-u_{i}^{n}}{ \tau}+\Omega\sum_{i=1}^{M}\frac{(\rho_{i}^{n+1})^{2}-(\rho_{i}^{n})^{2}}{\tau }=0,\]
which indicates that \(I_{2}^{n+1}=I_{2}^{n}\).
**(III)**. Finally, we prove \(E^{n+1}=E^{n}\). Using (2.10), we have
\[\sum_{i=1}^{M}u_{i}^{*}\left(\frac{m_{i+1}^{*}u_{i+1}^{*}-m_{i-1}^{*}u_{i-1}^{ *}}{2h}+m_{i}^{*}\frac{u_{i+1}^{*}-u_{i-1}^{*}}{2h}\right)=0.\]
Using the periodicity, we have
\[\sum_{i=1}^{M}\rho_{i}^{*}\frac{(u_{i-1}^{*}+u_{i}^{*})(\rho_{i-1}^{*}+\rho_{i }^{*})-(u_{i+1}^{*}+u_{i}^{*})(\rho_{i+1}^{*}+\rho_{i}^{*})}{4h}=\sum_{i=1}^{M }(\rho_{i}^{*})^{2}\frac{u_{i-1}^{*}-u_{i+1}^{*}}{4h}. \tag{3.11}\]
Multiplying (3.1) with \(u_{i}^{*}\) and summing over \(i\) from \(1\) to \(M\), and rearranging the corresponding result, we have
\[0= \sum_{i=1}^{M}\frac{m_{i}^{n+1}-m_{i}^{n}}{\tau}u_{i}^{*}+(1-2 \Omega A)\sum_{i=1}^{M}\frac{(\rho_{i+1}^{*})^{2}-(\rho_{i-1}^{*})^{2}}{4h}u_{ i}^{*}\] \[= \sum_{i=1}^{M}\left(\frac{u_{i}^{n+1}-u_{i}^{n}}{\tau}u_{i}^{*}- \frac{v_{i+1}^{n+1}-v_{i-1}^{n+1}-v_{i+1}^{n}+v_{i-1}^{n}}{2h\tau}u_{i}^{*} \right)+(1-2\Omega A)\sum_{i=1}^{M}(\rho_{i}^{*})^{2}\frac{u_{i-1}^{*}-u_{i+1} ^{*}}{4h}\] \[= \sum_{i=1}^{M}\left[\frac{u_{i}^{n+1}-u_{i}^{n}}{\tau}u_{i}^{*}+ \left(\frac{v_{i}^{n+1}-v_{i}^{n}}{\tau}\right)\left(\frac{u_{i+1}^{*}-u_{i-1} ^{*}}{2h}\right)\right]\] \[+(1-2\Omega A)\sum_{i=1}^{M}\rho_{i}^{*}\frac{(u_{i-1}^{*}+u_{i}^{ *})(\rho_{i-1}^{*}+\rho_{i}^{*})-(u_{i+1}^{*}+u_{i}^{*})(\rho_{i+1}^{*}+\rho_{i }^{*})}{4h}\] \[= \sum_{i=1}^{M}\left[\left(\frac{u_{i}^{n+1}-u_{i}^{n}}{\tau} \right)\left(\frac{u_{i}^{n+1}+u_{i}^{n}}{2}\right)+\left(\frac{v_{i}^{n+1}-v _{i}^{n}}{\tau}\right)\left(\frac{v_{i}^{n+1}+v_{i}^{n}}{2}\right)\right]\] \[+(1-2\Omega A)\sum_{i=1}^{M}\frac{\rho_{i}^{n+1}+\rho_{i}^{n}}{2} \cdot\frac{\rho_{i}^{n+1}-\rho_{i}^{n}}{\tau}\] \[= \frac{E^{n+1}-E^{n}}{\tau},\]
where the periodicity, (3.4) and \(\sigma{=}1\) are used in the first equality, the periodicity and (3.3) are used in the second equality, the periodicity and (3.11) are used in the third equality, and (3.8) is used in the penultimate equality. This completes the proof.
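In practice, the three discrete quantities (3.5)-(3.7) can be evaluated after every time step to monitor conservation numerically. A minimal NumPy sketch is given below; periodic indexing is handled with `np.roll`, and the test profile at the end is only an illustration, not one of the paper's test cases:

```python
# Discrete mass, momentum and energy (3.5)-(3.7) on a periodic grid of M points.
import numpy as np

def discrete_invariants(u, rho, h, Omega, A):
    v = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)          # central difference (periodic)
    I1 = np.sum(rho)
    I2 = np.sum(u + Omega * rho**2)
    E = 0.5 * np.sum(u**2 + v**2 + (1.0 - 2.0 * Omega * A) * rho**2)
    return I1, I2, E

# Illustrative periodic profile.
M, L = 256, 30.0
h = L / M
x = np.arange(M) * h
u = np.exp(np.sin(2 * np.pi * x / L))
rho = 1.0 + 0.1 * np.cos(2 * np.pi * x / L)
print(discrete_invariants(u, rho, h, Omega=73e-6, A=1.0))
```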
## 4 Algorithm implementation
Denote \(\boldsymbol{u}{=}(u_{1},u_{2},\cdots,u_{M})^{T}\), \(\boldsymbol{m}{=}(m_{1},m_{2},\cdots,m_{M})^{T}\) and \(\boldsymbol{\rho}{=}(\rho_{1},\rho_{2},\cdots,\rho_{M})^{T}\). Recalling (3.3), we have a linear system of equations \(\boldsymbol{m}=\boldsymbol{B}\boldsymbol{u}\), where \(\boldsymbol{B}\) is a symmetric circulant matrix defined by
\[\boldsymbol{B}{=}\,\mathbf{circ}(c[0],c[1],\cdots,c[M{-}1])\]
with \(c[0]=1+\frac{1}{2h^{2}}\), \(c[1]=0\), \(c[2]=-\frac{1}{4h^{2}}\), \(c[3]=\cdots=c[M-3]=0\), \(c[M-2]=-\frac{1}{4h^{2}}\) and \(c[M-1]=0\). Obviously, \(\boldsymbol{B}\) is a diagonally dominant matrix and hence invertible, which allows us to express the vector \(\boldsymbol{u}\) in terms of \(\boldsymbol{m}\) as \(\boldsymbol{u}=\boldsymbol{B}^{-1}\boldsymbol{m}\). To avoid ambiguity, we let \((\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i}\) denote the \(i\)th element of the vector. Consequently, (3.1)-(3.4) can be rewritten equivalently in terms of \(m^{*}\) as follows
\[\frac{m_{i}^{*}-m_{i}^{n}}{\tau/2}+\frac{\sigma}{2h}\big{[}(m_{i+ 1}^{*}(\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i+1}-m_{i-1}^{*}(\boldsymbol{ B}^{-1}\boldsymbol{m}^{*})_{i-1})+m_{i}^{*}((\boldsymbol{B}^{-1}\boldsymbol{m}^{*}) _{i+1}-(\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i-1})\big{]}\] \[\quad{=}-\frac{3(1{-}\sigma)}{4h}\big{[}(\boldsymbol{B}^{-1} \boldsymbol{m}^{*})_{i+1}^{2}{-}(\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i-1} ^{2}\big{]}+\frac{A}{2h}\big{[}(\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i+1} -(\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i-1}\big{]}\] \[\quad{-}\frac{\mu}{2h^{3}}\big{[}(\boldsymbol{B}^{-1}\boldsymbol {m}^{*})_{i+2}{-}2(\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i+1}{+}2( \boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i-1}{-}(\boldsymbol{B}^{-1} \boldsymbol{m}^{*})_{i-2}\big{]}\] \[\quad{-}\frac{(1{-}2\Omega A)}{4h}\big{[}(\rho_{i+1}^{*})^{2}{-} (\rho_{i-1}^{*})^{2}\big{]}+\frac{\Omega}{2h}\rho_{i}^{*}\big{[}((\boldsymbol{ B}^{-1}\boldsymbol{m}^{*})_{i+1}+(\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i}) \big{(}\rho_{i+1}^{*}+\rho_{i}^{*})\] \[\quad{-}((\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i-1}{+}( \boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i})(\rho_{i-1}^{*}{+}\rho_{i}^{*}) \big{]}\,, \tag{4.1}\] \[\frac{\rho_{i}^{*}-\rho_{i}^{n}}{\tau/2}+\frac{1}{4h}\big{[}(( \boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i+1}{+}(\boldsymbol{B}^{-1}\boldsymbol {m}^{*})_{i})(\rho_{i+1}^{*}{+}\rho_{i}^{*})\] \[\quad{-}((\boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i-1}{+}( \boldsymbol{B}^{-1}\boldsymbol{m}^{*})_{i})(\rho_{i-1}^{*}{+}\rho_{i}^{*}) \big{]}{=}\,0,\] (4.2) \[m_{i}^{n+1}{=}\,2m_{i}^{*}-m_{i}^{n}\,,\quad\rho_{i}^{n+1}{=}\,2 \rho_{i}^{*}-\rho_{i}^{n}\,. \tag{4.3}\]
At present, we have a nonlinear system of equations with variables \(\boldsymbol{m}^{*}\) and \(\boldsymbol{\rho}^{*}\) only, which can be solved by a fixed point iteration method.
The algorithm flow chart of the scheme (4.1)-(4.3) is given as follows. Specifically, to obtain the solutions at the \((n+1)\)th time level, we suppose that \(\boldsymbol{u}^{n,l}=\left(u_{1}^{n,l},u_{2}^{n,l},\cdots,u_{M}^{n,l}\right)^{T}\), \(\boldsymbol{\rho}^{n,l}=\left(\rho_{1}^{n,l},\rho_{2}^{n,l},\cdots,\rho_{M}^{n,l}\right)^{T}\) and \(\boldsymbol{m}^{n,l}=\left(m_{1}^{n,l},m_{2}^{n,l},\cdots,m_{M}^{n,l}\right)^{T}\) have been determined; then **Algorithm 1** is utilized to solve the scheme (4.1)-(4.3).
**Algorithm 1**: The iterative process for solving (4.1)-(4.3)

1. Set the tolerance error _tol_.
2. Give the initial guess \(\mathbf{u}^{*,l}:=\mathbf{u}^{n,l}\).
3. For \(n=1,2,\cdots,N\) Do:
4. Compute \(\mathbf{\rho}^{*,l+1}\), \(\mathbf{m}^{*,l+1}\), then obtain \(\mathbf{u}^{*,l+1}\).
5. EndDo
6. If \(\|\mathbf{u}^{*,l}-\mathbf{u}^{*,l+1}\|_{\infty}>tol\), set \(\mathbf{u}^{*,l}:=\mathbf{u}^{*,l+1}\) and GoTo 2;
7. else, set \(\mathbf{u}^{n+1}:=2\mathbf{u}^{*,l+1}-\mathbf{u}^{n,l}\), \(\mathbf{\rho}^{n+1}:=2\mathbf{\rho}^{*,l+1}-\mathbf{\rho}^{n,l}\), \(\mathbf{m}^{n+1}=\mathbf{B}\mathbf{u}^{n+1}\).
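As a sanity check of the reformulation above, the circulant matrix \(\boldsymbol{B}\) can be assembled and tested numerically. The short NumPy/SciPy sketch below verifies that \(\boldsymbol{B}\boldsymbol{u}\) reproduces \(m_{i}=u_{i}-\frac{1}{2h}(v_{i+1}-v_{i-1})\) with \(v_{i}=\frac{1}{2h}(u_{i+1}-u_{i-1})\) on a periodic grid and recovers \(\boldsymbol{u}=\boldsymbol{B}^{-1}\boldsymbol{m}\); the nonlinear update inside Algorithm 1 is problem-specific and not reproduced here:

```python
# Circulant operator B from Section 4 and the map u = B^{-1} m.
import numpy as np
from scipy.linalg import circulant

M, L = 128, 30.0
h = L / M
c = np.zeros(M)
c[0] = 1.0 + 1.0 / (2.0 * h**2)
c[2] = c[M - 2] = -1.0 / (4.0 * h**2)
B = circulant(c)                                    # symmetric and diagonally dominant

x = np.arange(M) * h
u = np.exp(np.sin(2 * np.pi * x / L))               # arbitrary periodic test function
v = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)
m = u - (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * h)

print(np.max(np.abs(B @ u - m)))                    # agreement up to round-off
print(np.max(np.abs(np.linalg.solve(B, m) - u)))    # u recovered from m
```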
**Remark 4.1**.: If the solution of the system (1.4)-(1.5) is less regular, numerical oscillations will occur during the calculation. To eliminate this phenomenon, an efficient strategy of adding local numerical viscosity is utilized. The following two viscous terms
\[R_{i}^{u}=\frac{\epsilon_{i}^{u}}{2h}(u_{i+1}^{*}-2u_{i}^{*}+u_{i-1}^{*}), \quad\text{and}\quad R_{i}^{\rho}=\frac{\epsilon_{i}^{\rho}}{2h}(\rho_{i+1}^{* }-2\rho_{i}^{*}+\rho_{i-1}^{*}),\]
where
\[\epsilon_{i}^{u}=\left\{\begin{array}{ll}1,&|u_{i+1}^{*}-2u_{i}^{*}+u_{i-1}^ {*}|\geqslant\epsilon h,\\ 0,&\text{otherwise},\end{array}\right.\quad\text{and}\quad\epsilon_{i}^{\rho}= \left\{\begin{array}{ll}1,&|\rho_{i+1}^{*}-2\rho_{i}^{*}+\rho_{i-1}^{*}| \geqslant\epsilon h,\\ 0,&\text{otherwise}.\end{array}\right.\]
are added to the right-hand side of (3.1) and (3.2), respectively. The factor \(\varepsilon\) above is a threshold (usually very small, e.g., \(\varepsilon=10^{-5}\)) which determines where the slope of the solution becomes unbounded. It is worth mentioning that the factor \(\varepsilon\) varies from case to case in the simulation of the discontinuous solutions in Section 5.2 below.
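A short sketch of this switch is given below: the viscous term is activated only at grid points where the discrete second difference exceeds \(\varepsilon h\). Periodic indexing is assumed, and the same function applies to both \(u^{*}\) and \(\rho^{*}\):

```python
# Threshold-based local numerical viscosity of Remark 4.1.
import numpy as np

def local_viscosity(w, h, eps=1e-5):
    d2 = np.roll(w, -1) - 2.0 * w + np.roll(w, 1)        # second difference, periodic
    switch = (np.abs(d2) >= eps * h).astype(float)       # epsilon_i^u or epsilon_i^rho in {0, 1}
    return switch * d2 / (2.0 * h)                       # R_i, added to the right-hand side of (3.1)/(3.2)
```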
## 5 Numerical tests
In this section, several benchmark examples are provided to verify the convergence, the conservation laws and the overall performance of the proposed scheme (3.1)-(3.4). To test the convergence orders, a posterior error estimate is utilized in the temporal and spatial directions. To be more specific, for sufficiently small \(\tau\), we denote
\[\|\mathrm{F}_{u}(h)\|_{\infty}=\max_{1\leqslant i\leqslant M,1 \leqslant n\leqslant N}\left|u_{i}^{n}(h,\tau)-u_{2i}^{n}(h/2,\tau)\right|, \quad\mathrm{Ord}_{\infty}^{h}=\log_{2}\frac{\|\mathrm{F}_{u}(h)\|_{\infty}}{ \|\mathrm{F}_{u}(h/2)\|_{\infty}},\]
and for sufficiently small \(h\), we denote
\[\|\mathrm{G}_{u}(\tau)\|_{\infty}=\max_{1\leqslant i\leqslant M,1 \leqslant n\leqslant N}\left|u_{i}^{n}(h,\tau)-u_{i}^{2n}(h,\tau/2)\right|, \quad\mathrm{Ord}_{\infty}^{\tau}=\log_{2}\frac{\|\mathrm{G}_{u}(\tau)\|_{ \infty}}{\|\mathrm{G}_{u}(\tau/2)\|_{\infty}}.\]
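The observed orders reported in Tables 1-4 are then simply base-2 logarithms of ratios of successive posterior errors, e.g.:

```python
# Posterior convergence order from errors on two successive grids.
import numpy as np

def observed_order(err_coarse, err_fine):
    return np.log2(err_coarse / err_fine)

print(observed_order(9.0981e-4, 2.2919e-4))   # ~1.99, cf. the first two rows of Table 1, Case (I)
```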
### Part I: smooth initial data
**Example 5.1** (**Dam-break problem [16, 19])**.: The R2CH system with the smooth initial data
\[u(x,0)=0,\quad\rho(x,0)=1+\tanh(x+a)-\tanh(x-a)\]
is considered, where \(a\) is a dam-break parameter. The exact solution of this problem is unknown.
To test the convergence orders and conservation, we consider the following four cases:
* **Case** (I). \(a=0.1\), \(A=0\), \(\mu=0\), \(\sigma=1\), \(\Omega=0\), on the domain \([-6,6]\times[0,20]\).
* **Case** (II). \(a=4\), \(A=0\), \(\mu=0\), \(\sigma=1\), \(\Omega=0\), on the domain \([-12\pi,12\pi]\times[0,2]\).
* **Case** (III). \(a=0.1\), \(A=0.1\), \(\mu=0.1\), \(\sigma=1\), \(\Omega=73\times 10^{-6}\), on the domain \([-8,8]\times[0,1]\).
* **Case** (IV). \(a=4\), \(A=1\), \(\mu=1\), \(\sigma=1\), \(\Omega=73\times 10^{-6}\), on the domain \([-12\pi,12\pi]\times[0,2]\).
**(Convergence)** We first verify the convergence orders for the above four cases. The temporal convergence orders of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) are listed in Table 1 and Table 2, respectively, which show second-order convergence when the spatial grid is fixed at \(M=100\) and \(N\) is refined. Similarly, Tables 3-4 illustrate second-order spatial convergence when \(N=4000\) is fixed and \(M\) is refined.
**(Conservation)** The conservation properties stated in Theorem 3.1 are demonstrated in Table 5, which shows that our scheme indeed preserves the three conserved quantities defined in (3.5)-(3.7). It is worth noting that the total momentum in **Cases** (I) and (II) is zero, while it does not vanish in **Cases** (III) and (IV) since \(\Omega\) is nonzero in (3.6).
**(Portraits of solutions at different instants)** In addition, an applicable reference solution is necessary to evaluate the behavior of the numerical solution. To generate a reference solution, we take a refined grid \(M=3200\) in space. Figures 1-2 show the behaviors of the predicted velocities \(u(x,t)\) and heights \(\rho(x,t)\) at time \(t=2\) for **Cases** (II) and (IV). We observe that the solutions in **Case** (II) are symmetric, while the solutions in **Case** (IV) are asymmetric; in fact, the symmetry depends heavily on the selection of parameters. It can be clearly seen that the conservative scheme (3.1)-(3.4) performs well in depicting the dam-break solution of the R2CH system (1.2)-(1.3) even on a relatively coarse grid. Figures 3-4 show the evolution of the predicted dam-break solutions of the R2CH system over a long period of time. We find that even up to \(t=50\), the solution in Figure 3 still stretches leftward and rightward in a symmetric form, while the evolution of the solutions in Figure 4 is clearly less symmetric.
* **Case** (III). \(A=1\), \(\mu=1\), \(\sigma=1\), \(\Omega=73\times 10^{-6}\).
**Example 5.2** (**Three-peakon interaction for the CH equation**[9, 16, 18, 22]).: We now test a degenerate R2CH system (1.4)-(1.5) by taking \(\rho=A=\mu=0\) and \(\sigma=1\), which will reduce to the classical CH equation. Consider the three-peakon interaction with the
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Case** (I)} & \multicolumn{2}{c}{**Case** (II)} & \multicolumn{2}{c}{**Case** (III)} & \multicolumn{2}{c}{**Case** (IV)} \\ \cline{2-9} \(N\) & \(\|\mathrm{F}_{u}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) & \(\|\mathrm{F}_{u}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) & \(\|\mathrm{F}_{u}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) & \(\|\mathrm{F}_{u}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) \\ \hline
100 & \(9.0981\mathrm{e}-04\) & \(*\) & \(3.3262\mathrm{e}-04\) & \(*\) & \(1.7801\mathrm{e}-07\) & \(*\) & \(8.3334\mathrm{e}-04\) & \(*\) \\
200 & \(2.2919\mathrm{e}-04\) & \(1.9890\) & \(8.3267\mathrm{e}-05\) & \(1.9982\) & \(4.4503\mathrm{e}-08\) & \(2.0000\) & \(2.0881\mathrm{e}-04\) & \(1.9967\) \\
400 & \(5.7402\mathrm{e}-05\) & \(1.9973\) & \(2.0827\mathrm{e}-05\) & \(1.9999\) & \(1.1126\mathrm{e}-08\) & \(2.0000\) & \(5.2245\mathrm{e}-05\) & \(1.9988\) \\
800 & \(1.4357\mathrm{e}-05\) & \(1.9993\) & \(5.1994\mathrm{e}-06\) & \(1.9997\) & \(2.7815\mathrm{e}-09\) & \(2.0000\) & \(1.3056\mathrm{e}-05\) & \(2.0006\) \\
1600 & \(3.5897\mathrm{e}-06\) & \(1.9998\) & \(1.3006\mathrm{e}-06\) & \(1.9995\) & \(6.9536\mathrm{e}-10\) & \(2.0000\) & \(3.2636\mathrm{e}-06\) & \(2.0001\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Numerical errors, temporal convergence orders of velocity \(u(x,t)\) of the scheme (3.1)-(3.4) with \(M=100\) for **Case** (I), **Case** (II), **Case** (III) and **Case** (IV).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Case** (I)} & \multicolumn{2}{c}{**Case** (II)} & \multicolumn{2}{c}{**Case** (III)} & \multicolumn{2}{c}{**Case** (IV)} \\ \cline{2-10} \(N\) & \(\|\mathrm{F}_{\rho}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) & \(\|\mathrm{F}_{u}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) & \(\|\mathrm{F}_{\rho}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) & \(\|\mathrm{F}_{\rho}(\tau)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{\tau}\) \\ \hline
100 & \(1.4094\mathrm{e}-03\) & \(*\) & \(3.0830\mathrm{e}-04\) & \(*\) & \(5.2605\mathrm{e}-07\) & \(*\) & \(3.7110\mathrm{e}-04\) & \(*\) \\
200 & \(3.5345\mathrm{e}-04\) & \(1.9955\) & \(7.7121\mathrm{e}-05\) & \(1.9991\) & \(1.3151\mathrm{e}-07\) & \(2.0000\) & \(9.2969\mathrm{e}-05\) & \(1.9970\) \\
400 & \(8.8512\mathrm{e}-05\) & \(1.9976\) & \(1.9283\mathrm{e}-05\) & \(1.9997\) & \(3.2879\mathrm{e}-08\) & \(2.0000\) & \(2.3258\mathrm{e}-05\) & \(1.9990\) \\
800 & \(2.2133\mathrm{e}-05\) & \(1.9997\) & \(4.8210\mathrm{e}-06\) & \(1.9999\) & \(8.2193\mathrm{e}-09\) & \(2.0001\) & \(5.8135\mathrm{e}-06\) & \(2.0002\) \\
1600 & \(5.5335\mathrm{e}-06\) & \(1.9999\) & \(1.2060\mathrm{e}-06\) & \(1.9990\) & \(2.0549\mathrm{e}-09\) & \(2.0000\) & \(1.4532\mathrm{e}-06\) & \(2.0001\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Numerical errors, temporal convergence orders in height \(\rho(x,t)\) of the scheme (3.1)-(3.4) with \(M=100\) for **Case** (I), **Case** (II), **Case** (III) and **Case** (IV).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Case** (I)} & \multicolumn{2}{c}{**Case** (II)} & \multicolumn{2}{c}{**Case** (III)} & \multicolumn{2}{c}{**Case** (IV)} \\ \cline{2-10} \(M\) & \(\|\mathrm{F}_{u}(h)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{h}\) & \(\|\mathrm{F}_{u}(h)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{h}\) & \(\|\mathrm{F}_{u}(h)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{h}\) & \(\|\mathrm{F}_{u}(h)\|_{\infty}\) & \(\mathrm{Ord}_{\infty}^{h}\) \\ \hline
100 & \(6.2259\mathrm{e}-04\) & \(*\) & \(1.2876\mathrm{e}-01\) & \(*\) & \(1.9975\mathrm{e}-04\) & \(*\) & \(1.1547\mathrm{e}-01\) & \(*\) \\
200 & \(1.5547\mathrm{e}-04\) & \(2.0016\) & \(3.7311\mathrm{e}-02\) & \(1.7871\) & \(4.9396\mathrm{e}-05\) & \(2.0157\) & \(4.6363\mathrm{e}-02\) & \(1.3165\) \\
400 & \(3.8856\mathrm{e}-05\) & \(2.0004\) & \(1.2058\mathrm{e}-02\) & \(1.6296\) & \(1.2327\mathrm{e}-05\) & \(2.0025\) & \(1.4034\mathrm{e}-02\) & \(1.7240\) \\
800 & \(9.7152\mathrm{e}-06\) & \(1.9998\) & \(3.3317\mathrm{e}-03\) & \(1.8557\) & \(3.0826\mathrm{e}-06\) & \(1.9997\) & \(3.7075\mathrm{e}-03\) & \(1.9205\) \\
1600 & \(2.4288\mathrm{e}-06\) & \(2.0000\) & \(8.5597\mathrm{e}-04\) & \(1.9606\) & \(7.7051\mathrm{e}-07\) & \(2.0003\) & \(9.3181\mathrm{e}-04\) & \(1.9923\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Numerical errors, spatial convergence orders in velocity \(u(x,t)\) of the scheme (3.1)-(3.4) with \(N=4000\) for **Case** (I), **Case** (II), **Case** (III) and **Case** (IV).
following initial condition
\[u(x,0)=\phi_{1}(x)+\phi_{2}(x)+\phi_{3}(x),\]
where
\[\phi_{i}(x)=\left\{\begin{array}{ll}\frac{c_{i}}{\cosh(L/2)}\cosh(x-x_{i}), \quad|x-x_{i}|\leqslant L/2,&\\ \frac{c_{i}}{\cosh(L/2)}\cosh(L-(x-x_{i})),\quad|x-x_{i}|>L/2,&\end{array}\right. i=1,2,3.\]
The parameters are given by \(c_{1}=2\), \(c_{2}=1\), \(c_{3}=0.8\), \(x_{1}=-5\), \(x_{2}=-3\), \(x_{3}=-1\) and the computational domain is \([0,L]\) with \(L=30\).
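A short sketch of this periodic initial profile on the computational grid is given below; it is a direct transcription of the formula above, with the grid size chosen as in the experiment that follows:

```python
# Periodic three-peakon initial condition for the degenerate (CH) case.
import numpy as np

def peakon(x, c, xi, L):
    d = x - xi
    return np.where(np.abs(d) <= L / 2,
                    c / np.cosh(L / 2) * np.cosh(d),
                    c / np.cosh(L / 2) * np.cosh(L - d))

L, M = 30.0, 2048
x = np.arange(M) * (L / M)
u0 = peakon(x, 2.0, -5.0, L) + peakon(x, 1.0, -3.0, L) + peakon(x, 0.8, -1.0, L)
```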
In the calculation, we take the spatial stepsize \(h=L/2048\) and the temporal stepsize \(\tau=1/10000\), and simulate the interaction at \(t=0,1,2,3,4,6,8,10\), respectively. Figure 5 shows the moving-peakon interaction at different instants of time. It is clear that the proposed scheme performs well in resolving the complex interaction among multiple peakons of the CH equation. The results obtained here are comparable to those obtained by a fourth-order HIEQ-GM method in [9] and a MSWCM-AD30 method in [22].
**Case (III).** In contrast, we observe that the selection of parameters has a remarkable impact on the profiles of the solutions to the system (1.2)-(1.3). Figure 8 shows the computed discrete conserved quantities (3.5)-(3.7) at time \(t=5\) for **Case (I)** and **Case (III)**. One can easily observe that the scheme (3.1)-(3.4) guarantees the conservation of mass, momentum and energy for different parameters. Moreover, one can also see that the rotational parameter \(\Omega\) indeed changes the conserved quantities \(E^{n}\) and \(I_{2}^{n}\), which is consistent with (3.6)-(3.7).
**Example 5.4** (**Peakon anti-peakon interaction-I [15]**).** We consider a piecewise smooth
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{**Case (I)**, \(h\!=\!0.06\), \(\tau\!=\!0.01\)} \\ \cline{2-4} \(t_{n}\) & \(E^{n}\) & \(I_{1}^{n}\) & \(I_{2}^{n}\) \\ \hline
0 & 107.1098478543294 & 206.6665840981688 & 0 \\
2 & 107.1098478543293 & 206.6665840981688 & \(-0.0
initial conditions
\[u(x,0)=\left\{\begin{array}{l}\frac{1}{2\sinh(\frac{1}{4})}\sinh(x ),\quad 0\leqslant x\leqslant\frac{1}{4},\\ \frac{1}{\sinh(-\frac{1}{2})}\sinh(x-\frac{1}{2}),\quad\frac{1}{4}<x \leqslant\frac{3}{4},\qquad\rho(x,0)=1.5.\\ \frac{1}{2\sinh(\frac{1}{4})}\sinh(x-1),\quad\frac{3}{4}<x<1, \end{array}\right.\]
To test the influence of \(\Omega\) on the solution of the R2CH system (1.2)-(1.3), the parameters in **Cases** (I) and (II) are selected for the calculation in this example. Figure 9 and Figure 10 respectively display the profiles of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) at different instants of time, which enables us to observe how the solutions evolve over time. In addition, it can be observed from Figure 9(a) and Figure 10(a) that \(u(x,t)\) at the initial time is jump discontinuous at \(x=1/4\) and \(x=3/4\). Compared with **Case** (I), the symmetry is broken in **Case** (II). Figure 11 shows the discrete conserved quantities defined in (3.5)-(3.7) for **Cases** (I) and (II). It is observed that the total momentum is zero when the solution is symmetric, while it is nonzero when the solution is asymmetric. Moreover, we portray the evolution of the peakon-antipeakon interaction for the velocity \(u(x,t)\) and the height \(\rho(x,t)\) in Figures 12-13. It can be seen from Figure 12 that the evolution of the solutions presents a certain periodicity in the long time simulation,
Figure 1: The profiles of the predicted dam-break solutions of velocities \(u(x,t)\) and heights \(\rho(x,t)\) using different spatial grid points at \(t=2\) in **Case** (II).
Figure 3: The predicted dam-break solutions for the R2CH system in **Case** (II) show evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) with \(t=50\).
Figure 2: The profiles of the predicted dam-break solutions of velocities \(u(x,t)\) and heights \(\rho(x,t)\) using different spatial grid points at \(t=2\) in **Case** (IV).
while the evolution of the solutions in Figure 13 is drawn only over a short time, since the solution blows up after a slightly longer time.
**Example 5.5** (**Peakon anti-peakon interaction-II**[4], [16]).: Here we study the case of a peakon anti-peakon interaction over a long time. For this purpose, we consider the following initial data
\[u(x,0)=p_{1}e^{-|x-x_{1}|}+p_{2}e^{-|x-x_{2}|},\quad\rho(x,0)=0.5,\]
where \(x_{1}=-5\) and \(x_{2}=5\) respectively represent the position of the peak and the trough, \(p_{1}=1\), \(p_{2}=-1\) and the computation domain is set to be \([-20,20]\).
We depict the evolution of the solutions of the R2CH system (1.2)-(1.3), where the parameters are selected as in **Cases** (I) and (III), taking \(h=0.05\) and \(\tau=0.0005\) for the calculation. Figures 14-17 portray the evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) in these two cases. It is easily observed that, over sufficiently long periods of time, the peakon solutions undergo an elastic collision, so that we obtain a dissipative solution of the R2CH system (1.2)-(1.3). By comparing Figures 14-15 with Figures 16-17, we find that **Case** (I) remains symmetric, while the symmetry in **Case** (III) is obviously broken. Similar to Example 5.4, we assert that the total momentum remains zero in **Case** (I) because the two peakon solutions have the same magnitude but opposite signs, whereas the momentum in **Case** (III) is nonzero because the solution in **Case** (III) loses its symmetry. We verify this through the calculation, which is not listed here for brevity. It is worth mentioning that, since \(\rho(x,0)>0\), it can be proved that \(\rho(x,t)\) remains strictly positive and that the solution retains the regularity of the initial data, see e.g., [8]. In particular, \(\rho(x,t)\) remains bounded. We observe that the numerical solution for the height \(\rho(x,t)\) also remains positive. Ultimately, the evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) for these
Figure 4: The predicted dam-break solutions for the R2CH system in **Case** (IV) show evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) with \(t=50\).
Figure 5: Three-peakon interaction of the CH equation at \(t\!=\!0,1,2,3,4,6,8,10\), respectively.
Figure 8: Conserved quantities defined in (3.5)-(3.7) for the R2CH system in **Case** (I) and **Case** (III) with stepsizes \(h=0.025\) and \(\tau=0.0005\).
Figure 6: Velocities \(u(x,t)\) and heights \(\rho(x,t)\) for the R2CH system in **Case** (I) computed by scheme (3.1)-(3.4) at three different times with stepsizes \(h=0.025\) and \(\tau=0.0005\).
Figure 7: Velocities \(u(x,t)\) and heights \(\rho(x,t)\) for the R2CH system in **Case** (III) computed by scheme (3.1)-(3.4) at three different times with stepsizes \(h=0.025\) and \(\tau=0.0005\).
Figure 11: Conserved quantities defined in (3.5)-(3.7) for the R2CH system in **Case** (I) and **Case** (II) with stepsizes \(h=0.02\) and \(\tau=0.0005\).
Figure 10: Velocities \(u(x,t)\) and heights \(\rho(x,t)\) for the R2CH system in **Case** (II) computed by scheme (3.1)-(3.4) at five different times with stepsizes \(h=0.002\) and \(\tau=0.0005\).
Figure 9: Velocities \(u(x,t)\) and heights \(\rho(x,t)\) for the R2CH system in **Case** (I) computed by scheme (3.1)-(3.4) at five different times with stepsizes \(h=0.002\) and \(\tau=0.001\).
Figure 12: The predicted peakon solutions for the R2CH system in **Case** (I) show the evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) with stepsizes \(h=0.002\) and \(\tau=0.001\).
Figure 13: The predicted peakon solutions for the R2CH system in **Case** (II) show the evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) with stepsizes \(h=0.002\) and \(\tau=0.001\).
two cases over a longer time (\(t\!=\!35\)) is depicted in Figures 18-19. We clearly see the symmetry in **Case** (I) and the asymmetry in **Case** (III). These phenomena for the asymmetric case are displayed for the first time in the current paper.
### Part III: the conserved quantity \(H\)
Recalling the conserved quantity \(H\) mentioned in the introduction, it can be defined discretely as
\[H^{n}\!=\!\sum_{i=1}^{M}\left[(u_{i}^{n})^{3}\!+\!u_{i}^{n}\Big{(}\frac{u_{i+1 }^{n}\!-\!u_{i-1}^{n}}{2h}\Big{)}^{2}\!-\!A(u_{i}^{n})^{2}\!-\!\mu\Big{(}\frac{u _{i+1}^{n}\!-\!u_{i-1}^{n}}{2h}\Big{)}^{2}\!+\!u_{i}^{n}(\rho_{i}^{n})^{2} \right],\quad n\!=\!1,\!2,\cdots,N. \tag{5.1}\]
We expect that the proposed scheme (3.1)-(3.4) can also preserve this conserved quantity. Toward this end we conduct a conservation test for \(H\) through two different types of numerical examples, including a smooth initial data problem and two nonsmooth initial data problems. For the calculation of Example 5.1, the parameters are selected from **Case** (I) and **Case** (II). For the nonsmooth initial data problems, Example 5.4 and Example 5.5 are selected. All the parameters in **Case** (I) are used. Due to the symmetry of the initial values, it immediately follows from (5.1) that \(H^{0}\!=\!0\). Figure 20 depicts the errors
Figure 14: The velocities \(u(x,t)\) for the R2CH system in **Case** (I) at different times with stepsizes \(h\!=\!0.05\) and \(\tau\!=\!0.0005\).
Figure 16: The velocities \(u(x,t)\) for the R2CH system in **Case** (III) at different times with stepsizes \(h=0.05\) and \(\tau=0.0005\).
Figure 15: The heights \(\rho(x,t)\) for the R2CH system in **Case** (I) at different times with stepsizes \(h=0.05\) and \(\tau=0.0005\).
Figure 17: The heights \(\rho(x,t)\) for the R2CH system in **Case** (III) at different times with stepsizes \(h=0.05\) and \(\tau=0.0005\).
Figure 18: The predicted peakon solutions for the R2CH system in **Case** (I) show the evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) with \(t=35\).
between \(H^{n}\) and \(H^{0}\) at different times, and we find that \(H^{n}\) numerically approximates the conserved quantity \(H\).
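As an illustration of how the discrete quantity (5.1) can be evaluated in practice, the following NumPy sketch computes \(H^{n}\) from the grid values of \(u\) and \(\rho\); the periodic wrap-around used for the central difference is an assumption and should be replaced by the boundary treatment of the actual scheme.

```python
import numpy as np

def discrete_H(u, rho, h, A, mu):
    """Discrete conserved quantity H^n of (5.1) on a uniform grid."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)   # periodic central difference (assumed)
    return np.sum(u**3 + u * ux**2 - A * u**2 - mu * ux**2 + u * rho**2)

# Tracking |H^n - H^0| over the time steps yields error curves of the kind shown in Figure 20.
```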
## 6 Concluding remarks
In this paper, we have carried out an extensive numerical study of the R2CH system for both smooth and nonsmooth initial data, based on a well-designed conservative finite difference discretization. The evolution of several asymmetric and non-smooth solitary wave solutions is depicted for the first time. The numerical conservation laws enable the calculation schemes to accurately capture the evolution of the smooth/nonsmooth solutions of the benchmark problems. In particular, the numerical threshold technique plays a key role in grasping the drastic change of peakon solutions. Last but surely not the
Figure 19: The predicted peakon solutions for the R2CH system in **Case** (III) show the evolution of the velocity \(u(x,t)\) and the height \(\rho(x,t)\) with \(t\!=\!35\).
Figure 20: (a) shows the computed errors of the conserved quantity \(H\) with \(h\!=\!0.5\) and \(\tau\!=\!0.005\) in **Case** (I) and **Case** (II) in subsection 5.1 for Example 5.1; (b) and (c) show the computed errors of the conserved quantity \(H\) with \(h\!=\!0.2\) and \(\tau\!=\!0.0025\) in **Case** (I) for Example 5.4 and Example 5.5 in subsection 5.2, respectively.
least, there is still plenty of room for further study. In particular, the rigorous error estimate of the difference scheme is a challenging topic, which is not covered in this paper.
## Appendix
Noticing that when \(\sigma=1\) and \(\Omega=0\), the system (1.4)-(1.5) can be written as a system of hyperbolic type
\[\left\{\begin{aligned} u_{t}+uu_{x}+\partial_{x}G*\left(u^{2}+\frac{ 1}{2}u_{x}^{2}-Au+\mu u_{xx}+\frac{1}{2}\rho^{2}\right)=0,\\ \rho_{t}+u\rho_{x}=-u_{x}\rho,\quad x\in R,t>0,\end{aligned}\right.\] (A.1)
then we show the system above has the conservation law
\[H=\int_{R}(u^{3}+uu_{x}^{2}-Au^{2}-\mu u_{x}^{2}+u\rho^{2})\mathrm{d}x.\] (A.2)
Proof.: For simplicity, we set
\[g(x)=G*\left(u^{2}+\frac{1}{2}u_{x}^{2}-Au+\mu u_{xx}\right),\quad h(x)=G* \left(\frac{1}{2}\rho^{2}\right).\] (A.3)
Then the first equation of (A.1) takes the equivalent form
\[u_{t}+uu_{x}+g_{x}+h_{x}=0.\] (A.4)
Using (A.4) and noticing
\[\int_{R}u\rho\rho_{t}\mathrm{d}x=-\int_{R}(u\rho)(u\rho)_{x}\mathrm{d}x=0,\]
we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{R}(u^{3}+uu_{x}^{2}-Au^{2}-\mu u _{x}^{2}+u\rho^{2})\mathrm{d}x\] \[= \int_{R}(3u^{2}u_{t}+u_{t}u_{x}^{2}+2uu_{x}u_{xt}-2Auu_{t}-2\mu u _{x}u_{xt}+u_{t}\rho^{2})\mathrm{d}x\] \[= \int_{R}(3u^{2}+u_{x}^{2}-2Au)u_{t}\mathrm{d}x+\int_{R}(2uu_{x}- 2\mu u_{x})u_{xt}\mathrm{d}x+\int_{R}u_{t}\rho^{2}\mathrm{d}x\] \[= -\int_{R}(3u^{2}+u_{x}^{2}-2Au)(uu_{x}+g_{x}+h_{x})\mathrm{d}x\] \[-\int_{R}(2uu_{x}-2\mu u_{x})(u_{x}^{2}+uu_{xx}+g_{xx}+h_{xx}) \mathrm{d}x\] \[-\int_{R}(uu_{x}+g_{x}+h_{x})\rho^{2}\mathrm{d}x\] \[\triangleq K_{1}+K_{2}+K_{3}.\] (A.5)
Below, we calculate each term at the right hand of (A.5). First, it easily obtains that
\[K_{1} = -\int_{R}(3u^{2}+u_{x}^{2}-2Au)(uu_{x}+g_{x}+h_{x})\mathrm{d}x\] \[= -\int_{R}uu_{x}^{3}\mathrm{d}x-\int_{R}(3u^{2}+u_{x}^{2}-2Au)g_{x} \mathrm{d}x-\int_{R}(3u^{2}+u_{x}^{2}-2Au)h_{x}\mathrm{d}x.\]
Using (A.3), we have
\[K_{2} = -\int_{R}(2uu_{x}-2\mu u_{x})(u_{x}^{2}+uu_{xx}+g_{xx}+h_{xx}) \mathrm{d}x\] \[= -\int_{R}(2uu_{x}^{3}+2u^{2}u_{x}u_{xx})\mathrm{d}x+\int_{R}(2\mu u _{x}^{3}+2\mu uu_{x}u_{xx})\mathrm{d}x\] \[-\int_{R}(2uu_{x}-2\mu u_{x})\Big{(}g-u^{2}-\frac{1}{2}u_{x}^{2}+ Au-\mu u_{xx}\Big{)}\mathrm{d}x\] \[-\int_{R}(2uu_{x}-2\mu u_{x})\Big{(}h-\frac{1}{2}\rho^{2}\Big{)} \mathrm{d}x\] \[= -\int_{R}2uu_{x}g\mathrm{d}x+\int_{R}uu_{x}^{3}\mathrm{d}x+\int_ {R}2\mu u_{x}g\mathrm{d}x-\int_{R}\mu u_{x}^{3}\mathrm{d}x\] \[-\int_{R}2uu_{x}h\mathrm{d}x+\int_{R}uu_{x}\rho^{2}\mathrm{d}x+ \int_{R}2\mu u_{x}h\mathrm{d}x-\int_{R}\mu u_{x}\rho^{2}\mathrm{d}x.\]
Similarly, we have
\[K_{3} = -\int_{R}\rho^{2}(uu_{x}+g_{x}+h_{x})\mathrm{d}x=-\int_{R}uu_{x} \rho^{2}\mathrm{d}x-\int_{R}2(h-h_{xx})g_{x}\mathrm{d}x.\]
Adding \(K_{1}\), \(K_{2}\) and \(K_{3}\) together, we obtain
\[\frac{\mathrm{d}H}{\mathrm{d}t} = -\int_{R}(2u^{2}+u_{x}^{2}-2Au+2uu_{xx})g_{x}\mathrm{d}x+\int_{R} 2\mu u_{xx}g_{x}\mathrm{d}x+\int_{R}2\mu u_{x}g\mathrm{d}x\] \[-\int_{R}(2u^{2}+u_{x}^{2}-2Au+2uu_{xx})h_{x}\mathrm{d}x+\int_{R} 2\mu u_{xx}h_{x}\mathrm{d}x+\int_{R}2\mu u_{x}h\mathrm{d}x\] \[-\int_{R}\mu u_{x}^{3}\mathrm{d}x-\int_{R}2\mu u_{x}(h-h_{xx}) \mathrm{d}x-\int_{R}2(h-h_{xx})g_{x}\mathrm{d}x\] \[= -\int_{R}2(g-g_{xx})g_{x}\mathrm{d}x-\int_{R}2\mu u_{x}g_{xx} \mathrm{d}x+\int_{R}2\mu u_{x}g\mathrm{d}x\] \[-\int_{R}2(g-g_{xx})h_{x}\mathrm{d}x-\int_{R}2\mu u_{x}h_{xx} \mathrm{d}x+\int_{R}2\mu u_{x}h\mathrm{d}x\] \[-\int_{R}\mu u_{x}^{3}\mathrm{d}x-\int_{R}2\mu u_{x}(h-h_{xx}) \mathrm{d}x-\int_{R}2(h-h_{xx})g_{x}\mathrm{d}x\] \[= -\int_{R}2\mu u_{x}g_{xx}\mathrm{d}x+\int_{R}2\mu u_{x}g\mathrm{d }x-\int_{R}\mu u_{x}^{3}\mathrm{d}x\] \[= -\int_{R}2\mu u_{x}\Big{(}g-u^{2}-\frac{1}{2}u_{x}^{2}+Au-\mu u_{ xx}\Big{)}\mathrm{d}x+\int_{R}2\mu u_{x}g\mathrm{d}x-\int_{R}\mu u_{x}^{3} \mathrm{d}x\] \[= 0,\]
in which
\[-\int_{R}u^{2}u_{x}u_{xx}\mathrm{d}x=\int_{R}uu_{x}^{3}\mathrm{d}x\quad\text{and} \quad-\int_{R}2\mu uu_{x}u_{xx}\mathrm{d}x=\int_{R}\mu u_{x}^{3}\mathrm{d}x\]
have been used several times during the calculation.
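These two integration-by-parts identities are easy to sanity-check numerically on a periodic test function; the sketch below uses an arbitrary smooth \(u\) (not taken from the paper) and the trapezoidal rule.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 4001)
u = np.sin(x) + 0.3 * np.cos(2.0 * x)            # arbitrary smooth periodic test function
ux = np.cos(x) - 0.6 * np.sin(2.0 * x)
uxx = -np.sin(x) - 1.2 * np.cos(2.0 * x)
mu = 0.7                                         # arbitrary constant

# Both differences are negligibly small, confirming the identities for periodic u.
print(abs(-np.trapz(u**2 * ux * uxx, x) - np.trapz(u * ux**3, x)))
print(abs(-np.trapz(2.0 * mu * u * ux * uxx, x) - np.trapz(mu * ux**3, x)))
```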
|
2306.13662 | Best Practices for Machine Learning Systems: An Industrial Framework for
Analysis and Optimization | In the last few years, the Machine Learning (ML) and Artificial Intelligence
community has developed an increasing interest in Software Engineering (SE) for
ML Systems leading to a proliferation of best practices, rules, and guidelines
aiming at improving the quality of the software of ML Systems. However,
understanding their impact on the overall quality has received less attention.
Practices are usually presented in a prescriptive manner, without an explicit
connection to their overall contribution to software quality. Based on the
observation that different practices influence different aspects of
software-quality and that one single quality aspect might be addressed by
several practices we propose a framework to analyse sets of best practices with
focus on quality impact and prioritization of their implementation. We first
introduce a hierarchical Software Quality Model (SQM) specifically tailored for
ML Systems. Relying on expert knowledge, the connection between individual
practices and software quality aspects is explicitly elicited for a large set
of well-established practices. Applying set-function optimization techniques we
can answer questions such as what is the set of practices that maximizes SQM
coverage, what are the most important ones, which practices should be
implemented in order to improve specific quality aspects, among others. We
illustrate the usage of our framework by analyzing well-known sets of
practices. | Georgios Christos Chouliaras, Kornel Kiełczewski, Amit Beka, David Konopnicki, Lucas Bernardi | 2023-06-09T12:14:43Z | http://arxiv.org/abs/2306.13662v1 | # Best Practices for Machine Learning Systems: An Industrial Framework for Analysis and Optimization
###### Abstract.
In the last few years, the Machine Learning (ML) and Artificial Intelligence community has developed an increasing interest in Software Engineering (SE) for ML Systems leading to a proliferation of best practices, rules, and guidelines aiming at improving the quality of the software of ML Systems. However, understanding their impact on the overall quality has received less attention. Practices are usually presented in a prescriptive manner, without an explicit connection to their overall contribution to software quality. Based on the observation that different practices influence different aspects of software-quality and that one single quality aspect might be addressed by several practices we propose a framework to analyse sets of best practices with focus on quality impact and prioritization of their implementation. We first introduce a hierarchical Software Quality Model (SQM) specifically tailored for ML Systems. Relying on expert knowledge, the connection between individual practices and software quality aspects is explicitly elicited for a large set of well-established practices. Applying set-function optimization techniques we can answer questions such as what is the set of practices that maximizes SQM coverage, what are the most important ones, which practices should be implemented in order to improve specific quality aspects, among others. We illustrate the usage of our framework by analyzing well-known sets of practices.
machine learning system, system quality, best practices, software quality, quality model, software engineering, reliable, trustworthy
## 1. Introduction
In Software Engineering, Software Quality Models (SQM) are central when it comes to achieving high quality software, as highlighted for example by (Bouze et al., 2015): _"A quality model provides the framework towards a definition of quality"_. A Software Quality Model is the set of _characteristics_ and the relationships between them that provides the basis for specifying quality requirements and evaluation (Relying and Krape, 2016). In practice, a SQM is a structured set of attributes describing the aspects that are believed contribute to the overall quality. Machine Learning (ML) systems have unique properties like data dependencies and hidden feedback loops which make quality attributes such as _diversity_, _fairness_, _human agency_ and _oversight_ more relevant than in traditional software systems (Krape, 2016). This makes traditional quality models not directly applicable for ML applications. Moreover in recent years there has been a rise in the publication of best practices tailored for ML systems (Krape, 2016), (Bouze et al., 2015), (Krape, 2017), (Krape, 2018), however understanding their impact on overall quality and the systematic prioritization for their adoption has not received enough interest. Improving the quality of ML systems, especially in an industrial setting where multiple ML systems are in production, does not only require a set of practices, but also a deep understanding of their contribution to specific aspects of the quality of the system, as well as criteria to prioritize their implementation due to their large number and high implementation costs. Without a systematic prioritization based on their contribution to each individual aspect of software quality, it is challenging for practitioners to choose the optimal practices to adopt based on their needs which might lead to limited adoption, undesired biases, inefficient development processes and inconsistent quality. The challenge lies on the fact that some best-practices have a narrow impact, strongly affecting a few specific quality aspects while others have wider impact affecting many aspects, which might lead to redundancy or gaps in the _coverage_ of the all the relevant quality aspects. Another challenge is that the importance of each quality aspect depends on the specific ML application, hence there is no single set of best-practices that satisfies the quality requirements of all ML applications. To address these challenges we introduce a reusable framework to analyse the contribution of a set of best practices to the quality of the system according to the specific needs of the particular application. The framework consists of a general-purpose Software Quality Model for ML Systems, expert-based representations of a large set of well established best-practices, and a criterion to assess a _set_ of best practices w.r.t. our SQM: the _SQM Coverage Criterion_, which quantifies how many of the attributes receive enough attention from a given _set_ of best practices. Applying set optimization techniques we can answer questions such as what are the practices that maximize the coverage, which practices can be implemented to address specific quality aspects and which aspects lack coverage, among others.
Concretely, our contributions are the following: **1)** A general-purpose software quality model tailored for ML systems. **2)** A framework to analyse and prioritize software engineering best practices based on their influence on quality, with the flexibility to be adaptable according to the needs of each organization. **3)** We apply the proposed framework to analyze existing sets of best practices for ML systems and identify their strengths and potential gaps.
The rest of the paper is organized as follows. Section 2 discusses related work with emphasis on Software Quality Models and software best-practices for ML systems, section 3 introduces our Software Quality Model and describes its construction process. Section 4 introduces our best-practices analysis framework with details about its construction process and relevant algorithms. In section 5 various best-practices sets are analysed using our framework, we present our findings and insights. Finally, section 6 summarizes our work and discusses limitations and future work. Appendices include all the details, such as proofs, extensive results, and computer code to facilitate reusability and repeatability of our framework.
## 2. Related Work
### Software Quality Models for ML Systems
Defining and measuring software quality is a fundamental problem and one of the first solutions came through the means of a software quality model in 1978 (Friedman, 1978). Such models include general software _characteristics_ which are further refined into _sub-characteristics_, which are decomposed into measurable software attributes whose values are computed by a metric (Friedman, 1978).
Software quality models developed until 2001 (Friedman, 1978; Krawczyk, 1990; Krawczyk, 1990) are characterized as _basic_ since they make global assessments of a software product. Models developed afterwards, such as (Friedman, 1978; Krawczyk, 1990) are built on top of basic models and are specific to certain domains or specialized applications, hence are called _tailored_ quality models (Krawczyk, 1990). Such a quality model tailored for data products has been presented in (Krawczyk, 1990).
Software for ML Systems exhibits differences when compared to traditional software such as the fact that minor changes in the input may lead to large discrepancies in the output (Krawczyk, 1990). Moreover due to the dependencies on data, ML systems accumulate technical debt which is harder to recognize than code dependencies, which are identified via static analysis by compilers and linkers, tooling that is not widely available for data dependencies. Other peculiarities of ML systems include direct and hidden feedback loops where two systems influence each other indirectly (Krawczyk, 1990). Additionally, software quality aspects such as _fairness_ and _explainability_ as well as legal and regulatory aspects which are relevant to ML software are not covered by existing software quality models (Krawczyk, 1990). Furthermore, existing quality attributes such as _maintainability_ and _testability_ need to be rethought in the context of ML software (Krawczyk, 1990). All these peculiarities make existing software quality models only partially applicable to ML software. In (Krawczyk, 1990) the authors present the systematic construction of quality models for ML systems based on a specific industrial use case. The authors focus on the process of constructing a quality meta model, identifying ML quality requirements based on the use case and instantiating a quality model that is tailored to the business application. In our work however, we introduce a general software quality model for ML systems that can be directly applied on a large set of industrial applications, without the need to go through a construction process. The key difference between our work and (Krawczyk, 1990), is that their main contribution a development process for quality models, while one of our main contributions is the quality model itself, which can be used with no or minimum modifications for a broad range of ML systems. This allows the usage of the same quality model for multiple use cases within an organization which reduces the effort of its adoption and allows to create a common communication language regarding the quality of the ML systems in the organization. In (Krawczyk, 1990) the authors conclude that the majority of the studies on software quality for ML either adopt or extend the ISO 25010 Quality Model for software product quality (Krawczyk, 1990). They find though that there is no consensus on whether ISO 25010 is appropriate to use for AI-based software or which characteristics of AI-based software may be mapped to attributes of traditional quality models. Unlike other studies, we did not adopt or extend ISO 25010 but rather followed a systematic approach to build our quality model from scratch by adding quality sub-characteristics based on their relevance to ML systems.
### Software best-practices for ML Systems
Best practices for increasing the quality of ML systems are presented in (Friedman, 1978), (Friedman, 1978) and (Krawczyk, 1990) however a systematic way to link the influence of the recommended practices to the software quality attributes of ML systems is not included. This makes it particularly challenging for ML practitioners to prioritize the adoption (or even understand the impact) of the large set of best practices based on the specific needs of their organizations. In (Krawczyk, 1990) the authors present published ML practices targeting several testing properties (_relevance_, _robustness_, _correctness_, _efficiency_, _security_, _privacy_, _fairness_ and _interpretability_) however their influence on quality aspects is not being studied. The authors in (Krawczyk, 1990) conducted a survey of ML practitioners from multiple companies and present the effect of various published ML practices on four categories (_Agility_, _Software Quality_, _Team Effectiveness_ and _Traceability_). They present the importance of each practice for each of the categories, as perceived by the surveyed practitioners. However, these categories are generic, and in fact only two of them are directly related to software quality (_Software Quality_ and _Traceability_), in contrast, we study the influence of each best practice on a full-blown general purpose Software Quality Model specifically built for ML system with fine-grained aspects such as _testability_ and _deployability_. Furthermore, we study the influence on each quality aspect of the quality model when a _set_ of practices is applied, which is key to understand and prioritize best-practices since the overall impact is different depending on which other practices are also implemented. In (Krawczyk, 1990) the authors extracted challenges and solutions for large scale ML systems synthesized into four quality attributes: _adaptability_, _scalability_, _safety_ and _privacy_. They categorized software practices based on the step on the ML lifecycle and the addressed quality attribute. A difference of this work with ours, is that in (Krawczyk, 1990) each practice targets a single quality attribute while its effect on multiple attributes is not explicitly studied. Even though there is work that studies the effect of practices on software quality (Krawczyk, 1990), (Krawczyk, 1990) to the best of our knowledge, no study has been published about the interrelationship of software best-practices for ML Systems with multiple fine-grained quality attributes, nor about their prioritization in order to balance Software Quality and implementation costs.
## 3. A Software Quality Model for ML Systems
### The model
A quality model determines which quality aspects are considered when evaluating the properties of a software product (Krishnan et al., 2017). Our software quality model for ML systems comprises 7 quality _characteristics_ further divided into _sub-characteristics_. Quality _characteristics_ are general properties of quality that comprise the fundamental factors, which cannot be measured directly. Each _characteristic_ consists of _sub-characteristics_, which are concrete quality aspects that can be directly influenced and measured. A graphical illustration of our software quality model for ML systems is presented in tree-structure in Figure 1. We define quality _characteristics_ as follows:
* The degree to which a machine learning system provides functions that meet stated and implied needs when used under specified conditions.
* The level of performance relative to the amount of resources used under stated conditions.
* The tolerance to degradation by the machine learning system under consideration when exposed to dynamic or adverse events.
* The degree of effectiveness and efficiency with which a machine learning system can be modified to improve it, correct it or adapt it to changes in environment and in requirements.
* The ease of performing the actions required for a machine learning system to run successfully in production.
* The degree to which users and contributors understand the relevant aspects of a machine learning system.
* The level of trustworthiness of a machine learning system.
The definitions of all _sub-characteristics_ can be found in Appendix B. Notice that there are no data quality attributes in the quality model, as these are defined in well established software quality models tailored for data (Krishnan et al., 2017). This existing data quality model can be used in addition to our software quality model, to analyze the quality of data which are used as input to an ML system.
### The development process
We started by creating a list of the quality sub-characteristics to be included in our model. To achieve this, we went through the list of all the known system quality attributes in (Shen et al., 2017) and all software quality models in (Shen et al., 2017), from which we shortlisted and adapted the ones we judged applicable to machine learning systems. The shortlisting was done based on the relevance of each quality attribute to the stages of the ML development lifecycle defined in (Bordes et al., 2016), taking into account the various types of ML use cases that e-commerce platforms like Booking.com have. Next, we added attributes related to machine learning that were not part of the initial list, such as _fairness_ and _explainability_ (as defined in Appendix B). With the final list of attributes, we created clusters of factors (_characteristics_) comprising related sub-factors (_sub-characteristics_), following the standard nomenclature for quality models (Shen et al., 2017).
We validated the completeness of our quality model using published sets of machine learning practices (Krishnan et al., 2017), (Bordes et al., 2016), (Bordes et al., 2016). Concretely, we checked if we can relate these practices to at least one of the quality _sub-characteristics_ in our quality model. We iterated on this procedure a few times before we concluded on an first version, which was further refined using feedback from 10 internal senior ML engineers and scientists working in the industry and building ML systems for a minimum of 5 years. Given the speed with which the field is evolving, it is important to remark that the software quality model for machine learning is a live artifact constantly reviewed and updated in order to keep its relevance to the current machine learning needs. Another development process for a quality model for machine learning has been presented in (Krishnan et al., 2017), in which the authors explain the implementation process of quality models for particular machine learning related use cases. Our development process aimed at creating a general-purpose quality model which is relevant for a wide range of machine learning applications. Different applications and organizations will put different emphasis onto different _sub-characteristics_ (for example external facing systems should be invulnerable even at the cost of accuracy) something that can be achieved by using importance weights per quality _sub-characteristic_. Having a common quality model for all the machine learning systems allows its usage as a common language for quality related initiatives and for identification of gaps on quality attributes both at the system and organizational level.
## 4. A Framework to Prioritize Software Practices
Choosing practices in order to improve ML quality is a challenging task mainly due to their large number, varying implementation costs, and overlapping effects. To tackle this, we propose a framework to analyze and prioritize software practices. Given a Software Quality Model represented by a set of sub-characteristics \(C\), and a set of software best practices \(P\) we want to choose a subset of practices maximizing the coverage of a given set of sub-characteristics, under a constraint of implementing at most \(B\) practices 1. Having an influence \(u(p,c)\) for a practice \(p\) on a sub-characteristic \(c\) we can define _coverage_ as a minimum threshold \(k\) of influence. Formally we have:
Footnote 1: To simplify, we focus on the number of practices as cost function, but it is straightforward to extend to a general knapsack constraint (Shen et al., 2017) such as number of hours needed to adopt a practice.
1. A Software Quality Model, represented by its set of _sub-characteristics_\(C\)
2. A set of software practices \(P\)
3. For each practice \(p\in P\) and each quality _sub-characteristic_\(c\in C\), the influence defined by a function \(u:P\times C\rightarrow\mathbb{R}^{+}\)
4. A _sub-characteristic_ importance vector \(w\in[0,1]^{|C|}\) representing the relevance of each _sub-characteristic_\(c\in C\)
5. An effort budget in the form of number of practices to be adopted \(B\in\mathbb{N}\)
6. An integer \(k\) representing the minimum influence necessary to consider any _sub-characteristic covered_
We define the _coverage function_ as a set function that given a set of _sub-characteristics_\(C\) with importance weights \(w\) and a
coverage threshold \(k\) maps a set of practices \(X\in 2^{P}\) to a real number, formally:
\[f(X;C,w,k)=\sum_{c\in C}w_{c}\min(k,\sum_{p\in X}u(p,c)) \tag{1}\]
The objective is to choose a subset of practices that maximizes the coverage of the quality model weighted by its importance under the budget constraint:
\[\operatorname*{arg\,max}_{X\in 2^{P}}f(X;C,W,k)\text{ subject to }|X|\leq B. \tag{2}\]
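To make the optimization problem concrete, the coverage function of Eq. (1) can be written down directly; the sketch below is illustrative only, and the practice names and scores in the usage example are made up rather than taken from the paper.

```python
import numpy as np

def coverage(X, u, w, k):
    # f(X; C, w, k) from Eq. (1): weighted sum over sub-characteristics of the
    # influence accumulated by the selected practices, capped at the threshold k.
    # u maps each practice to an array of influences over C; w are the importance weights.
    total = np.zeros(len(w))
    for p in X:
        total = total + np.asarray(u[p], dtype=float)
    return float(np.dot(w, np.minimum(k, total)))

# Hypothetical illustration: two sub-characteristics, equal weights, threshold k = 24.
u = {"documentation": [24, 6], "unit tests": [2, 24]}
w = np.ones(2)
print(coverage(["documentation"], u, w, 24))                  # 30.0 (only the first is covered)
print(coverage(["documentation", "unit tests"], u, w, 24))    # 48.0 (both covered)
```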
### Eliciting the relationship between best-practices and quality sub-characteristics
In order to apply the framework in practice, we first needed a set of practices \(P\). To achieve this, we conducted a survey with our internal ML practitioners at Booking.com where we asked them which 3 best practices for ML systems, from the ones they apply in their day to day work, they find the most useful. In total we received 25 responses from ML engineers and scientists with a minimum of 3 years of industrial experience building production ML systems. Based on the responses we created a list of 41 practices, which can be found in Appendix D.1. Then, we obtained the values of the function \(u(p,c)\) to be used as inputs in the framework by going through the following procedure.
We conducted a workshop with 13 internal ML practitioners (ML engineers and scientists with a minimum of 3 years of industrial experience building ML systems) who were given a lecture on the proposed Software Quality Model and had interactive exercises to ensure a deep understanding of all the quality sub-characteristics and their nuances. In the end of the workshop, the practitioners were given a quiz to assess their understanding. After the quiz, the practitioners were asked to score the set of 41 practices against each quality sub-characteristic (\(C\)) on a 0-4 scale indicating their influence: irrelevant (0), weakly contributes (1), contributes (2), strongly contributes (3) and addresses (4) 2. Finally by taking the median of the scores of all the practitioners we obtain the influence of each practice \(p\) on each quality sub-characteristic \(c,u(p,c)\). To make this more concrete, we provide some examples of scores \(u(p,c)\) for several pairs of quality sub-characteristic and practices in Table 1. Influence scores for each sub-characteristic can be found in Appendix F.
Footnote 2: The scoring instructions can be found in Appendix C.1.
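The aggregation step just described reduces to a median over the annotator axis; in the sketch below the score array is random placeholder data with the shapes mentioned in the text (13 annotators, 41 practices, 29 sub-characteristics), not the actual workshop scores.

```python
import numpy as np

# Placeholder ratings: (n_annotators, n_practices, n_subcharacteristics) = (13, 41, 29).
scores = np.random.default_rng(1).integers(0, 5, size=(13, 41, 29))

u = np.median(scores, axis=0)   # influence matrix u(p, c): median over the 13 annotators
print(u.shape)                  # (41, 29)
```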
Given the influence per practice and sub-characteristic \(u(p,c)\) and a coverage threshold \(k\), we can determine when a sub-characteristic is considered _covered_. For example, given that we want to cover _Understandability_, if \(k=10\) then the practices _documentation_, _peer code review_ and _error analysis_ with influence scores \(u(p,c)\) of 4,3 and 3 respectively, do cover it. However the practices _logging of metadata and artifacts_, _data versioning_ and _alerting_, with influence scores of 2,1 and 0 respectively, do not cover _Understandability_.
### Scaling of Influence Scores
Based on the ML practitioners' evaluation, four practices each scored with an influence of _weakly contributes_ = 1 should not be treated as equivalent to a single practice scored with _addresses_ = 4; hence, to penalize weak contributions we re-scale the scores. To achieve this we chose a piecewise linear function where we define the _addresses_ influence score = 4 \(\times\) _strongly contributes_, _strongly contributes_ = 3 \(\times\) _contributes_, and _contributes_ = 2 \(\times\) _weakly contributes_. For continuous values, obtained after averaging multiple ML practitioners' scores, we apply a piecewise linear interpolation between these values, which we depict in Figure 2.

Table 1. Examples of Influence Scores \(u(p,c)\)

| Sub-characteristic | Practice | Score |
| --- | --- | --- |
| Deployability | Data Versioning | 0 |
| Repeatability | Documentation | 2 |
| Debuggability | Logging of Metadata And Artifacts | 3 |
| Traceability | Data Versioning | 3 |
| Understandability | Documentation | 4 |
We defined _coverage_ in Equation 1 as the minimum threshold of influence \(k\). We chose one _addresses_ influence to _cover_ a _sub-characteristic_, and after applying our re-scaling function we get \(k=24\). In general, the parameter \(k\) defines the coverage threshold, and the re-scaling allows to parameterize the relationship of the influence scores while keeping the scoring of the _sub-characteristic_ and practice pairs on a small linear scale of \([0;4]\in\mathbb{Z}^{0+}\).
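A minimal sketch of this re-scaling is given below; mapping a raw score of 0 to 0 is an assumption, since the text only defines the re-scaling for the four non-zero levels.

```python
import numpy as np

def rescale(score):
    # 0 -> 0 (assumed), weakly contributes -> 1, contributes -> 2,
    # strongly contributes -> 6, addresses -> 24; linear interpolation in between.
    return np.interp(score, [0, 1, 2, 3, 4], [0, 1, 2, 6, 24])

print(rescale(4))     # 24.0: a single "addresses" rating reaches the coverage threshold k = 24
print(rescale(2.5))   # 4.0: an averaged score between "contributes" and "strongly contributes"
```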
The choice of \(k\) and of the re-scaling function depend on the application where the ML System is deployed and on the risk of wrongly treating a _sub-characteristic_ as covered.
### Inter-annotator Agreement
Assessing the influence of a practice on a quality sub-characteristic is a subjective task and therefore subject to annotator disagreement. We used two tests for agreement: whether two scores are identical (referred to as _plain_ agreement) and whether two scores differ by no more than one level (referred to as _practical_ agreement). The _practical_ test is more aligned with the complexity of the task and the variance coming from the practitioners' experience and knowledge. We found an average agreement rate (between a pair of annotators) of 73.56% (plain) and 86.38% (practical). We used Cohen's Kappa to check the agreement rate while neutralizing the probability of agreement happening by chance, and reached 0.4 (plain) and 0.69 (practical). These scores represent an agreement rate which is between fair (_plain_) and substantial (_practical_) according to (Shen et al., 2017).
The observed consistency suggests that we can have new best practices sets (or new quality sub-characteristics), scored by substantially fewer practitioners, which we consider an important insight when it comes to adopting new practices in an industrial setting. For example, considering the case of only two annotators, we estimate the sampling distribution for both the agreement-rate and Kappa statistic by computing the metric for every possible pair of annotators among the 13. For the agreement rates, the standard deviation is 1.38% (_plain_) and 1.68% (_practical_), and for the Kappa statistic the standard deviation is 0.043 (_plain_) and 0.05 (_practical_). Both figures are low enough which enables us to substitute a large group of annotators with only a pair and still get reliable scores.
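The two agreement rates are straightforward to compute from a pair of annotators' scores; the sketch below reads "practical" agreement as scores within one level of each other and only computes the chance-corrected Kappa for the exact-match case, so it does not reproduce the practical Kappa of 0.69 reported above.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agreement_rates(a, b):
    # a, b: two annotators' raw 0-4 scores over the same (practice, sub-characteristic) pairs.
    a, b = np.asarray(a), np.asarray(b)
    plain = np.mean(a == b)                  # identical scores
    practical = np.mean(np.abs(a - b) <= 1)  # scores within one level of each other
    kappa_plain = cohen_kappa_score(a, b)    # chance-corrected agreement (exact match only)
    return plain, practical, kappa_plain
```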
### Algorithms
The maximization problem we want to solve is similar to the Generalized Maximum Coverage (GMC) problem (Bordes and Rafter, 1995), with a clear difference: in GMC, if a set \(X\) covers an element \(a\), then at least one subset \(Y\subset X\) covers \(a\). In our case, if a set of practices \(Q\subseteq P\) covers a sub-characteristic \(c\in C\), it might be the case that no subset of \(Q\) covers \(c\). Consider two practices \(p_{1},p_{2}\) and a sub-characteristic \(c\) with \(u(p_{1},c)=u(p_{2},c)=k/2\). In this case the set \(Q=\{p_{1},p_{2}\}\) covers \(c\) since \(f(Q;\{c\},1,k)=k\), but no subset of \(Q\) does, since \(f(\{p_{1}\};\{c\},1,k)=f(\{p_{2}\};\{c\},1,k)=k/2\) and \(f(\emptyset;\{c\},1,k)=0\). Because of this, a specific analysis is required.
The budget, expressed as the maximum number of practices to be applied, leads to a combinatorial explosion of the search space. To illustrate, the set of 41 practices we collected and a budget of 3 practices yields a search space of size \(\binom{41}{3}=10660\), whereas a budget of 10 practices yields a search space of 1.12e+9 options to explore. To tackle this computational problem we propose a greedy solution based on the observation that \(f\) is positive monotone submodular (proof in Appendix A). Maximizing a monotone submodular function is known to be NP-Hard (Hard, 1995; 1996), however a simple greedy approach yields a \((1-\frac{1}{e})\)-approximation (Shen et al., 2017) even for one general knapsack constraint (Shen et al., 2017), and it is the best polynomial time solution unless \(P=NP\) (Shen et al., 2017), (Bordes and Rafter, 1995). We propose two solutions: brute force and greedy, in Algorithms 1 and 2 respectively. In practice we found that the greedy approach rarely yields sub-optimal results for this case.
```
0:\(B\): budget of practices to be used
0:\(W\): sub-characteristic importance vector
0:\(C\): set of sub-characteristics
0:\(P\): set of practices
0:\(k\): coverage threshold
1:\(score\gets 0\)
2:\(selected\leftarrow\emptyset\)
3:for each possible subset \(P_{i}\subseteq P\) of size \(B\)do
4:\(curr\gets f(P_{i};C,W,k)\)
5:if\(curr\)\(>\) score then
6:\(score\gets curr\)
7:\(selected\gets P_{i}\)
8:endif
9:endfor
10:return\(selected\)
```
**Algorithm 1** Brute force search
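Since only the brute-force pseudocode is reproduced above, the following is a minimal Python sketch of the greedy counterpart (Algorithm 2): at each step it adds the practice with the largest marginal gain in weighted coverage. Data layout and names are illustrative, not the authors' implementation.

```python
import numpy as np

def coverage(selected, U, w, k):
    # Same coverage function f as in Eq. (1); U maps a practice to its influence array over C.
    total = np.zeros(len(w))
    for p in selected:
        total = total + np.asarray(U[p], dtype=float)
    return float(np.dot(w, np.minimum(k, total)))

def greedy_select(U, w, k, budget):
    # Repeatedly add the practice with the largest marginal gain in weighted coverage.
    selected = []
    for _ in range(budget):
        base = coverage(selected, U, w, k)
        best, best_gain = None, 0.0
        for p in U:
            if p in selected:
                continue
            gain = coverage(selected + [p], U, w, k) - base
            if gain > best_gain:
                best, best_gain = p, gain
        if best is None:      # no remaining practice improves the weighted coverage
            break
        selected.append(best)
    return selected
```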
Figure 2. Scaling function for influence scores
## 5. Applying the framework
In this section we illustrate the usage of our framework by analyzing our own best-practices set and three well-known ML best-practices sets (Bai et al., 2017), (Bai et al., 2018) and (Bai et al., 2019) (we combine the last two as they intersect) including 28, 7, and 45 best practices respectively. In each case we compute the coverage function, optimal practices sets for different budgets, and highlight gaps as well as general trends. We also provide a _global_ analysis combining all sets of best practices.
### Analyzing sets of best practices
#### 5.1.1. Internal Set
Using the influence vectors of the internal set of 41 practices applied at Booking.com, we can visualize the total contribution of the set to all the quality sub-characteristics and assess its completeness. We plot the contributions of the internal set in Figure 3, where we mark the threshold \(k=24\) contribution points indicating coverage of a quality sub-characteristic. We observe that \(22\) out of \(29\) sub-characteristics are being covered indicating a coverage rate of \(75\%\). The sub-characteristics with the largest influences are mostly associated with traditional software systems, such as _effectiveness_ and _monitoring_, while the ones with the least influences are more specific to ML systems, such as _explainability_ and _discoverability_. This is due to the fact that historically, engineering best practices are more closely related to traditional software systems and only in the recent years ML specific best practices started becoming popular. Based on this analysis we were able to identify the areas for which practices are lacking and work towards their coverage, by creating new ones. Concretely, to address the gaps in _Vulnerability_, _Responsiveness_ and _Discoverability_ we created the following practices: "Request an ML system security inspection", "Latency and Throughput are measured and requirements are defined", "Register the ML system in an accessible registry", which increase the coverage for each of the sub-characteristics respectively (see Appendix D.1 for their descriptions).
To gain further insight, we use the Greedy algorithm to find the top 3 influential practices on all quality sub-characteristics, considering them all equally important. The algorithm outputs a set of the following top 3 practices: "Write documentation about the ML system", "Write modular and reusable code", and "Automate the ML lifecycle". This result has been used to guide the ML practitioners at Booking.com on the prioritization of practice adoption in their daily work, by highlighting the value of these practices on the overall ML quality. The actual prioritization of their adoption depends on the team, since different teams and departments use different priorities for the quality sub-characteristics.
#### 5.1.2. External Sets
We analyze three ML best practices sets of 80 practices in total. Since it is impractical to have the same 13 ML practitioners scoring the 80 practices, we limit the number of annotators to 2, based on the high agreement rate for a pair of annotators observed in Section 4.3. After the scoring, we compute the plain agreement rate for the 2 annotators to be 63.5% and the practical agreement rate 94.5%. With these vectors, we can visualize the total contribution of the whole set of practices to each of the quality sub-characteristics and based on that assess which of them are being covered. In Figure 4(a) we see that, applying all the practices presented in (Bai et al., 2018), 25 sub-characteristics are covered. In this set of practices the strongest emphasis is on sub-characteristics related to _cost-effectiveness_, _responsibility_ and _modifiability_. On the other hand, _sub-characteristics_ such as _scalability_, _discoverability_, _operability_ and _responsiveness_ remain uncovered even when applying all the 45 practices from this set. Figure 4(b) illustrates the contributions by applying all the 28 practices mentioned in (Bai et al., 2018) and we observe that this set covers 17 _sub-characteristics_: we observe the top contributions to be on non-ML specific quality _sub-characteristics_, although ML specific ones such as _accuracy_ and _fairness_ are also covered. The least covered are related to collaboration such as _ownership_, _discoverability_ and _readability_. Lastly, the contributions of (Bai et al., 2018) to the software quality are depicted in Figure 4(c). This set of 7 practices manages to cover 9 quality _sub-characteristics_ with a focus on those related to _economy_ and _modifiability_. The least contributions are achieved on aspects related to the _comprehensibility_ of ML systems.
Figure 3. Coverage of the quality sub-characteristics by applying all the 41 practices from the internal set.
In general we find that all practice sets focus on different quality attributes and have gaps on different areas of our SQM. This indicates that the sets complement each other, which motivates our next analysis.
In Figure 4(d) we look into the quality coverage in the scenario where we apply all the practices combined. After removing overlapping practices (see Appendix D.2), this set includes 76 practices. We observe that when we apply the full set of 76 practices, 28 sub-characteristics are covered, which verifies that the practices complement each other. An example that shows this is _scalability_, which is not covered by any set in isolation, but only when the practices are combined. We also see that even when applying all the 76 practices, _discoverability_ remains uncovered. This shows that there is a lack of practices addressing this quality sub-characteristic, something that was also observed in the analysis of the internal practice set. Moreover, the low scores for sub-characteristics like _scalability_, _operability_, _usability_ and _responsiveness_ indicate that they receive less attention compared to the rest. On the other hand, it is encouraging to see large scores for sub-characteristics related to trustworthiness such as _fairness_ and _explainability_.
### Score and coverage threshold sensitivity
To further assess the sensitivity of the results to the scores assigned by the ML practitioners, we perturb the scores by adding a random integer in the range \([-1;1]\) and \([-2;2]\). We then take the original scores and perturbed ones, and compute the scores of each _sub-characteristic_ as if all practices were applied and rank them by the sum of scores. Then we measure the Pearson correlation coefficient of the original ranking and the ranking after the scores were perturbed. After 1000 perturbation iterations we obtain a mean correlation coefficient of 0.94 with a variance of 0.0002 for perturbing by \([-1;1]\), and a mean of 0.91 with a variance of 0.0006 for perturbing by \([-2;2]\) respectively. A random integer in the range \([-3;3]\) yields a mean of 0.86 and a variance of 0.0016. This shows that our results are robust to scoring variance. Regarding the coverage threshold \(k\) we remark that 24 points is rather low since one single practice with _addresses_ score would cover the sub-characteristic, at the same time, in Figures 3 and 4 we can see that small changes in \(k\) do not lead to big changes in which quality sub-characteristics are covered, more importantly, the general observations hold even for moderate changes in \(k\).
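A sketch of this perturbation test is given below; whether the perturbed scores are clipped back to the 0-4 range is not stated in the text, so the clipping here is an assumption.

```python
import numpy as np

def ranking_stability(scores, delta, n_iter=1000, seed=0):
    # scores: (n_practices, n_subcharacteristics) array of raw 0-4 influence scores.
    # Perturb every score by a random integer in [-delta, delta], re-rank the
    # sub-characteristics by their total influence, and report the mean Pearson
    # correlation between the original and perturbed rank vectors.
    rng = np.random.default_rng(seed)
    base = scores.sum(axis=0)
    base_rank = np.argsort(np.argsort(base))
    corrs = []
    for _ in range(n_iter):
        noise = rng.integers(-delta, delta + 1, size=scores.shape)
        pert = np.clip(scores + noise, 0, 4).sum(axis=0)   # clipping to [0, 4] is an assumption
        pert_rank = np.argsort(np.argsort(pert))
        corrs.append(np.corrcoef(base_rank, pert_rank)[0, 1])
    return float(np.mean(corrs))
```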
### How many practices are enough?
To evaluate how many practices are enough to maximize quality, we analyze the internal and open source sets combined (after removing overlapping practices the combined set has 101 practices, see Appendix D.2 for details). Using our prioritization framework we find the minimum number of practices which cover the same number of quality _sub-characteristics_ as the full set of those 101 practices. To achieve that, we find the top \(N\) practices from the combined set of practices using our greedy algorithm (brute force takes too long), for \(N\in[1,101]\) and we evaluate what percentage of the quality _sub-characteristics_ is being covered with each set of practices. Figure 5 illustrates the coverage percentage for all the values of \(N\). We see that applying 5 practices covers almost 40%, 10 cover 70%, and to reach 96%, 24 are needed. The coverage does not increase further with the practices. This result shows that using a relatively small number of practices can achieve similar results in terms of quality coverage to the full set of 101 practices. This means that when applying the right set of practices, a significant reduction in the effort of adoption can be achieved, which is especially relevant in an industrial setting.
### Which are the best practices?
To gain further insights as to which are the 24 practices which maximize coverage, we provide the optimal set in Table 2, along with the source of each practice (some practices have been renamed for better clarity, see Appendix D.3 for details). It is important to note that here we assume equal importance for each quality sub-characteristic, something that needs to be taken into account from ML practitioners wanting to use this set as guidance. In case a different importance weighting is desired, one needs to re-create this set after applying importance weights to each sub-characteristic. Prioritization within the final set, can be achieved by taking into account the specific needs of an organization (for example if safety is top priority, practices focusing on robustness should be prioritized) or the cost of adoption per practice.
### Further applications of the framework
The proposed SQM is currently being used to construct a quality assessment framework for ML systems. Concretely, the framework assesses the coverage of each quality sub-characteristic on an ML system level, to pinpoint improvement areas. Implementing an
ML quality assessment framework without an SQM for ML systems would lead to an incomplete picture of ML quality. Moreover, the prioritization framework is being used alongside the quality assessment framework: After the quality of an ML system is assessed, by assigning a quality score per quality sub-characteristic, the sub-characteristics with low scores are provided as input to the prioritization framework in order to recommend the best 3 practices to apply in order to cover them. This has been very helpful for ML practitioners as it allows them to prioritize the improvements to be made efficiently, by focusing on practices that have the largest influence on the quality attributes that are considered the most important for the use case at hand.

Table 2. The practices which maximize quality.

| Practice Name & Source |
| --- |
| Versioning for Data, Model, Configurations and Scripts (Krishnan et al., 2017) |
| Continuously Monitor the Behaviour of Deployed Models (Krishnan et al., 2017) |
| Unifying and automating ML workflow (Bartos et al., 2017) |
| Remove redundant features. (Krishnan et al., 2017) |
| Continuously Measure Model Quality and Performance (Krishnan et al., 2017) |
| All input feature code is tested. (Krishnan et al., 2017) |
| Automate Model Deployment (Krishnan et al., 2017) |
| Use of Containerized Environment [Appx. D.1] |
| Unified Environment for all Lifecycle Steps [Appx. D.1] |
| Enable Shadow Deployment (Krishnan et al., 2017) |
| The ML system outperforms a simple baseline. (Krishnan et al., 2017) |
| Have Your Application Audited (Krishnan et al., 2017) |
| Monitor model staleness (Krishnan et al., 2017) |
| Use A Collaborative Development Platform (Krishnan et al., 2017) |
| Explain Results and Decisions to Users (Krishnan et al., 2017) |
| The ML system has a clear owner. [Appx. D.1] |
| Assign an Owner to Each Feature and Document its Rationale (Krishnan et al., 2017) |
| Computing performance has not regressed. (Krishnan et al., 2017) |
| Communicate, Align, and Collaborate With Others (Krishnan et al., 2017) |
| Perform Risk Assessments (Krishnan et al., 2017) |
| Peer Review Training Scripts (Krishnan et al., 2017) |
| Establish Responsible AI Values (Krishnan et al., 2017) |
| Write documentation about the ML system. [Appx. D.1] |
| Write Modular and Reusable Code [Appx. D.1] |
Additionally, the SQM has created a common language for ML practitioners to discuss ML quality topics and quality related initiatives are easier to be justified. For example, it is more straightforward to argue about the value of an initiative targeting to increase the adoption of unit-testing for ML systems, since the benefit of it, e.g. improvement in _modifiability_ of the system, is clear.
An advantage of our framework is that it is flexible enough to be adapted to other organizations. For completeness, we describe how this can happen. The organization needs to determine which quality _sub-characteristics_ are the most crucial, by specifying the importance weights \(W\) for each _sub-characteristic_. The provided software practices can be used as is or new ones can be added and scored
Figure 4. Coverage of the quality sub-characteristics by applying all the best practices in [27] (4a), [7] (4b), [2] (4c) and the combined set (4d).
by ML practitioners within the organization. Lastly, a coverage threshold \(k\) should be chosen based on how strict an organization wants to be for solving a given quality _sub-characteristic_. To deal with disagreements in the scores \(u(p,c)\) or the coverage threshold \(k\), the mean or median can be taken. Then, all an ML practitioner needs to do is to run the prioritization algorithm using as inputs the quality _sub-characteristics_\(C\) to be improved, the set of practices \(P\) to be considered, the allowed budget \(B\), the importance vectors \(W\) and the coverage threshold \(k\), and then adopt the optimal practices which are recommended by the framework.
## 6. Conclusions and Discussions
**Conclusion.** In this work we presented a framework to analyse the relationship between software engineering best practices for ML Systems and their quality with the primary purpose of prioritizing their implementation in an industrial setting. We addressed the challenge of defining quality by introducing a novel Software Quality Model specifically tailored for ML Systems. The relationship between best practices and the various aspects of quality was elicited by means of expert opinion and represented by vectors over the sub-characteristics of the Software Quality Model. With these vectors we applied Set Optimization techniques to find the subset of best practices that maximize the coverage of the SQM. We applied our framework to analyse 1 large industrial set of best practices as implemented at Booking.com and 3 public sets. Our main findings are:
1. Different best-practices sets focus on different aspects of quality, reflecting the priorities and biases of the authors.
2. Combining the different best-practices sets, high coverage is achieved, remarkably, aspects that no single best-practices set covers on its own are covered by integrating different practices proposed by different authors.
3. Even though there is a proliferation of best practices for ML Systems, when chosen carefully, only a few are needed to achieve high coverage of all quality aspects.
4. Even though the influence of best-practices on quality aspects is a subjective concept we found surprisingly high consistency among experts.
Our framework was useful to spot gaps in our practices leading to the creation of new ones to increase the coverage of specific quality aspects.
**Limitations.** A limitation of this work is that in order to add a new quality _sub-characteristic_ or a new practice to the framework, one needs to score the influence vectors which is a time consuming procedure. On the other hand, the addition or removal of an existing practice or quality _sub-characteristic_ does not influence the existing scores. Another caveat regards the subjectivity of the influence vectors based on the individuals who conduct the scoring. However, our sensitivity analysis described in Section 5.2 indicates that our results are robust to scoring variance, which mitigates the subjectivity concerns.
**Future Work.** Future work will focus on a comparison of our framework with baseline prioritization approaches (such as prioritizing the most popular practices first or the ones requiring the least effort) and on assessing the coverage of _sub-characteristics_ in existing ML Systems. We will also keep evolving the assessment framework mentioned in Section 5.5 since this can provide visibility on quality gaps of ML systems, and along with the prioritization framework can provide guidance to ML practitioners on the optimal actions to take to improve them. Furthermore, exploring more realistic practice implementation cost functions can lead to a better cost and quality trade-off. Lastly, even though we aim at producing a complete software quality model, further validation is necessary especially by the external ML community.
###### Acknowledgements.
We would like to thank the ML practitioners of Booking.com for being very helpful with providing input for important components of this work: the scoring of the influence vectors, the survey of best practices, the feedback on the SQM and the prioritization framework. Quality improvements can only be done successfully in collaboration with the practitioners, and without their help this work would not be possible.
|
2305.04896 | The hazardous km-sized NEOs of the next thousands of years | The catalog of km-sized near-Earth objects (NEOs) is nearly complete. Typical
impact monitoring analyses search for possible impacts over the next 100 years
and none of the km-sized objects represent an impact threat over that time
interval. Assessing the impact risk over longer time scales is a challenge
since orbital uncertainties grow. To overcome this limitation we analyze the
evolution of the Minimum Orbit Intersection Distance (MOID), which bounds the
closest possible encounters between the asteroid and the Earth. The evolution
of the MOID highlights NEOs that are in the vicinity of the Earth for longer
periods of time, and we propose a method to estimate the probability of a deep
Earth encounter during these periods. This metric is used to rank the km-sized
catalog in terms of their long-term impact hazard to identify targets of
potential interest for additional observation and exploration. | Oscar Fuentes-MuΓ±oz, Daniel J. Scheeres, Davide Farnocchia, Ryan S. Park | 2023-05-08T17:36:52Z | http://arxiv.org/abs/2305.04896v1 | # The hazardous km-sized NEOs of the next thousands of years
###### Abstract
The catalog of km-sized near-Earth objects (NEOs) is nearly complete. Typical impact monitoring analyses search for possible impacts over the next 100 years and none of the km-sized objects represent an impact threat over that time interval. Assessing the impact risk over longer time scales is a challenge since orbital uncertainties grow. To overcome this limitation we analyze the evolution of the Minimum Orbit Intersection Distance (MOID), which bounds the closest possible encounters between the asteroid and the Earth. The evolution of the MOID highlights NEOs that are in the vicinity of the Earth for longer periods of time, and we propose a method to estimate the probability of a deep Earth encounter during these periods. This metric is used to rank the km-sized catalog in terms of their long-term impact hazard to identify targets of potential interest for additional observation and exploration.
Asteroids (72), Near-Earth objects (1092), Astrodynamics (76), Asteroid dynamics (2210), Close encounters (255)

Oscar Fuentes-Muñoz, Daniel J. Scheeres, Davide Farnocchia, and Ryan S. Park
## 1 Introduction
Asteroid impacts are one of the few natural disasters that can be prevented through human action. The main planetary defense efforts consist of observations, orbit determination and impact hazard assessment, and deflection/in-situ characterization. The near-Earth asteroid catalog is being completed by current and proposed surveys, providing new candidates of a future collision to study in more detail.
In 1998 the congress of the US requested NASA to detect and catalog 90% of the km-sized NEO population1. As of 2023-02-08, the catalog is around 95% complete, with an estimated population of \(962^{+52}_{-56}\)(Granvik et al., 2018). Impact monitoring systems estimate the orbits of newly discovered objects and compute any impact probabilities in future close encounters. Using the observational data available for a given object, the orbit is statistically estimated within an uncertainty region. This uncertainty region is efficiently sampled using various techniques to assess impact probabilities.
Footnote 1: More details on the historical efforts of the U.S. Government to track and mitigate asteroids were given in two parts of a hearing before the Committee on Science, Space and Technology of Congress in March 19, 2013 and April 10, 2013. Full hearing statements accessible at [https://www.govinfo.gov/content/pkg/CHRG-113hhrg80552/pdf/CHRG-113hhrg80552.pdf](https://www.govinfo.gov/content/pkg/CHRG-113hhrg80552/pdf/CHRG-113hhrg80552.pdf)
The first generation impact monitoring system relied on the Line of Variations technique (Milani et al., 2005), sampling a suitably chosen direction of the uncertainty region. More recently, Roa et al. (2021) describe a different approach that samples the full N-dimensional uncertainty region and identifies virtual impactors by using the impact condition as an observable. This latter approach is used by JPL's Sentry-II system2.
Footnote 2: [https://cneos.jpl.nasa.gov/sentry/](https://cneos.jpl.nasa.gov/sentry/)
In this paper we investigate the potential impact risk over an order of magnitude larger timescales, in the next thousand years. To do this we review the two conditions required for an impact to occur (Valsecchi et al., 2003), and how the growth in orbit uncertainty affects them. The first one is that the Minimum Orbit Intersection Distance (MOID) has to be smaller than the combined radii of the two bodies, taking into account the gravitational focusing factor. This condition motivates the orbit condition for the definition of a Potentially Hazardous Asteroid (PHA): having an Earth MOID \(<0.05\) au (Bowell and Muinonen, 1994). Similarly, the MOID can be used to rule out NEOs for further potential impact analysis. The MOID is found as a function of the orbit elements of the Earth and those of the
NEO, but does not directly depend on the position along the orbit (Gronchi, 2005). The uncertainty in these elements does not grow as fast as in mean anomaly, which allows us to propagate it confidently in longer timescales. Previous works studied the models required to propagate the MOID (Gronchi and Tardioli, 2013), including the applicability of the 3-body problem. In the presence of planetary encounters and complex long-term secular interactions, we must use numerical integration to propagate the orbits.
The second condition is on the timing of the flyby: the two bodies must be at the same time in the region in which their relative distance allows for a collision (Valsecchi et al., 2003). Uncertainty in the asteroids position grows faster in the direction of motion, limiting the assessment of future impacts. After a few centuries the uncertainty in mean anomaly can cover the whole orbit. This phenomenon is used as an assumption of analytical theories of impact rates in timescales of millions of years (Opik, 1951; Wetherill, 1967). In this work we keep track of the uncertainty in mean anomaly and use this assumption when the MOID condition is met. Previous works combine these assumptions in hundreds of thousands of years timescales (Vokrouhlicky et al., 2012; Pokorny and Vokrouhlicky, 2013), using analytical models of the long-term dynamics. In these much longer timescales the uncertainty in NEO orbits grows large enough that lower fidelity models of the long-term dynamics provide good estimates of the frequency of close encounters (Fuentes-Munoz et al., 2022). Thus, in the shorter timescales of this work, we propose the combination of the two conditions although we propagate the orbits of the NEOs numerically.
We investigate the long-term MOID dynamics and identify km-sized NEOs that are frequently in the neighborhood of Earth. Then, we keep track of the uncertainty in mean anomalies during those low-MOID periods of the NEOs. Using the analytical approximations we estimate the probability of a deep encounter. This metric allows us to rank the km-sized NEO population, highlighting a few km-sized NEOs for further detailed analysis.
## 2 Long-Term NEO Hazard Characterization
In this section we describe the tools and methods used to analyze the long-term dynamics of the km-sized NEO population and the estimation of their potential impact hazard. The MOID time histories are obtained following the propagation of the orbit. Hence, we first describe the orbit propagator as motivated by the NEO long-term dynamics and then the MOID algorithm and dynamics. Last, we introduce the long-term collision hazard metric that is used to rank the selected group of near-Earth objects.
### Orbit propagation
The orbits of the NEOs are propagated using the JPL small-body integrator, which is based on an N-body model that includes the Sun, the planets, Pluto, the Moon, and small-body perturbers (Farnocchia et al., 2015). For objects whose Yarkovsky effect has been detected from astrometric data (Farnocchia et al., 2013), we add it to the force model. The ephemeris models used in the integration are DE441 (Park et al., 2021) for the planets and SB441-N16 for the largest main-belt bodies3.
Footnote 3: Available at: [ftp://ssd.jpl.nasa.gov/pub//eph/small_bodies/asteroids_de441/SB441_MOM392R-21-005_perturbers.pdf](ftp://ssd.jpl.nasa.gov/pub//eph/small_bodies/asteroids_de441/SB441_MOM392R-21-005_perturbers.pdf)
Figure 1 shows the propagation of the orbit of 2015 FP332, which reveals the relevant dynamical effects to the long-term dynamics of near-Earth objects. 2015 FP332 is in a Lidov-Kozai cycle (Lidov, 1962; Kozai, 1962), in which periods of high eccentricity are exchanged with periods of high inclination. In this case, both longitude of the node and argument of perihelion drift secularly.
Figure 1: Numerical propagation of the orbit of 2015 FP332, a km-sized NEO. The trajectories are shown using the Keplerian elements semi-major axis, eccentricity, inclination, longitude of the ascending node, argument of perihelion and mean anomaly with respect to the nominal propagation. Individual Monte-Carlo runs (N=21) are shown in colors, the nominal trajectory is shown in a black line. The bottom rows show the propagation of Earth and Venus MOID.
Planetary encounters can cause the exponential growth of the distance between initially neighboring trajectories, a necessary condition for chaos (Tancredi, 1998). Neighboring trajectories of near-Earth asteroids can diverge in timescales ranging from decades, such as 99942 Apophis (Farnocchia et al., 2013a); to hundreds of years, such as 29075 (1950 DA) (Farnocchia and Chesley, 2014); to tens of thousands of years, such as 433 Eros (Michel et al., 1996). In this process the linear approximation of the state uncertainty can quickly become inaccurate.
Figure 1 shows the propagation of the multiple samples or virtual asteroids of the orbit of 2015 FP332. In this example, the nominal trajectory of 2015 FP332 experiences a very close Venus encounter that causes the rapid increase in semi-major axis. Once each initially neighboring virtual asteroid diverges to a different trajectory, it experiences a unique sequence of close encounters. This effect motivates the use of the MOID to estimate long-term probabilities of collision. The resulting dynamics under these encounters are very nonlinear, and the orbits of near-Earth objects in these timescales become stochastic. For this reason, we sample the uncertainty in the orbits of NEOs and propagate them in a Monte Carlo simulation. The detection of potential impactors of small probabilities is out of the scope of this work, in which the main metric of interest is the MOID. For this reason we run a limited number of Monte Carlo samples (N=21), which allows us to distinguish the main dynamical effects as well as the uncertainty in mean anomaly.
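As an illustration of the sampling step only, the sketch below draws N=21 virtual asteroids from a multivariate normal distribution over the six orbital elements; the nominal elements and the diagonal covariance are placeholder values, since the actual orbit solutions and covariances come from JPL's Small-Body Database.

```python
import numpy as np

def sample_virtual_asteroids(x_nom, cov, n_samples=21, seed=0):
    """Draw virtual asteroids from the orbit-determination covariance.

    x_nom : nominal elements (a [au], e, i, node, peri, M [deg]).
    cov   : 6x6 covariance matrix of the orbit solution.
    """
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(x_nom, cov, size=n_samples)

# Placeholder nominal elements and a diagonal covariance, for illustration only.
x_nom = np.array([1.35, 0.33, 33.5, 117.0, 48.0, 210.0])
cov = np.diag([1e-16, 1e-14, 1e-10, 1e-10, 1e-10, 1e-8])
virtual_asteroids = sample_virtual_asteroids(x_nom, cov)
print(virtual_asteroids.shape)  # (21, 6): one row per Monte Carlo sample
```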
The presence of close encounters is expected if the near-Earth object has a small MOID with any of the planets. Thus, tracking the evolution of the MOID is relevant not only for the evaluation of the probability of collision with Earth but also for understanding when the dynamics are subject to nonlinear stochastic variations. The evolution of the orbit of 2015 FP332 in Figure 1 shows the effect of a low-MOID period in the long-term prediction. A Venus low-MOID enables close approaches that cause the distribution of orbits to expand rapidly and the mean anomaly to become unknown.
### MOID algorithm and dynamics
The MOID is the result of the optimization of the relative distance between two bodies over their respective fast angles. There are multiple algorithms to compute the MOID in the literature, including analytical methods (Gronchi, 2005) and numerical methods such as Wisniowski and Rickman (2013) or Hedo et al. (2018), which is used in this work. The MOID, a function of the osculating orbit elements, is then computed when post-processing the numerically integrated trajectory.
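The algorithm of Hedo et al. (2018) is considerably more careful about robustness and efficiency; purely to illustrate the quantity being computed, the following sketch minimizes the distance between two Keplerian ellipses over their true anomalies with a coarse grid scan followed by a local refinement (two-body geometry only, hypothetical orbital elements).

```python
import numpy as np
from scipy.optimize import minimize

def kepler_position(a, e, i, Om, w, nu):
    """Heliocentric position (au) on a Keplerian ellipse at true anomaly nu (rad)."""
    r = a * (1.0 - e**2) / (1.0 + e * np.cos(nu))
    x_p, y_p = r * np.cos(nu), r * np.sin(nu)          # perifocal coordinates
    cO, sO, ci, si, cw, sw = np.cos(Om), np.sin(Om), np.cos(i), np.sin(i), np.cos(w), np.sin(w)
    # Standard perifocal -> ecliptic rotation (node, inclination, argument of perihelion).
    return np.array([
        (cO * cw - sO * sw * ci) * x_p + (-cO * sw - sO * cw * ci) * y_p,
        (sO * cw + cO * sw * ci) * x_p + (-sO * sw + cO * cw * ci) * y_p,
        (sw * si) * x_p + (cw * si) * y_p,
    ])

def moid(el1, el2, n_grid=180):
    """Brute-force MOID: coarse grid over both true anomalies plus local refinement."""
    nus = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    dist = lambda v: np.linalg.norm(kepler_position(*el1, v[0]) - kepler_position(*el2, v[1]))
    grid = [(dist((v1, v2)), v1, v2) for v1 in nus for v2 in nus]
    _, v1_0, v2_0 = min(grid)
    res = minimize(dist, x0=[v1_0, v2_0], method="Nelder-Mead")
    return res.fun

# Hypothetical elements (a [au], e, i, node, peri in radians), for illustration only.
earth = (1.000, 0.0167, 0.0, 0.0, np.deg2rad(103.0))
neo = (1.349, 0.330, np.deg2rad(33.47), np.deg2rad(117.0), np.deg2rad(48.0))
print(f"MOID ~ {moid(earth, neo):.4f} au")
```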
Depending on the dynamical effects on the asteroid described in the previous section, we find a variety of long-term MOID behaviors. Figure 2 shows a few examples of the dynamics of the propagation of the MOID for four km-sized NEOs. 2021 UY9 represents the simplest case, in which the MOID does not become small throughout the simulation time, therefore making Earth impacts impossible. The uncertainty in the orbit of 2021 UY9 remains
Figure 2: Propagation of the Earth MOID of a few selected examples of km-sized near-Earth objects. For each asteroid we show the 21 Monte Carlo simulations (colors) and the nominal orbit propagation (black, continuous). The orbit elements in the figure indicate the initial conditions of the propagation.
small throughout the propagation. The case of 29075 (1950 DA) is a very common one for NEOs, in which the MOID drifts secularly until a future zero crossing that lasts a short period of time, of about a century. The example of 136618 (1994 CN2) is similar to the case of 2015 FP332 in Figure 1, where the date at which the MOID first becomes small is uncertain. After a low-MOID period the trajectory becomes more uncertain. The last example, 2329 Orthos (1976 WA), illustrates the scenario of an extended period of time with a low Earth MOID. This is caused by the combination of two effects, a large amplitude of short-period oscillations and a favorable phasing of the secular cycle.
The examples of Figure 2 are also representative of the growth in uncertainty of the MOID. In the top two cases the uncertainty remains small for thousands of years. The km-sized NEA population has well-defined orbits, and their Earth MOIDs remain well known for hundreds of years. Previous works focus on mapping the orbit covariance into a confidence region of the MOID (Gronchi & Tommei, 2007). However, the uncertainty in the orbit can become far from Gaussian in long-term orbit propagations. Thus, we use statistics of the Monte Carlo propagation as indicators of the spread of the distribution as well as the confidence in our predictions.
### Long-term impact probability estimation
The complexity in long-term MOID dynamics that we showed in the previous section motivates the development of a systematic method to quantify the long-term Earth impact hazard of NEOs. We propose a novel metric to characterize the potential impact hazard that consists in an estimated probability of collision between planet and NEO. The probability of collision is a problem primarily studied in two major timescales. The fundamental problem of impact hazard assumes the position of the asteroid within its orbit is reasonably well determined and it is possible to precisely determine the geometry of the subsequent close encounters. In the case of potentially hazardous asteroids, this analysis can typically be completed for one or two centuries (Chamberlin et al., 2001; Roa et al., 2021). In these timescales the uncertainty in the orbit of many NEOs starts to grow large enough that the position within the orbit can become unknown. This effect motivates the statistical assumption of a uniformly distributed mean anomaly.
Traditional impact probability theories assume that the orbit elements of the two objects involved are constant and have one intersection point (Opik, 1951; Wetherill, 1967). Then, the probability depends on the timing of the orbits, which is where the mean anomalies are assumed uniformly distributed. This timing probability, here \(P_{MA}\), is the probability that both bodies are at the right place at the right time, i.e., within the range of mean anomalies that corresponds to a collision or a flyby within a small distance.
There are a few options for the timing probability in the literature. In the most simplified case, we can assume the planet's orbit to be circular and use Opik's formula (Opik, 1951), in which the probability is a function of the Keplerian elements a-e-i of the asteroid. Wetherill (1967) then derived an expression for an elliptic orbit of the planet, with the problem of being singular at zero inclination. In this work we use this expression as re-derived recently in JeongAhn & Malhotra (2017) for regular non-tangential encounters. In particular, we use the extended expression for the case in which the two objects do not exactly intersect. That means that the MOID is a positive value between 0 and the distance threshold for the close encounters of interest \(d\). Thus, the probability that two objects have a close encounter with closest approach distance smaller than \(d\) is:
\[P_{MA}=\frac{2Ud}{T_{p}T_{NEO}|\mathbf{v_{p}\times v_{NEO}}|}\sqrt{1-\frac{ MOID^{2}}{d^{2}}} \tag{1}\]
where \(\mathbf{v_{p}}\) and \(\mathbf{v_{NEO}}\) are the velocities of the planet and the asteroid at the point that defines the MOID, \(U\) is the relative velocity at the same point, and \(T_{p},T_{NEO}\) are the respective orbit periods. The square root term of equation 1 adjusts the probability for a non-zero MOID. If MOID \(>d\), the probability is assumed to be zero. This expression can be averaged for a MOID uniformly distributed between 0 and \(d\). However, in this work we do not need to make this assumption as we keep track of the MOID throughout our long-term propagation and the distribution can be far from uniform in the range 0 \(<\)MOID\(\leq d\).
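The following sketch is a direct numerical transcription of equation 1, returning a rate (e.g., yr\({}^{-1}\) when distances are in au, velocities in au yr\({}^{-1}\), and periods in yr); the velocity vectors at the MOID point would in practice come from the propagated trajectories, so the values in the example are placeholders.

```python
import numpy as np

KMS_TO_AUYR = 365.25 * 86400.0 / 1.495978707e8   # km/s -> au/yr

def timing_probability(d, moid, v_p, v_neo, T_p, T_neo):
    """Eq. (1): rate of encounters closer than d, valid for 0 <= MOID <= d.

    d, moid    : threshold distance and MOID (au).
    v_p, v_neo : velocity vectors of planet and NEO at the MOID point (au/yr).
    T_p, T_neo : orbital periods (yr).
    """
    if moid >= d:
        return 0.0
    v_p, v_neo = np.asarray(v_p, float), np.asarray(v_neo, float)
    U = np.linalg.norm(v_neo - v_p)                   # relative speed at the MOID point
    cross = np.linalg.norm(np.cross(v_p, v_neo))
    return 2.0 * U * d / (T_p * T_neo * cross) * np.sqrt(1.0 - (moid / d) ** 2)

# Placeholder velocity vectors (km/s converted to au/yr), for illustration only.
v_earth = np.array([29.78, 0.0, 0.0]) * KMS_TO_AUYR
v_neo = np.array([14.0, 18.0, 12.0]) * KMS_TO_AUYR
print(timing_probability(d=0.0026, moid=0.001, v_p=v_earth, v_neo=v_neo, T_p=1.0, T_neo=1.57))
```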
Once we allow the orbit of the NEO to be time-varying, we can obtain the probability of collision as the combination of two terms: the probability that there is an intersection between the planet and near-Earth object and \(P_{MA}\). If we investigate a potential Earth collision, the condition is that the Earth MOID is smaller than the combined radii of the two bodies considering gravitational focusing as required. The gravitational focusing factor virtually extends the radius of the planet to account for trajectories that lead to a collision due to the planet's gravity, and is a function of the incoming velocity of the asteroid \(V_{\infty}\) and the mass and radius of the planet \(M_{p},R_{p}\):
\[\gamma=\sqrt{1+\frac{2GM_{p}}{R_{p}V_{\infty}^{2}}} \tag{2}\]
This approach has been used in the past to obtain the probability of collision for asteroids under the Lidov-Kozai cycle (Vokrouhlicky et al., 2012; Pokorny & Vokrouhlicky, 2013). In that case, the generalized probability of collision is obtained as the sum over all the crossing configurations (noted with *) of the fraction of time that the NEO spends within the distance threshold times the timing probability:
\[P=\sum_{*}\left(\frac{\Delta t_{MOID<d}}{T_{sec}}\right)^{*}P_{MA}(d,K_{p}^{*}, K_{NEO}^{*}) \tag{3}\]
where \(K_{P},K_{NEO}\) are the Keplerian elements of the planet and the NEO and \(\Delta t_{MOID<d}\) is the amount of time that the NEO spends within the distance threshold \(d\). Vokrouhlicky et al. (2012) and Pokorny & Vokrouhlicky (2013) model the long-term asteroid dynamics with an analytical solution of the Lidov-Kozai cycle of the Jupiter perturbation, which defines the secular period \(T_{sec}\) as the Lidov-Kozai period. As a result, the fraction \(\Delta t\) and the intersection configurations are computed analytically. As shown in the previous section, defining the times at which the MOID is small can be a complex problem under a wide range of dynamical contributions. In this work we propagate the orbit numerically to find the low-MOID periods. Because there is not a small discrete number of crossings along the long-term dynamics of the NEO, we estimate the probability as the average throughout the propagation time \(T\) using equation 4.
\[P=\frac{1}{T}\int_{T}\kappa P_{MA}(d,K_{p},K_{NEO})dt \tag{4}\]
This integral is computed numerically using the numerically integrated trajectories. The factor \(\kappa\) is introduced so that we can null the contribution of the parts of the trajectory in which the position of the object is deterministic within its orbit, i.e., when the uncertainty in mean anomaly is small. \(\kappa\) is set to 0 before the first date at which we find that the standard deviation in mean anomalies is larger than 10 degrees, and set to 1 elsewhere. This distinction allows us to rule out the associated risk of objects that currently have a very low MOID but whose position is well constrained for the duration of their visit to the planet's vicinity. In addition, we check the close encounters that were recorded in the propagation before we can use the analytical expression for \(P_{MA}\) in equation 1.
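A hedged sketch of how equation 4 could be evaluated from the Monte Carlo output is given below; it assumes that the epochs, the instantaneous rate of equation 1, and the mean-anomaly standard deviation have already been computed along the propagated trajectory.

```python
import numpy as np

def long_term_encounter_rate(t, p_ma, sigma_ma_deg, sigma_threshold_deg=10.0):
    """Time-averaged encounter rate of Eq. (4).

    t            : epochs of the propagation (yr).
    p_ma         : rate from Eq. (1) at each epoch (zero wherever MOID > d).
    sigma_ma_deg : standard deviation of the mean anomaly across the Monte Carlo
                   samples at each epoch (deg).
    kappa nulls the deterministic part of the trajectory: it is 0 before the
    first epoch at which sigma_MA exceeds the 10-degree threshold, 1 afterwards.
    """
    t = np.asarray(t, float)
    p_ma = np.asarray(p_ma, float)
    kappa = np.zeros_like(t)
    exceeded = np.nonzero(np.asarray(sigma_ma_deg) > sigma_threshold_deg)[0]
    if exceeded.size:
        kappa[exceeded[0]:] = 1.0
    integrand = kappa * p_ma
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))  # trapezoidal rule
    return integral / (t[-1] - t[0])
```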
## 3 Km-sized NEO population long-term characterization
We analyze the potential impact hazard of the km-sized NEO population in the next millennium. Using the very low-MOID necessary condition for a potential collision, we can rule out the collision hazard when this condition is not met. Then, considering the statistical evolution of the mean anomalies, we rank this group of NEOs depending on their long-term implicit impact hazard.
### NEOs frequently in Low-MOID regions
The orbits of the known km-sized NEO population are propagated starting from their orbit solutions in JPL's Small-body Database as of 2022-09-154. For each NEO we find the first date at which a low-MOID period is found across all of the Monte Carlo samples, with a threshold defined as MOID \(<0.01\) au (235 Earth Radii or 3.89 Lunar Distances). At this threshold, the incoming velocity \(V_{\infty}\) required for a collision is 0.05 km s\({}^{-1}\) or less, from solving equation 2. From a statistical point of view, this relative velocity is extremely unlikely (Farnocchia and Chodas, 2021; Harris and Chodas, 2021).
Footnote 4: Small-body Database available for query at: ssd.jpl.nasa.gov/tools/sbdb.query.html
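The numbers quoted above can be reproduced by inverting equation 2: for a collision to remain possible at MOID \(=0.01\) au, the gravitational focusing factor would have to be about the MOID divided by the Earth radius, roughly 235. A quick check with standard values for Earth's GM and radius:

```python
import math

GM_EARTH = 398600.4418   # km^3 s^-2
R_EARTH = 6371.0         # km
AU_KM = 1.495978707e8    # km
LD_KM = 384400.0         # km

moid = 0.01 * AU_KM                    # 0.01 au expressed in km
gamma_needed = moid / R_EARTH          # focusing factor needed to reach Earth's surface
# Invert Eq. (2): gamma^2 = 1 + 2 GM / (R V_inf^2)  =>  V_inf = sqrt(2 GM / (R (gamma^2 - 1)))
v_inf = math.sqrt(2.0 * GM_EARTH / (R_EARTH * (gamma_needed**2 - 1.0)))
print(f"0.01 au = {moid / R_EARTH:.0f} Earth radii = {moid / LD_KM:.2f} LD, V_inf <= {v_inf:.3f} km/s")
```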
The first question we answer is how many km-sized NEOs currently have a MOID \(<0.01\) au, and how this number will evolve in the next 1000 years. As of the time at which JPL's SBDB was queried, there are 40 NEOs that fulfill this condition. The evolution of this estimated number of NEOs is shown in Figure 3. As the uncertainty in the orbits of the NEOs grows into the future, only some of the MC samples may have MOID \(<0.01\) au. This phenomenon is shown in more detail in Figure 4, which shows the estimated range of km-sized NEOs with MOID \(<0.01\) au based on the Monte Carlo samples. The uncertainty remains very small (\(\pm 1\) body) throughout the next 500 years. By the end of the millennium, this number is in the range of 26-72 km-sized NEOs. As mentioned earlier, none of these objects pose a collision threat to Earth in the next 100 years.
Individual results of the MOID propagation are shown in Figure 4. We sort the km-sized NEOs by the date at which they meet the MOID \(<0.01\) au condition. As characterized by the length of their low-MOID periods, we show NEOs that are expected to be continuously in the vicinity of Earth as opposed to the ones that are only briefly. We observe that even though the number of NEOs never exceeds an average value of 40-45, the total number of unique low-MOID NEOs in the next 1000 years is almost 150.
Figure 3: Number of km-sized NEOs that present a low MOID throughout the next 1000 years, defined as an Earth MOID \(<0.01\) au. The minimum number of NEOs is estimated from NEOs for which there was an agreement between all Monte Carlo runs. The maximum number of NEOs is estimated from at least one Monte Carlo run fulfilling the low-MOID condition.
We list in Table 1 the km-sized NEOs that frequently experience MOID \(<\) 0.01 au, based on the fraction of the next 1000 years during which they meet the condition. Their implicit probability of collision is assessed in the next section. There are 4 objects whose MOID remains lower than 0.01 au throughout this millennium: 7482 (1994 PC1), 68950 (2002 QF15), 164121 (2003 YT1), 144332 (2004 DV24). In the second and third cases, the mean anomaly remains well defined throughout the millennium.
The propagation of the MOID of a few of the top-ranked NEOs is shown in Figure 5. Most remarkably, we can see how low the MOID of 7482 (1994 PC1) remains throughout the next 1000 years. In 68950 (2002 QF15) and 164121 (2003 YT1) we observe a secular drift in the MOID. In the case of 68950 (2002 QF15), this secular drift predicts a near-zero MOID around year 2500. In the case of 164121 (2003 YT1), the MOID is increasing at a relatively slow rate. The last example, 143651 (2003 QO104), shows a large amplitude of the MOID around zero, which motivates additional analysis of the long-term hazard.
Figure 4: Km-sized NEOs that meet the MOID \(<\) 0.01 au condition in the next 1000 years. Color code indicates the number of Monte-Carlo samples that show a MOID \(<\) 0.01 au at the given date. Black means an agreement between all Monte Carlo runs to show a low-MOID. The NEOs are sorted by the first date in which they meet the MOID \(<\) 0.01 au condition.
### Upcoming hazardous km-sized NEOs
In the previous section we inspected the necessary condition for very close encounters to occur: a MOID \(<0.01\) au. The next step is to estimate the collision probability by making assumptions on their mean anomalies. The method to compute this timing probability once the MOID is low was described in section 2.3. To study the potential impact hazard we set a smaller close approach threshold (1 LD) and take into account the deterministic parts of the NEO position during the orbit propagation. In addition, we study the list of close approaches generated in the Monte Carlo experiment to validate our predictions.
The analytical expressions for the probability of collision assume uniformly distributed mean anomalies of the bodies. The initial conditions of the propagation start from a well defined mean anomaly of the NEOs. Thus, we need to track the evolution of the uncertainty in mean anomaly to know when we can start using the analytical estimates. Using our Monte Carlo experiments we compute the standard deviation in mean anomaly separation from the nominal trajectory.
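Since the mean anomaly is an angle, the separation from the nominal run should be wrapped before taking the standard deviation; a minimal sketch, assuming the Monte Carlo mean anomalies are available as an array with one row per sample:

```python
import numpy as np

def mean_anomaly_std(ma_samples_deg, ma_nominal_deg):
    """Standard deviation (deg) of the mean-anomaly separation from the nominal run.

    ma_samples_deg : shape (n_samples, n_epochs), Monte Carlo mean anomalies.
    ma_nominal_deg : shape (n_epochs,), nominal-trajectory mean anomaly.
    """
    delta = np.asarray(ma_samples_deg) - np.asarray(ma_nominal_deg)
    delta = (delta + 180.0) % 360.0 - 180.0   # wrap separations into (-180, 180] deg
    return np.std(delta, axis=0)
```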
\begin{table}
\begin{tabular}{r r r r r r r} \hline NEO & \(\Delta t/T\) & \(T_{S>10^{\circ}}\) & a (au) & e & i (deg) & \(V_{\infty}\) (km s\({}^{-1}\)) \\ \hline
7482 (1994 PC1) & 1.000 & 2541 & 1.349 & 0.330 & 33.47 & 19.68 \\
68950 (2002 QF15) & 1.000 & 3288 & 1.057 & 0.344 & 25.15 & 16.06 \\
144332 (2004 DV24) & 1.000 & 3285 & 1.423 & 0.290 & 55.90 & 29.83 \\
164121 (2003 YT1) & 1.000 & 2341 & 1.110 & 0.292 & 44.06 & 23.71 \\
143651 (2003 QO104) & 0.945 & 2297 & 2.136 & 0.524 & 11.61 & 9.72 \\
4179 Toutatis (1989 AC) & 0.927 & 2516 & 2.545 & 0.625 & 0.45 & 12.19 \\
314082 Dryope (2005 CZ36) & 0.750 & 2352 & 2.238 & 0.575 & 16.14 & 14.05 \\
86819 (2000 GK137) & 0.744 & 2565 & 1.996 & 0.506 & 10.06 & 10.07 \\
385343 (2002 LV) & 0.740 & 2960 & 2.315 & 0.605 & 29.53 & 20.14 \\
177614 (2004 HK33) & 0.702 & 3507 & 1.888 & 0.521 & 5.44 & 11.37 \\ \hline \end{tabular}
\end{table}
Table 1: 10 km-sized NEOs with the largest fraction of time with low MOID over the next 1000 years. Time fraction indicates the average fraction of time with MOID \(<0.01\) au among the Monte Carlo experiments. The first date of standard deviation in Mean Anomaly \(>10^{\circ}\) is shown, with initial orbit elements at the ephemeris retrieval date, 2022-09-15. \(V_{\infty}\) is the relative velocity at the first time that MOID \(<\) 1 LD of the nominal solution.
Figure 5: Propagation of the Earth MOID of km-sized NEOs with a low-MOID for a large fraction of the next 1000 years, as shown in Table 1. Individual Monte-Carlo runs are shown in colors, black continuous line shows the nominal trajectory. The dashed red line indicates the first date in which the standard deviation in mean anomaly was found greater than 10 degrees, which does not happen for 68950 (2002 QF15).
We propagate the orbits of the km-sized NEO population for 1000 years and study when the MOID is smaller than a Lunar Distance (1 LD). When the standard deviation in mean anomaly is large, we estimate the probability of close encounters. In Figure 6 we list the km-sized NEOs showing the dates in which we found a low-MOID and sorted by their estimated probability of close encounters. The low-MOID regions are color coded with the standard deviation in mean anomaly. The combination of this information highlights the future periods of time in which the position of the NEOs is unknown.
Among the 40 km-sized NEOs currently with an Earth MOID \(<\) 0.01 au, we find that their mean anomalies remain well defined typically for at least 200 years, and in some cases for thousands of years. On the other hand, there are a few examples of growth in mean anomaly uncertainty after 2200, such as 35396 (1997 XF11). Because the MOID becomes greater than 1 LD by the time the uncertainty in mean anomaly is large, the estimated probability for this NEO is zero.
The objects with the highest estimated probability are shown in Figure 6 and listed in Table 2. The asteroid with the largest estimated probability of a deep close encounter is 7482 (1994 PC1). This result is to be expected, as in section 3.1 and Figure 5 we show that 7482 (1994 PC1) has a continuously low MOID. In this analysis we find that 7482 spends about 98% of this millennium with an Earth MOID \(<\)1 LD. During this unusually long-lasting MOID \(<\)1 LD period the position is well determined until approximately year 2500.
The propagation of the MOID for the other km-sized NEOs at the top of the list is shown in Figure 7. We find that either the Earth MOID of these bodies is secularly drifting to zero, or that the current low MOID oscillates around zero for a longer period of time. The latter case was observed for 7482 (1994 PC1), and additionally 4179 Toutatis (1989 AC) and 314082 Dryope (2005 CZ36) are in similar situations. Figure 7 shows that deep encounters are expected for these bodies, both through their extended low-MOID periods and in the close encounters recorded in the Monte Carlo experiment.
The fact that the position is well determined allows us to determine the geometry of the subsequent close encounters, until the uncertainty grows too large. This assumption leaves a brief period of time between a very well constrained position and the range of validity of the uniformly distributed mean anomaly assumption. For this reason we check if there were actually such very close encounters among the low-MOID NEOs that we found earlier. In general, no close encounters within the Lunar Distance were found in the deterministic part of the trajectories.
Figure 6: Km-sized NEOs with non-zero estimated probability of encounters closer than 1 LD. Color code indicates the standard deviation in mean anomaly, S(MA), in log10. S(MA) is shown only in dates in which the MOID is lower than 0.01 au.
There are a few exceptions that should be mentioned: 4179 Toutatis (1989 AC), 220839 (2004 VA) (Both in Figure 7), 20236 (1998 BZ7), 214869 (2007 PA8) and 175114 (2004 QQ) experience close encounters right before or right after the date in which \(S(\mathrm{MA})>10^{\circ}\). In all of these cases, the MOID tends to zero around the dates in which a deep encounter is expected. In some cases, uncertainty in the position grows largely due to preceding close encounters. In general, we find that the Monte Carlo experiment agrees in finding deep encounters. Thus, the current method is successful in identifying their potential for very deep encounters within the next millennium.
Figure 7: Propagation of the Earth MOID of km-sized NEOs with a non-zero probability of having an encounter closer than 1 LD by year 3000. Individual Monte-Carlo runs are shown in colors, black continuous line shows the nominal trajectory. The dashed red line indicates the first date in which the standard deviation in mean anomaly was found greater than 10 degrees. Close encounters are indicated with circles in colors and close encounters of the nominal trajectory are shown as vertical black nodes. Encounters of closest approach distance \(<1LD\) (0.0026 au) are highlighted with a larger red circle.
## 4 Individual Hazard Analyses
In this section we describe in more detail the hazardous nature of a few km-sized NEOs that were previously analyzed. We show the evolution of the MOID as well as the recorded track of close encounters. We study their orbital dynamics to provide context of the MOID evolution. In addition, we show the sequence of close encounters that precedes the growth in uncertainty and limits the accuracy of the prediction of the position of the NEO.
### Asteroid 7482 (1994 PC1)
7482 (1994 PC1) has been highlighted in every section of this work because of its remarkable MOID evolution. Its Earth MOID is currently \(6.09\cdot 10^{-4}\) au (0.237 LD), has been near zero for centuries and will remain very low for at least another 1000 years as shown in section 3.1. This condition is the reason why it is ranked as the most hazardous NEO in the list of Table 2.
The orbit elements of 7482 (1994 PC1) are shown for reference in Appendix 7. During the period in which the Earth MOID remains small there are close encounters that cause significant variations in semi-major axis and eccentricity. However, the longitude of the node and the argument of perihelion follow a secular drift. In its current orbit within the inner solar system, there is not a large amplitude of short-period oscillations that could disperse the distributions further. However, it is important to highlight that after 500 years the mean anomalies become uncertain.
\begin{table}
\begin{tabular}{r l l l l l l l} \hline NEO & P(yr\({}^{-1}\)) & \(\Delta t/T\) & \(t_{S>10^{\circ}}\) & a (au) & e & i (deg) & \(V_{\infty}\) (km s\({}^{-1}\)) \\ \hline
7482 (1994 PC1) & 1.51e-04 & 0.978 & 2541 & 1.349 & 0.330 & 33.47 & 19.68 \\
4179 Toutatis (1989 AC) & 5.19e-05 & 0.336 & 2516 & 2.545 & 0.625 & 0.45 & 12.19 \\
314082 Dryope (2005 CZ36) & 4.88e-05 & 0.312 & 2371 & 2.238 & 0.575 & 16.14 & 14.05 \\
86819 (2000 GK137) & 4.44e-05 & 0.229 & 2563 & 1.996 & 0.506 & 10.06 & 10.07 \\
143651 (2003 QO104) & 3.84e-05 & 0.306 & 2308 & 2.136 & 0.524 & 11.61 & 9.72 \\
5011 Ptah (6743 P-L) & 3.68e-05 & 0.152 & 2626 & 1.636 & 0.500 & 7.41 & 12.50 \\
220839 (2004 VA) & 3.05e-05 & 0.172 & 2856 & 1.902 & 0.595 & 3.69 & 14.88 \\
66391 Moshup (1999 KW4) & 1.59e-05 & 0.045 & 2987 & 0.642 & 0.688 & 38.88 & 21.08 \\
143404 (2003 BD44) & 1.45e-05 & 0.153 & 2587 & 1.968 & 0.606 & 2.66 & 15.93 \\
190135 (2005 QE30) & 1.43e-05 & 0.074 & 2899 & 2.019 & 0.688 & 6.22 & 19.14 \\
276732 (2004 EV9) & 1.07e-05 & 0.038 & 2809 & 1.471 & 0.781 & 40.83 & 32.09 \\
20236 (1998 BZ7) & 1.04e-05 & 0.087 & 2817 & 2.036 & 0.559 & 6.50 & 12.52 \\
4183 Cuno (1959 LM) & 7.37e-06 & 0.123 & 2913 & 1.982 & 0.636 & 6.67 & 17.01 \\
387793 (2003 WL25) & 4.46e-06 & 0.030 & 2880 & 2.395 & 0.742 & 23.76 & 25.34 \\
29075 (1950 DA) & 4.22e-06 & 0.104 & 2913 & 1.698 & 0.508 & 12.17 & 14.09 \\
214869 (2007 PA8) & 3.43e-06 & 0.091 & 2762 & 2.848 & 0.653 & 2.00 & 12.46 \\
175114 (2004 QQ) & 2.30e-06 & 0.045 & 2648 & 2.249 & 0.664 & 5.72 & 19.74 \\ (2016 CB194) & 2.27e-06 & 0.039 & 2897 & 2.512 & 0.632 & 9.88 & 12.81 \\
7092 Cadmus (1992 LC) & 1.99e-06 & 0.046 & 2680 & 2.542 & 0.695 & 17.77 & 19.74 \\
90075 (2002 VU94) & 1.95e-06 & 0.082 & 2606 & 2.134 & 0.576 & 8.91 & 12.81 \\ (2019 HC) & 1.78e-06 & 0.015 & 2883 & 2.670 & 0.551 & 35.32 & 19.48 \\
322966 (2002 KF4) & 1.64e-06 & 0.022 & 2960 & 2.903 & 0.577 & 37.02 & 19.43 \\
5143 Heracles (1991 VL) & 1.38e-06 & 0.032 & 2998 & 1.834 & 0.772 & 9.03 & 25.78 \\
529718 (2010 KY127) & 1.32e-06 & 0.011 & 2908 & 2.489 & 0.883 & 60.84 & 39.67 \\
508997 (2005 FL4) & 1.07e-06 & 0.012 & 2823 & 2.651 & 0.721 & 28.43 & 24.40 \\ (1999 XS35) & 5.39e-07 & 0.148 & 2409 & 17.780 & 0.948 & 19.62 & 18.28 \\
248590 (2006 CS) & 5.34e-07 & 0.005 & 2617 & 2.914 & 0.697 & 52.31 & 31.61 \\
1620 Geographos (1951 RA) & 4.47e-07 & 0.002 & 2861 & 1.246 & 0.335 & 13.34 & 11.88 \\ \hline \end{tabular}
\end{table}
Table 2: 28 km-sized NEOs with non-zero estimated probability of a deep encounter (\(d_{CA}<\)1 LD) in the next 1000 years, as averaged over Monte Carlo runs. Time fraction indicates the average fraction of time with MOID \(<\) 1 LD among the Monte Carlo experiments. The first date of standard deviation in Mean Anomaly \(>\) 10\({}^{\circ}\) is shown with initial orbit elements at the ephemeris retrieval date, 2022-09-15. \(V_{\infty}\) is the relative velocity at the first time that MOID \(<\) 1 LD of the nominal solution.
Figure 8 shows the sequence of close encounters that are recorded in the Monte Carlo numerical propagation. It appears that the uncertainty in the encounter of 2525 is large enough that the range of possible closest approach distances is between 0 and 0.04 au. Right after the 2525 encounter the standard deviation in mean anomaly increases beyond 10 degrees, and we start estimating its probability of collision using the methods of section 2.3. Encounters below the Lunar Distance were found after this period, which is consistent with the higher probability that we previously estimated.
### Asteroid 143651 (2003 QO104)
The km-sized NEO with the shortest deterministic horizon is 143651 (2003 QO104), which was also previously introduced in Figure 6. The orbit solution of 143651 (2003 QO104) has an observation arc of decades, including light-curve observations (Birtwistle, 2009) and radar astrometry (Warner et al., 2009). Thus, we believe that the rapid increase in uncertainty is a dynamical effect of its orbit.
Among the list of NEOs in Table 2 with non-zero estimated probability of having encounters below the Lunar Distance, 143651 (2003 QO104) has the slowest close encounters. These relative velocities imply larger scatter during close encounters, including a rapid increase in mean anomaly uncertainty. As shown in Figure 9, there is a close encounter in 2220 after which the sequence of encounters becomes unique for each Monte Carlo run.
Figure 14 shows the evolution of the orbital elements. By the end of the millennium, 143651 (2003 QO104) can be found in a wide variety of orbits, the product of an undetermined sequence of close and slow Earth encounters.
### Asteroid 66391 Moshup (1999 KW4)
The binary asteroid 66391 Moshup (1999 KW4) has been the object of multiple studies owing to its binary nature. It consists of a primary and a secondary with diameters of 1.317 km and 0.59 km, respectively (Ostro et al., 2006; Scheirich et al., 2021). Its rotation states suggest that it is a product of YORP spin-up and disruption (Scheeres et al., 2006; Davis and Scheeres, 2020), and its mutual orbit is expanding in time due to the BYORP effect (Scheirich et al., 2021).
Figure 8: Earth close encounters of 7482 (1994 PC1), bounded by the osculating Earth MOID and as obtained through numerical Monte Carlo analysis. Close encounters are highlighted with vertical lines and points, and the Earth MOID is shown in continuous lines. Individual Monte-Carlo runs are shown in colors, black continuous line shows the nominal trajectory. Encounters of closest approach distance \(<1LD\) (0.0026 au) are highlighted with a larger red circle.
Figure 9: Earth close encounters of 143651 (2003 QO104), bounded by the osculating Earth MOID and as obtained through numerical Monte Carlo analysis. Close encounters are highlighted with vertical lines and points, and the Earth MOID is shown in continuous lines. Individual Monte-Carlo runs are shown in colors, black continuous line shows the nominal trajectory.
The heliocentric orbit of 66391 Moshup (1999 KW4) is in resonance with the Earth, as it experiences resonant close encounters every 17 or 18 years. The apparition of 2019 allowed observations from multiple observatories during the 0.0346 au encounter (Scheirich et al., 2021). The next close encounter will be in May 2036, with a closest distance of 0.0155 au, much closer than the first radar observations obtained using the Goldstone and Arecibo radar systems in May of 2001 (Ostro et al., 2006). When the MOID becomes small, which is expected to happen slightly before year 3000, many close encounters below the Lunar Distance are recorded in our Monte Carlo analysis. These will cause a large scattering of the orbit, as shown in Figure 15. Because the MOID \(<\)1 LD condition is only met relatively late in the millennium, 66391 Moshup (1999 KW4) is not ranked higher in Table 2.
### Asteroid 29075 (1950 DA)
Asteroid 29075 (1950 DA) is a representative example for impact probability studies. After it was discovered and tracked for 17 days in 1950 (Wirtanen, 1950), it was lost for 50 years until re-discovered on 2000-12-31. Giorgini et al. (2002) found a close approach in 2880 with the possibility of an impact. Farnocchia and Chesley (2014) modeled the Yarkovsky effect on 29075 (1950 DA) and estimated an impact probability of \(2.5\cdot 10^{-4}\).
The example of 29075 (1950 DA) is paradigmatic of the MOID evolution of NEOs. As shown in Figure 2, its Earth MOID is secularly drifting to zero. During the decades that this condition is maintained, the probability of experiencing a deep encounter is non-zero. 29075 (1950 DA) was found among the list of km-sized NEOs for which we estimated this probability. As we show in Figure 11, the encounters of 2860 and 2880 will occur, although with an uncertain closest approach distance.
Figure 11: Earth close encounters of 29075 (1950 DA), bounded by the osculating Earth MOID and as obtained through numerical Monte Carlo analysis. Close encounters are highlighted with vertical lines and points, and the Earth MOID is shown in continuous lines. Individual Monte-Carlo runs are shown in colors, black continuous line shows the nominal trajectory.
Figure 10: Earth close encounters of 66391 Moshup (1999 KW4), bounded by the osculating Earth MOID and as obtained through numerical Monte Carlo analysis. Close encounters are highlighted with vertical lines and points, and the Earth MOID is shown in continuous lines. Individual Monte-Carlo runs are shown in colors, black continuous line shows the nominal trajectory. Encounters of closest approach distance \(<1LD\) (0.0026 au) are highlighted with a larger red circle.
### Asteroid 2022 AP7
The km-sized 2022 AP7 is one of the largest PHAs recently discovered (Sheppard et al., 2022). The orbit of 2022 AP7 is in near-resonance with the orbit of the Earth, meaning that even though its MOID will become small in the next hundreds of years, almost no close encounters are expected in this period of time. The only likely exception is a close encounter in 2363, which will probably be at a closest approach distance larger than 0.05 au. An interesting finding is that 2022 AP7 comes from a sequence of resonant encounters every 5 years during the 19th century.
## 5 Conclusions
We characterized the long-term collision hazard of the known km-sized NEOs through the evolution of the MOID. The MOID can be accurately propagated beyond the dates at which the position within the orbit becomes unknown for certain NEOs. We first showed the km-sized NEOs with an Earth MOID \(<0.01\) au over the next centuries. This classification already allowed us to rule out impacts for the majority of the population in the next 1000 years. When the position within the orbit is unknown and the MOID is small, we used an analytical estimation of the impact probability. We used this method to rank the km-sized population by the estimated probability of an Earth encounter with \(d_{CA}<1\) LD. We found that there are a few km-sized near-Earth asteroids whose Earth MOID remains \(<0.01\) au for thousands of years, such as 7482 (1994 PC1), 314082 Dryope (2005 CZ36), or 143651 (2003 QO104).
In this work we push past the typical horizon for impact hazard assessment. Long-term impact hazard assessment can be limited by naturally chaotic dynamics. For example, the orbit of 143651 (2003 QO104) is scattered after a sequence of close encounters. However, in the cases in which the MOID can indeed be propagated confidently for thousands of years, we can point to the dates of interest for hazard characterization or rule out their risk.
As we propagate into larger orders of magnitude in time, it would be possible to further simplify our dynamical model and use analytical (Vokrouhlicky et al., 2012) or semi-analytical tools that account for the growth in uncertainty due to close encounters (Fuentes-Munoz et al., 2022). The timescales of this study are long enough that the position is stochastic, but short enough that precise modeling of the long-term effects is required. The range of orbits of the km-sized population spans widely different dynamical regimes. For these reasons, the use of numerical integration is left as the most reliable option.
The metric derived in section 2.3 uses an analytical expression that assumes that the mean anomalies are uniformly distributed. This assumption holds when the uncertainty in mean anomaly is large, yet the transition between the deterministic part of the trajectory and this regime must be carefully analyzed. In some cases, these dates contribute the most to the probability of collision of the low-MOID period, as seen in the case of 29075 (1950 DA). We manually checked all the top-ranked asteroids for the presence of encounters in this period of time, and displayed some of these examples in section 4. The measure of the uncertainty in mean anomaly proves to be useful not only to validate the uniform distribution assumption, but also to highlight dates of interest for hazard characterization. With this purpose in mind we find no need to increase the number of Monte Carlo samples to increase the accuracy of our predictions. The present work provides a list of asteroids and dates for which impact monitoring tools can be used to more accurately determine impact probabilities far beyond the default dates reported by impact monitoring systems.
Natural extensions of this work would be to broaden the selected group of asteroids from the km-sized population to PHAs or the whole NEO population. The MOID evolution as characterized in this work suggests a significant flux
Figure 12: Earth close encounters of 2022 AP7, bounded by the osculating Earth MOID and as obtained through numerical Monte Carlo analysis. Close encounters are highlighted with vertical lines and points, and the Earth MOID is shown in continuous lines. Individual Monte-Carlo runs are shown in colors, black continuous line shows the nominal trajectory.
in and out of Earth's vicinity, implying a significant flux in and out of the PHA category in timescales of decades to centuries. The long-term hazard ranking could be made available to the planetary defense community, as the most hazardous NEOs should be objects of interest for more detailed observations and future exploration missions.
## 6 Acknowledgements
This research is supported by grant 80NSSC22K0240 of the YORPD program of the National Aeronautics and Space Administration. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
|
2310.00846 | Generalized spectral characterization of signed trees | Let $T$ be a tree with an irreducible characteristic polynomial $\phi(x)$
over $\mathbb{Q}$. Let $\Delta(T)$ be the discriminant of $\phi(x)$. It is
proved that if $2^{-\frac n2}\sqrt{\Delta(T)}$ (which is always an integer) is
odd and square free, then every signed tree with underlying graph $T$ is
determined by its generalized spectrum. | Yizhe Ji, Wei Wang, Hao Zhang | 2023-10-02T01:43:35Z | http://arxiv.org/abs/2310.00846v1 | # Generalized spectral characterization of signed trees
###### Abstract
Let \(T\) be a tree with an irreducible characteristic polynomial \(\phi(x)\) over \(\mathbb{Q}\). Let \(\Delta(T)\) be the discriminant of \(\phi(x)\). It is proved that if \(2^{-\frac{n}{2}}\sqrt{\Delta(T)}\) (which is always an integer) is odd and square free, then every signed tree with underlying graph \(T\) is determined by its generalized spectrum.
**Keywords:** Graph spectra; Cospectral graphs; Determined by spectrum; Rational orthogonal matrix; Signed graph
**Mathematics Subject Classification:** 05C50
## 1 Introduction
It is well known that the spectra of graphs encode a lot of combinatorial information about the given graphs. A major unsolved question in spectral graph theory is: "What kinds of graphs are determined (up to isomorphism) by their spectrum (DS for short)?". The problem originates from chemistry and was raised in 1956 by Günthard and Primas [1]; it relates Hückel's theory in chemistry to graph spectra. The above problem is also closely related to a famous problem of Kac [12]: "Can one hear the shape of a drum?" Fisher [11] modelled the drum by a graph, and the frequency of the sound was characterized by the eigenvalues of the graph. Hence, the two problems are essentially the same.
It was commonly believed that every graph is DS until the first counterexample (a pair of cospectral but non-isomorphic trees) was found by Collatz and Sinogowitz [2] in 1957. Another famous result on cospectral graphs was given by Schwenk [16], which states that almost every tree is not DS. For more constructions of cospectral graphs, see, e.g., [10, 13, 17]. However, it turns out that showing a given graph to be DS is generally very hard and challenging. Up to now, only a few graphs with very special structures are known to be DS. We refer the reader to [7, 8] for more background and known results.
In recent years, Wang and Xu [19] and Wang [20, 21] considered a variant of the above problem. For a simple graph \(G\), they defined the _generalized spectrum_ of \(G\) as the spectrum of \(G\) together with that of its complement \(\bar{G}\). A graph \(G\) is said to be _determined by its generalized spectrum_ (DGS for short), if any graph having the same generalized spectrum as \(G\) is necessarily isomorphic to \(G\).
Let \(G\) be a graph on \(n\) vertices with adjacency matrix \(A=A(G)\). The _walk-matrix_ of \(G\) is defined as
\[W(G)=[e,Ae,\ldots,A^{n-1}e],\]
where \(e\) is the all-one vector. Wang [20, 21] proved the following theorem.
**Theorem 1.1** ([20, 21]).: _If \(2^{-\lfloor\frac{n}{2}\rfloor}\det(W)\) is odd and square-free, then \(G\) is DGS._
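As a hedged illustration (not part of the original works), the condition of Theorem 1.1 could be checked for a given graph along the following lines, using networkx and sympy so that the determinant is computed in exact integer arithmetic; the path graph in the last line is only a placeholder input.

```python
import networkx as nx
import sympy as sp

def satisfies_theorem_1_1(G):
    """Return True when 2^(-floor(n/2)) * det(W(G)) is odd and square-free."""
    n = G.number_of_nodes()
    A = sp.Matrix(nx.to_numpy_array(G, dtype=int).tolist())
    e = sp.ones(n, 1)
    W = sp.Matrix.hstack(*[A**k * e for k in range(n)])   # walk matrix [e, Ae, ..., A^{n-1}e]
    d = abs(W.det()) // 2**(n // 2)                        # 2^{floor(n/2)} always divides det(W)
    if d == 0 or d % 2 == 0:
        return False
    return all(exp == 1 for exp in sp.factorint(d).values())

print(satisfies_theorem_1_1(nx.path_graph(6)))   # placeholder graph, just to show the call
```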
The problem of spectral determination of ordinary graphs naturally extends to signed graphs. This paper is a continuation along this line of research for signed graphs in the flavour of Theorem 1.1.
Let \(\Delta(T)\) be the discriminant of a tree \(T\) (see Section 4 for the definition). The main result of the paper is the following theorem.
**Theorem 1.2**.: _Let \(T\) be a tree on \(n\) vertices with an irreducible characteristic polynomial over \(\mathbb{Q}\). If \(2^{-\frac{n}{2}}\sqrt{\Delta(T)}\) (which is always an integer) is odd and square free, then every signed tree with underlying graph \(T\) is DGS._
As an immediately consequence of Theorem 1.2, we have
**Corollary 1.3**.: _Let \(T\) and \(T^{\prime}\) be two cospectral and non-isomorphic trees with a common irreducible characteristic polynomial. Suppose \(2^{-\frac{n}{2}}\sqrt{\Delta(T)}\) is odd and square free. Then no two signed trees with underlying graphs \(T\) and \(T^{\prime}\) respectively are generalized cospectral._
**Example 1**.: Let \(T\) and \(T^{\prime}\) be two cospectral non-isomorphic trees on 14 vertices (see Fig. 1) with a common irreducible characteristic polynomial
\[\phi(T)=\phi(T^{\prime})=-1+16x^{2}-79x^{4}+157x^{6}-143x^{8}+63x^{10}-13x^{1 2}+x^{14}.\]
It can be easily computed that \(2^{-7}\sqrt{\Delta(T)}=2^{-7}\sqrt{\Delta(T^{\prime})}=5\times 11\times 4754599\), which is odd and square-free. Thus, according to Theorem 1.2, every signed tree with underlying graph \(T\) (resp. \(T^{\prime}\)) is DGS. In particular, no two signed trees with underlying graphs \(T\) and \(T^{\prime}\) respectively are generalized cospectral.
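As a hedged illustration, the condition of Theorem 1.2 can be checked along similar lines: compute the characteristic polynomial, test irreducibility, and factor \(2^{-n/2}\sqrt{\Delta(T)}\). The small tree at the end is a hypothetical stand-in, since the adjacency lists of the 14-vertex trees of Fig. 1 are not reproduced here.

```python
import networkx as nx
import sympy as sp

def satisfies_theorem_1_2(T):
    """phi(T) irreducible over Q and 2^(-n/2) * sqrt(Delta(T)) odd and square-free."""
    n = T.number_of_nodes()
    A = sp.Matrix(nx.to_numpy_array(T, dtype=int).tolist())
    x = sp.Symbol('x')
    phi = sp.Poly(A.charpoly(x).as_expr(), x)
    if not phi.is_irreducible:
        return False
    root = sp.sqrt(sp.discriminant(phi.as_expr(), x))
    if not root.is_Integer:          # for the trees covered by Theorem 1.2 this is an integer
        return False
    d = root // 2**(n // 2)
    return d % 2 == 1 and all(exp == 1 for exp in sp.factorint(d).values())

# Hypothetical stand-in tree (a small caterpillar), not one of the trees of Fig. 1.
T = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)])
print(satisfies_theorem_1_2(T))
```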
Theorem 1.2 shows that whenever the underlying tree \(T\) with \(n\) vertices satisfies a simple arithmetic condition, all the \(2^{n-1}\) signed trees (including \(T\) itself) whose underlying graph is \(T\) are DGS. That is, the DGS property of all these signed trees depends only on the underlying graph \(T\). This is somewhat unexpected, since given a pair of trees \(T\) and \(T^{\prime}\), it seems time-consuming even to check whether there exist two signed trees with underlying graphs \(T\) and \(T^{\prime}\) respectively that are generalized cospectral; see Example 1.
We mention that Theorem 1.2 is the best possible in the sense that it is no longer true if \(2^{-\frac{n}{2}}\sqrt{\Delta(T)}\) has a multiple odd prime factor. Moreover, the irreducibility assumption of the characteristic polynomial of the tree is essential which cannot be removed; see Remarks 1 and 2 in Section 4.
The rest of the paper is organized as follows. In Section 2, we give some preliminary results that will be needed in the proof of Theorem 1.2. In Section 3, we give a structure theorem, which plays a key role in the paper. In Section 4, we present the proof of Theorem 1.2. Conclusions and future work are given in Section 5.
## 2 Preliminaries
For the convenience of the reader, we give some preliminary results that will be needed later in the paper. For more results in spectral graph theory, we refer to [4, 6].
Let \(G=(V,E)\) be a simple graph. A _signed graph_ is a graph obtained from \(G\) by assigning a sign 1 or \(-1\) to every edge according to a mapping \(\sigma:E\rightarrow\{1,-1\}\). We use \(\Gamma=(G,\sigma)\) to denote a signed graph with _underlying graph_\(G\) and sign function (signature) \(\sigma\). We call a signed graph a _signed bipartite graph_, if its underlying graph is bipartite.
Let \(U\) be a subset of \(V\) such that \((U,V\setminus U)\) is a partition of \(V\). A _switching_ w.r.t. \(U\) (or \(V\setminus U\)) is an operation that changes all the signs of edges between \(U\) and \(V\setminus U\), while keeping the others unchanged. Two signed graphs \(\Gamma\) and \(\Gamma^{\prime}\) are _switching-equivalent_ if \(\Gamma^{\prime}\) can be obtained from \(\Gamma\) by a switching operation, or equivalently, there exists a diagonal matrix \(D\) with all diagonal entries \(\pm 1\) such that \(DA(\Gamma)D=A(\Gamma^{\prime})\). A signed graph is _balanced_ if every cycle contains an even number of edges with sign -1. It is well-known that a signed graph is
Figure 1: A pair of cospectral non-isomorphic trees on 14 vertices
balanced if and only if it is switching-equivalent to an unsigned graph.
Let \(\Gamma\) be a signed graph with adjacency matrix \(A(\Gamma)\). The _characteristic polynomial_ of \(\Gamma\) is defined as the characteristic polynomial of \(A(\Gamma)\), i.e., \(\phi(\Gamma;x)=\det(xI-A(\Gamma))\), where \(I\) is the identity matrix. Two signed graphs \(\Gamma\) and \(\Gamma^{\prime}\) with adjacency matrices \(A(\Gamma)\) and \(A(\Gamma^{\prime})\) respectively are called _generalized cospectral_ if
\[\det(xI-A(\Gamma))=\det(xI-A(\Gamma^{\prime}))\text{ and }\det(xI-(J-I-A( \Gamma)))=\det(xI-(J-I-A(\Gamma^{\prime}))),\]
where \(J\) is the all-one matrix and \(J-I-A(\Gamma)\) formally denotes the complement of \(\Gamma\) (it is indeed the complement of \(\Gamma\) if every edge of \(\Gamma\) has been assigned a positive sign \(+1\)). A signed graph \(\Gamma\) is said to be _determined by the generalized spectrum_ (DGS for short), if any signed graph that is generalized cospectral with \(\Gamma\) is isomorphic to \(\Gamma\).
A polynomial \(f(x)\in\mathbb{Q}[x]\) is _irreducible_ if it cannot be factored into two polynomials with rational coefficients of lower degree. Let \(f(x)\in\mathbb{Q}[x]\) be an irreducible polynomial with degree \(n\) and \(\alpha\) be one of its roots. Then \(\mathbb{Q}(\alpha)=\{c_{0}+c_{1}\alpha+\cdots+c_{n-1}\alpha^{n-1}:c_{i}\in \mathbb{Q},\ 0\leq i\leq n-1\}\) is a _number field_ which is isomorphic to \(\mathbb{Q}[x]/(f(x))\) and is obtained by adjoining \(\alpha\) to \(\mathbb{Q}\); see e.g. [9].
An orthogonal matrix \(Q\) is a square matrix such that \(Q^{\mathrm{T}}Q=I_{n}\). It is called _rational_ if every entry of \(Q\) is a rational number, and _regular_ if each row sum of \(Q\) is \(1\), i.e., \(Qe=e\), where \(e\) is the all-one column vector. Denote by \(\mathrm{RO}_{n}(\mathbb{Q})\) the set of all \(n\) by \(n\) regular orthogonal matrices with rational entries.
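As a small sanity check of the definition in exact rational arithmetic (an illustration only, not part of the original papers), the \(3\times 3\) matrix below is a standard example of a regular rational orthogonal matrix that is not a permutation matrix.

```python
import sympy as sp

def is_regular_rational_orthogonal(Q):
    """Membership test for RO_n(Q): rational entries, Q^T Q = I, and row sums equal to 1."""
    Q = sp.Matrix(Q)
    n = Q.rows
    rational = all(sp.nsimplify(entry).is_rational for entry in Q)
    orthogonal = Q.T * Q == sp.eye(n)
    regular = Q * sp.ones(n, 1) == sp.ones(n, 1)
    return bool(rational and orthogonal and regular)

Q = sp.Rational(1, 3) * sp.Matrix([[-1, 2, 2], [2, -1, 2], [2, 2, -1]])
print(is_regular_rational_orthogonal(Q))   # True
```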
In 2006, Wang and Xu [19] initiated the study of the generalized spectral characterization of graphs. For two generalized cospectral graphs \(G\) and \(H\), they obtained the following result (see also [5]), which plays a fundamental role in their method.
**Theorem 2.1** ([5],[19]).: _Let \(G\) be a graph. Then there exists a graph \(H\) such that \(G\) and \(H\) are generalized cospectral if and only if there exists a regular orthogonal matrix \(Q\) such that_
\[Q^{\mathrm{T}}A(G)Q=A(H). \tag{1}\]
_Moreover, if \(\det W(G)\neq 0\), then \(Q\in\mathrm{RO}_{n}(\mathbb{Q})\) is unique and \(Q=W(G)W^{-1}(H)\)._
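As a hedged sketch (not the authors' code), the matrix \(Q\) of Theorem 2.1 can be recovered from the walk matrices in exact rational arithmetic, assuming the two input graphs are generalized cospectral and \(\det W(H)\neq 0\).

```python
import networkx as nx
import sympy as sp

def walk_matrix(G):
    """Walk matrix W(G) = [e, Ae, ..., A^{n-1}e] with exact integer entries."""
    n = G.number_of_nodes()
    A = sp.Matrix(nx.to_numpy_array(G, dtype=int).tolist())
    e = sp.ones(n, 1)
    return sp.Matrix.hstack(*[A**k * e for k in range(n)])

def regular_orthogonal_Q(G, H):
    """Q = W(G) W(H)^{-1}; by Theorem 2.1 it satisfies Q^T A(G) Q = A(H) and Qe = e
    when G and H are generalized cospectral and det W(H) != 0."""
    return walk_matrix(G) * walk_matrix(H).inv()

# Usage sketch: if H is a relabelled copy of a controllable graph G, the returned Q
# is exactly the permutation matrix realizing the isomorphism.
```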
A graph \(G\) with \(\det W(G)\neq 0\) is called _controllable_ (see [14]), and we denote by \(\mathcal{G}_{n}\) the set of all controllable graphs on \(n\) vertices. For a graph \(G\in\mathcal{G}_{n}\), define
\[\mathcal{Q}(G):=\{Q\in\mathrm{RO}_{n}(\mathbb{Q}):\ Q^{\mathrm{T}}A(G)Q=A(H) \text{ for some graph }H\}.\]
Then according to Theorem 2.1, it is easy to obtain the following
**Theorem 2.2** ([19]).: _Let \(G\) be a controllable graph. Then \(G\) is DGS if and only if the set \(\mathcal{Q}(G)\) contains only permutation matrices._
The above theorems extend naturally to signed graphs. By Theorem 2.2, finding out the possible structure of all \(Q\in\mathcal{Q}(G)\) is the key to determining whether a (signed) graph \(G\) is DGS.
**Notations**: We use \(e_{n}\) (or \(e\) if there is no confusion arises) to denote an \(n\)-dimensional column all-one vector, and \(J\) the all-one matrix. For a vector \(\alpha=(a_{1},a_{2},\ldots,a_{n})^{\mathrm{T}}\in\mathbb{R}^{n}\), we use \(||\alpha||_{2}=(a_{1}^{2}+a_{2}^{2}+\cdots+a_{n}^{2})^{1/2}\) to denote the Euclidean norm of \(\alpha\).
## 3 A structure theorem for \(Q\)
The key observation of this paper is the following theorem, which shows that for two generalized cospectral signed bipartite graphs with a common irreducible characteristic polynomial, the regular rational orthogonal matrix carrying out the similarity between their adjacency matrices has a special structure.
**Theorem 3.1**.: _Let \(\Gamma\) and \(\tilde{\Gamma}\) be two generalized cospectral signed bipartite graphs with a common irreducible characteristic polynomial \(\phi(x)\) over \(\mathbb{Q}\). Suppose that the adjacency matrices of \(\Gamma\) and \(\tilde{\Gamma}\) are given as follows, respectively:_
\[A=A(\Gamma)=\begin{bmatrix}O&M\\ M^{\mathrm{T}}&O\end{bmatrix},\ \tilde{A}=A(\tilde{\Gamma})=\begin{bmatrix}O& \tilde{M}\\ \tilde{M}^{\mathrm{T}}&O\end{bmatrix}.\]
_Then there exists a regular orthogonal matrix \(Q\) such that \(Q^{\mathrm{T}}AQ=\tilde{A}\), where_
\[Q=\begin{bmatrix}Q_{1}&O\\ O&Q_{2}\end{bmatrix}\ \mathrm{or}\ Q=\begin{bmatrix}O&Q_{1}\\ Q_{2}&O\end{bmatrix}\]
_with \(Q_{1}\) and \(Q_{2}\) being regular rational orthogonal matrices, respectively._
**Corollary 3.2**.: _The matrix \(Q\) in Theorem 3.1 is the unique rational orthogonal matrix such that \(Q^{\mathrm{T}}AQ=\tilde{A}\)._
Proof.: The irreducibility assumption of the characteristic polynomial of \(A\) implies that \(\Gamma\) is controllable. Then the corollary follows immediately from Theorem 2.1.
To give the proof of Theorem 3.1, we need several lemmas below.
**Lemma 3.3**.: _Let \(\Gamma\) and \(\tilde{\Gamma}\) be two generalized cospectral signed graphs with adjacency matrices \(A\) and \(\tilde{A}\), respectively. Then \(e^{\mathrm{T}}(\lambda I-A)^{-1}e=e^{\mathrm{T}}(\lambda I-\tilde{A})^{-1}e\)._
Proof.: It can be easily computed that
\[\det(\lambda I-(A+tJ))\] \[= \det(\lambda I-A)\det(I-t(\lambda I-A)^{-1}ee^{\mathrm{T}})\] \[= \det(\lambda I-A)(1-te^{\mathrm{T}}(\lambda I-A)^{-1}e).\]
Similarly, \(\det(\lambda I-(\tilde{A}+tJ))=\det(\lambda I-\tilde{A})(1-te^{\mathrm{T}}(\lambda I -\tilde{A})^{-1}e)\). Thus, the lemma follows.
**Lemma 3.4** ([15]).: \((\lambda I-A)^{-1}=\sum_{i=1}^{n}\frac{\xi_{i}\xi_{i}^{\mathrm{T}}}{\lambda-\lambda_{i}}\)_, where \(\xi_{i}\)'s are normalized eigenvectors of \(A\) associated with \(\lambda_{i}\), for \(1\leq i\leq n\)._
**Lemma 3.5** ([18]).: _Let \(A=(a_{ij})\) be a symmetric integral matrix with an irreducible characteristic polynomial \(\phi(x)\). Let \(\lambda_{1},\ldots,\lambda_{n}\) be the distinct eigenvalues of \(A\). Then there exist polynomials \(\phi_{i}(x)\in\mathbb{Q}[x]\) with \(\deg\phi_{i}<n\) such that the eigenvectors \(\xi_{i}\) of \(A\) associated with \(\lambda_{i}\) can be expressed as_
\[\xi_{i}=(\phi_{1}(\lambda_{i}),\phi_{2}(\lambda_{i}),\ldots,\phi_{n}(\lambda_ {i}))^{\mathrm{T}}\]
_for \(1\leq i\leq n\)._
Proof.: Let \(\lambda_{1}\) be an eigenvalue of \(A\) with corresponding eigenvector \(\xi_{1}\). Consider the linear system of equations \((\lambda_{1}I-A)\xi_{1}=0\). By Gaussian elimination, there exist \(x_{i}\in\mathbb{Q}(\lambda_{1})\) such that \(\xi_{1}=(x_{1},x_{2},\ldots,x_{n})^{\mathrm{T}}\). Note \(\mathbb{Q}(\lambda_{1})\) is a number field. There exist polynomials \(\phi_{i}(x)\in\mathbb{Q}[x]\) with \(\deg\phi_{i}<n\) such that \(x_{i}=\phi_{i}(\lambda_{1})\).
By the \(k\)-th equation of \((\lambda_{1}I-A)\xi_{1}=0\), we have \(\psi_{k}(\lambda_{1}):=\sum_{j=1}^{n}a_{k,j}\phi_{j}(\lambda_{1})-\lambda_{1}\phi_{k}(\lambda_{1})=0\), for \(1\leq k\leq n\). Note \(\psi_{k}(x)\in\mathbb{Q}[x]\) and \(\psi_{k}(\lambda_{1})=0\). By the irreducibility of \(\phi(x)\), we have \(\phi(x)\) divides \(\psi_{k}(x)\). Thus, \(\psi_{k}(\lambda_{i})=0\) for \(1\leq i\leq n\) and \(1\leq k\leq n\), and \(\xi_{i}=(\phi_{1}(\lambda_{i}),\phi_{2}(\lambda_{i}),\ldots,\phi_{n}(\lambda_{i}))^{\mathrm{T}}\) is an eigenvector associated with \(\lambda_{i}\).
Next, we collect some simple facts about the relationships of eigenvalues/eigenvectors between the adjacency matrix \(A\) of a signed bipartite graph \(\Gamma\) and its bipartite-adjacency matrix \(M\).
**Lemma 3.6**.: _Let \(\Gamma\) be a signed bipartite graph with an irreducible characteristic polynomial over \(\mathbb{Q}\). Let the adjacency matrix of \(\Gamma\) be \(A=A(\Gamma)=\begin{bmatrix}O&M\\ M^{\mathrm{T}}&O\end{bmatrix}\). Suppose that \(\begin{bmatrix}u\\ v\end{bmatrix}\) is an eigenvector of \(A\) associated with an eigenvalue \(\lambda\). Then_
1. \(\lambda^{2}\) _is an eigenvalue of_ \(MM^{\mathrm{T}}\) _and_ \(M^{\mathrm{T}}M\) _with corresponding eigenvectors_ \(u\) _and_ \(v\)_, respectively;_
2. \(u\) _and_ \(v\) _have the same length, i.e.,_ \(||u||_{2}=||v||_{2}\)_;_
3. \(\begin{bmatrix}u\\ -v\end{bmatrix}\) _(resp._ \(\begin{bmatrix}-u\\ v\end{bmatrix}\)_) is an eigenvector of_ \(A\) _associated with eigenvalue_ \(-\lambda\)_;_
4. _The characteristic polynomial of_ \(MM^{\mathrm{T}}\) _(resp._ \(M^{\mathrm{T}}M\)_) is irreducible over_ \(\mathbb{Q}\)_._
Proof.: Since the characteristic polynomial \(\phi(x)\) of \(A\) is irreducible, zero can never be an eigenvalue of \(A\) (otherwise \(x\) would divide \(\phi(x)\)), and hence \(M\) must be a square matrix of order \(m:=n/2\).
Let \(\lambda\neq 0\) be any eigenvalue of \(A\) with corresponding eigenvector \(\begin{bmatrix}u\\ v\end{bmatrix}\). Then
\[A\begin{bmatrix}u\\ v\end{bmatrix}=\begin{bmatrix}Mv\\ M^{\mathrm{T}}u\end{bmatrix}=\lambda\begin{bmatrix}u\\ v\end{bmatrix}\Longleftrightarrow\begin{cases}Mv=\lambda u,\\ M^{\mathrm{T}}u=\lambda v.\end{cases} \tag{2}\]
Thus, we have \(u\neq 0\) and \(v\neq 0\), for otherwise we would have \(u=v=0\), since \(\lambda\neq 0\). It follows that
\[MM^{\mathrm{T}}u=\lambda^{2}u,\ M^{\mathrm{T}}Mv=\lambda^{2}v.\]
It follows from \(Mv=\lambda u\) that \(u^{\mathrm{T}}Mv=\lambda u^{\mathrm{T}}u\). By \(M^{\mathrm{T}}u=\lambda v\) we get \(v^{\mathrm{T}}M^{\mathrm{T}}u=\lambda v^{\mathrm{T}}v\). Note \(u^{\mathrm{T}}Mv=(u^{\mathrm{T}}Mv)^{\mathrm{T}}=v^{\mathrm{T}}M^{\mathrm{T}}u\). It follows that \(\lambda u^{\mathrm{T}}u=\lambda v^{\mathrm{T}}v\), and hence \(u^{\mathrm{T}}u=v^{\mathrm{T}}v\) since \(\lambda\neq 0\).
Note that
\[A\begin{bmatrix}u\\ -v\end{bmatrix}=\begin{bmatrix}-Mv\\ M^{\mathrm{T}}u\end{bmatrix}=-\lambda\begin{bmatrix}u\\ -v\end{bmatrix}.\]
Hence, \(\begin{bmatrix}u\\ -v\end{bmatrix}\) is an eigenvector of \(A\) associated with eigenvalue \(-\lambda\). Since the characteristic polynomial of \(A\) is irreducible, the set of all the eigenvalues of \(A\) can be written as \(\{\lambda_{1},\lambda_{2},\ldots,\lambda_{m},-\lambda_{1},-\lambda_{2},\ldots,-\lambda_{m}\}\).
Hence, the set of all the eigenvalues of \(MM^{\mathrm{T}}\) (or \(M^{\mathrm{T}}M\)) can be written as \(\{\lambda_{1}^{2},\lambda_{2}^{2},\ldots,\lambda_{m}^{2}\}\). Since \(\phi(A;x)=(x^{2}-\lambda_{1}^{2})\cdots(x^{2}-\lambda_{m}^{2})\) is irreducible over \(\mathbb{Q}\), \(\phi(MM^{\mathrm{T}};x)=\phi(M^{\mathrm{T}}M;x)=(x-\lambda_{1}^{2})\cdots(x-\lambda_{m}^{2})\) is also irreducible over \(\mathbb{Q}\).
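The facts collected in Lemma 3.6 are easy to verify numerically. The following NumPy sketch (purely illustrative, with our naming) checks, for a given signed bipartite-adjacency matrix \(M\), that the spectrum of \(A\) is symmetric about zero and that the two halves of each eigenvector have equal norm; it assumes the eigenvalues of \(A\) are simple and nonzero, as under the irreducibility hypothesis.

```python
import numpy as np

def check_lemma_3_6(M, tol=1e-9):
    """Numerical sanity check of Lemma 3.6 for A = [[O, M], [M^T, O]].
    Assumes the eigenvalues of A are simple and nonzero."""
    m = M.shape[0]
    A = np.block([[np.zeros((m, m)), M], [M.T, np.zeros((m, m))]])
    vals, vecs = np.linalg.eigh(A)
    # (1), (3): the spectrum is symmetric about zero
    assert np.allclose(np.sort(vals), np.sort(-vals), atol=tol)
    # (2): for every eigenvector [u; v], the halves u and v have equal length
    for k in range(2 * m):
        u, v = vecs[:m, k], vecs[m:, k]
        assert abs(np.linalg.norm(u) - np.linalg.norm(v)) < tol
    return True
```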
Now, we present the proof of Theorem 3.1.
Proof of Theorem 3.1.: Set \(m:=n/2\). By Lemma 3.6, let \(\lambda_{1},\lambda_{2},\ldots,\lambda_{m},-\lambda_{1},-\lambda_{2},\ldots,- \lambda_{m}\) be the eigenvalues of \(A\) and \(\tilde{A}\) with corresponding normalized eigenvectors
\[\frac{1}{\sqrt{2}}\begin{bmatrix}u_{1}\\ v_{1}\end{bmatrix},\ldots,\frac{1}{\sqrt{2}}\begin{bmatrix}u_{m}\\ v_{m}\end{bmatrix},\frac{1}{\sqrt{2}}\begin{bmatrix}u_{1}\\ -v_{1}\end{bmatrix},\ldots,\frac{1}{\sqrt{2}}\begin{bmatrix}u_{m}\\ -v_{m}\end{bmatrix}, \tag{3}\] \[\frac{1}{\sqrt{2}}\begin{bmatrix}\tilde{u}_{1}\\ \tilde{v}_{1}\end{bmatrix},\ldots,\frac{1}{\sqrt{2}}\begin{bmatrix}\tilde{u}_{ m}\\ \tilde{v}_{m}\end{bmatrix},\frac{1}{\sqrt{2}}\begin{bmatrix}\tilde{u}_{1}\\ -\tilde{v}_{1}\end{bmatrix},\ldots,\frac{1}{\sqrt{2}}\begin{bmatrix}\tilde{u}_{ m}\\ -\tilde{v}_{m}\end{bmatrix}, \tag{4}\]
respectively, where \(u_{i},\tilde{u}_{i},v_{i},\tilde{v}_{i}\in\mathbb{R}^{m}\) are \(m\)-dimensional unit vectors.
By Lemma 3.3, we have \(e^{\mathrm{T}}(xI-A)^{-1}e=e^{\mathrm{T}}(xI-\tilde{A})^{-1}e\). It follows from Lemma 3.4 that
\[\sum_{i=1}^{m}\frac{(\frac{1}{\sqrt{2}}e_{2m}^{\mathrm{T}} \begin{bmatrix}u_{i}\\ v_{i}\end{bmatrix})^{2}}{x-\lambda_{i}}+\sum_{i=1}^{m}\frac{(\frac{1}{\sqrt{2} }e_{2m}^{\mathrm{T}}\begin{bmatrix}u_{i}\\ -v_{i}\end{bmatrix})^{2}}{x+\lambda_{i}}=\sum_{i=1}^{m}\frac{(\frac{1}{\sqrt{2} }e_{2m}^{\mathrm{T}}\begin{bmatrix}\tilde{u}_{i}\\ \tilde{v}_{i}\end{bmatrix})^{2}}{x-\lambda_{i}}+\sum_{i=1}^{m}\frac{(\frac{1}{ \sqrt{2}}e_{2m}^{\mathrm{T}}\begin{bmatrix}\tilde{u}_{i}\\ -\tilde{v}_{i}\end{bmatrix})^{2}}{x+\lambda_{i}}. \tag{5}\]
Hence, we have that for each \(1\leq i\leq m\),
\[\begin{cases}(e_{m}^{\mathrm{T}}u_{i}+e_{m}^{\mathrm{T}}v_{i})^{2}=&(e_{m}^{ \mathrm{T}}\tilde{u}_{i}+e_{m}^{\mathrm{T}}\tilde{v}_{i})^{2},\\ (e_{m}^{\mathrm{T}}u_{i}-e_{m}^{\mathrm{T}}v_{i})^{2}=&(e_{m}^{\mathrm{T}} \tilde{u}_{i}-e_{m}^{\mathrm{T}}\tilde{v}_{i})^{2}.\end{cases} \tag{6}\]
For a fixed \(i\), we distinguish the following two cases:
**Case 1.**\(e_{m}^{\mathrm{T}}u_{i}+e_{m}^{\mathrm{T}}v_{i}\) and \(e_{m}^{\mathrm{T}}\tilde{u}_{i}+e_{m}^{\mathrm{T}}\tilde{v}_{i}\) have the same sign (resp. opposite sign), and \(e_{m}^{\mathrm{T}}u_{i}-e_{m}^{\mathrm{T}}v_{i}\) and \(e_{m}^{\mathrm{T}}\tilde{u}_{i}-e_{m}^{\mathrm{T}}\tilde{v}_{i}\) have the same sign (resp. opposite sign). It follows from (6) that
\[\begin{cases}e_{m}^{\mathrm{T}}u_{i}+e_{m}^{\mathrm{T}}v_{i}=&e_{m}^{\mathrm{ T}}\tilde{u}_{i}+e_{m}^{\mathrm{T}}\tilde{v}_{i},\\ e_{m}^{\mathrm{T}}u_{i}-e_{m}^{\mathrm{T}}v_{i}=&e_{m}^{\mathrm{T}}\tilde{u}_{i }-e_{m}^{\mathrm{T}}\tilde{v}_{i},\end{cases}\text{ or }\begin{cases}e_{m}^{ \mathrm{T}}u_{i}+e_{m}^{\mathrm{T}}v_{i}=&-(e_{m}^{\mathrm{T}}\tilde{u}_{i}+e _{m}^{\mathrm{T}}\tilde{v}_{i}),\\ e_{m}^{\mathrm{T}}u_{i}-e_{m}^{\mathrm{T}}v_{i}=&-(e_{m}^{\mathrm{T}}\tilde{u} _{i}-e_{m}^{\mathrm{T}}\tilde{v}_{i}),\end{cases}\]
which implies that either i) \(e_{m}^{\mathrm{T}}u_{i}=e_{m}^{\mathrm{T}}\tilde{u}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=e_{m}^{\mathrm{T}}\tilde{v}_{i}\); or ii) \(e_{m}^{\mathrm{T}}u_{i}=-e_{m}^{\mathrm{T}}\tilde{u}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=-e_{m}^{\mathrm{T}}\tilde{v}_{i}\).
**Case 2.**\(e_{m}^{\mathrm{T}}u_{i}+e_{m}^{\mathrm{T}}v_{i}\) and \(e_{m}^{\mathrm{T}}\tilde{u}_{i}+e_{m}^{\mathrm{T}}\tilde{v}_{i}\) have the same sign (resp. opposite sign), and \(e_{m}^{\mathrm{T}}u_{i}-e_{m}^{\mathrm{T}}v_{i}\) and \(e_{m}^{\mathrm{T}}\tilde{u}_{i}-e_{m}^{\mathrm{T}}\tilde{v}_{i}\) have the opposite sign (resp. same sign). Then
\[\begin{cases}e_{m}^{\mathrm{T}}u_{i}+e_{m}^{\mathrm{T}}v_{i}=&e_{m}^{\mathrm{ T}}\tilde{u}_{i}+e_{m}^{\mathrm{T}}\tilde{v}_{i},\\ e_{m}^{\mathrm{T}}u_{i}-e_{m}^{\mathrm{T}}v_{i}=&-(e_{m}^{\mathrm{T}}\tilde{u} _{i}-e_{m}^{\mathrm{T}}\tilde{v}_{i}),\end{cases}\text{ or }\begin{cases}e_{m}^{ \mathrm{T}}u_{i}+e_{m}^{\mathrm{T}}v_{i}=&-(e_{m}^{\mathrm{T}}\tilde{u}_{i}+e _{m}^{\mathrm{T}}\tilde{v}_{i}),\\ e_{m}^{\mathrm{T}}u_{i}-e_{m}^{\mathrm{T}}v_{i}=&e_{m}^{\mathrm{T}}\tilde{u}_{ i}-e_{m}^{\mathrm{T}}\tilde{v}_{i},\end{cases}\]
which implies that either i) \(e_{m}^{\mathrm{T}}u_{i}=e_{m}^{\mathrm{T}}\tilde{v}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=e_{m}^{\mathrm{T}}\tilde{u}_{i}\); or ii) \(e_{m}^{\mathrm{T}}u_{i}=-e_{m}^{\mathrm{T}}\tilde{v}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=-e_{m}^{\mathrm{T}}\tilde{u}_{i}\).
Thus, for a fixed \(i\), we may assume that either \(e_{m}^{\mathrm{T}}u_{i}=\tau_{i}e_{m}^{\mathrm{T}}\tilde{u}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=\tau_{i}e_{m}^{\mathrm{T}}\tilde{v}_{i}\) or \(e_{m}^{\mathrm{T}}u_{i}=\sigma_{i}e_{m}^{\mathrm{T}}\tilde{v}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=\sigma_{i}e_{m}^{\mathrm{T}}\tilde{u}_{i}\), where \(\tau_{i},\sigma_{i}\in\{1,-1\}\). Next, we show that uniformly, either \(e_{m}^{\mathrm{T}}u_{i}=\tau_{i}e_{m}^{\mathrm{T}}\tilde{u}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=\tau_{i}e_{m}^{\mathrm{T}}\tilde{v}_{i}\) for all \(1\leq i\leq m\) or \(e_{m}^{\mathrm{T}}u_{i}=\sigma_{i}e_{m}^{\mathrm{T}}\tilde{v}_{i}\) and \(e_{m}^{\mathrm{T}}v_{i}=\sigma_{i}e_{m}^{\mathrm{T}}\tilde{u}_{i}\) for all \(1\leq i\leq m\). This is the key technical part of the proof, which highly depends on the irreducibility assumption of \(\phi\).
According to Lemma 3.5, the eigenvectors of \(MM^{\mathrm{T}}\) associated with eigenvalues \(\lambda_{i}^{2}\) can be expressed as \(\xi_{i}=(\phi_{1}(\lambda_{i}),\phi_{2}(\lambda_{i}),\ldots,\phi_{m}(\lambda_{ i}))^{\mathrm{T}}\), where \(\phi_{j}(x)\in\mathbb{Q}[x]\) with \(\deg\phi_{j}<n\).
By Lemma 3.6, \(u_{i}\) is an eigenvector of \(MM^{\mathrm{T}}\) associated with \(\lambda_{i}^{2}\). Note \(u_{i}\) is a unit vector. It follows that \(u_{i}\) and \(\xi_{i}/||\xi_{i}||_{2}\) differ by at most a sign, i.e., there exists a \(\epsilon_{i}\in\{1,-1\}\) such that \(u_{i}=\epsilon_{i}\frac{\xi_{i}}{||\xi_{i}||_{2}}\), and
\[v_{i} = \frac{1}{\lambda_{i}}M^{\mathrm{T}}u_{i}\] \[= \frac{\epsilon_{i}}{\lambda_{i}}M^{\mathrm{T}}(\phi_{1}(\lambda_{i} ),\phi_{2}(\lambda_{i}),\ldots,\phi_{m}(\lambda_{i}))^{\mathrm{T}}/||\xi_{i}||_{2}\] \[= \epsilon_{i}(\varphi_{1}(\lambda_{i}),\varphi_{2}(\lambda_{i}), \ldots,\varphi_{m}(\lambda_{i}))^{\mathrm{T}}/||\xi_{i}||_{2},\]
for some \(\varphi_{j}(x)\in\mathbb{Q}[x]\) with degree less than \(n\), for \(1\leq j\leq m\). The last equality follows since the entries of the vector \(\frac{1}{\lambda_{i}}M^{\mathrm{T}}(\phi_{1}(\lambda_{i}),\phi_{2}(\lambda_{i}), \ldots,\phi_{m}(\lambda_{i}))^{\mathrm{T}}\) belong to \(\mathbb{Q}(\lambda_{i})\), which is a number
field. Further note that \(||u_{i}||_{2}=||v_{i}||_{2}=1\), we have \(\varphi_{1}(\lambda_{i})^{2}+\varphi_{2}(\lambda_{i})^{2}+\cdots+\varphi_{m}( \lambda_{i})^{2}=||\xi_{i}||^{2}\), for \(1\leq i\leq m\).
The above discussions apply similarly to the signed bipartite graph \(\tilde{\Gamma}\) with adjacency matrix \(\tilde{A}\). Then we have \(\tilde{u}_{i}=\tilde{\epsilon}_{i}\frac{\tilde{\xi}_{i}}{||\tilde{\xi}_{i}||_{2}}\) for \(\tilde{\epsilon}_{i}\in\{1,-1\}\), where \(\tilde{\xi}_{i}=(\tilde{\phi}_{1}(\lambda_{i}),\tilde{\phi}_{2}(\lambda_{i}),\ldots,\tilde{\phi}_{m}(\lambda_{i}))^{\rm T}\), \(\tilde{\phi}_{j}(x)\in\mathbb{Q}[x]\) with \(\deg\tilde{\phi}_{j}<n\). Moreover, \(\tilde{v}_{i}=\tilde{\epsilon}_{i}(\tilde{\varphi}_{1}(\lambda_{i}),\tilde{\varphi}_{2}(\lambda_{i}),\ldots,\tilde{\varphi}_{m}(\lambda_{i}))^{\rm T}/||\tilde{\xi}_{i}||_{2}\) with \(\tilde{\varphi}_{j}(x)\in\mathbb{Q}[x]\) with degree less than \(n\), and \(\tilde{\varphi}_{1}(\lambda_{i})^{2}+\tilde{\varphi}_{2}(\lambda_{i})^{2}+\cdots+\tilde{\varphi}_{m}(\lambda_{i})^{2}=||\tilde{\xi}_{i}||^{2}\).
**Claim 1**.: _If \(e_{m}^{\rm T}u_{1}=\tau_{1}e_{m}^{\rm T}\tilde{u}_{1}\) and \(e_{m}^{\rm T}v_{1}=\tau_{1}e_{m}^{\rm T}\tilde{v}_{1}\) with \(\tau_{1}\in\{1,-1\}\), then \(e_{m}^{\rm T}u_{i}=\tau_{i}e_{m}^{\rm T}\tilde{u}_{i}\) and \(e_{m}^{\rm T}v_{i}=\tau_{i}e_{m}^{\rm T}\tilde{v}_{i}\) for some \(\tau_{i}\in\{1,-1\}\), for all \(2\leq i\leq m\)._
Proof.: Actually, it follows from \(e_{m}^{\rm T}u_{1}=\tau_{1}e_{m}^{\rm T}\tilde{u}_{1}\) that
\[\epsilon_{1}\frac{\sum_{j=1}^{m}\phi_{j}(\lambda_{1})}{\sqrt{\sum_{j=1}^{m} \phi_{j}^{2}(\lambda_{1})}}=\tau_{1}\tilde{\epsilon_{1}}\frac{\sum_{j=1}^{m} \tilde{\phi}_{j}(\lambda_{1})}{\sqrt{\sum_{j=1}^{m}\tilde{\phi}_{j}^{2}( \lambda_{1})}}. \tag{7}\]
Taking squares on both sides of (7), it follows that
\[\Phi(\lambda_{1}):=(\sum_{j=1}^{m}\phi_{j}(\lambda_{1}))^{2}\sum_{j=1}^{m} \tilde{\phi}_{j}^{2}(\lambda_{1})-(\sum_{j=1}^{m}\tilde{\phi}_{j}(\lambda_{1} ))^{2}\sum_{j=1}^{m}\phi_{j}^{2}(\lambda_{1})=0.\]
Note that \(\phi(x)\) is irreducible and \(\Phi(x)\in\mathbb{Q}[x]\). It follows that \(\phi(x)\mid\Phi(x)\). Hence \(\Phi(\lambda_{i})=0\) and \(e_{m}^{\rm T}u_{i}=\tau_{i}e_{m}^{\rm T}\tilde{u}_{i}\) for some \(\tau_{i}\in\{1,-1\}\), for \(2\leq i\leq m\). Similarly, we have \(e_{m}^{\rm T}v_{i}=\tilde{\tau}_{i}e_{m}^{\rm T}\tilde{v}_{i}\) for some \(\tilde{\tau}_{i}\in\{1,-1\}\), for \(2\leq i\leq m\). Next, we show that \(\tau_{i}\) and \(\tilde{\tau}_{i}\) coincide, i.e., \(\tau_{i}=\tilde{\tau}_{i}=\pm 1\), for all \(2\leq i\leq m\).
In fact, it follows from \(e_{m}^{\rm T}v_{1}=\tau_{1}e_{m}^{\rm T}\tilde{v}_{1}\) that
\[\epsilon_{1}\frac{\sum_{j=1}^{m}\varphi_{j}(\lambda_{1})}{\sqrt{\sum_{j=1}^{m} \phi_{j}^{2}(\lambda_{1})}}=\tau_{1}\tilde{\epsilon_{1}}\frac{\sum_{j=1}^{m} \tilde{\varphi}_{j}(\lambda_{1})}{\sqrt{\sum_{j=1}^{m}\tilde{\phi}_{j}^{2}( \lambda_{1})}}. \tag{8}\]
It is easy to see that all the numerators in Eqs. (7) and (8) are non-zero. For example, if \(\sum_{j=1}^{m}\phi_{j}(\lambda_{1})=0\), then \(\sum_{j=1}^{m}\phi_{j}(\lambda_{i})=0\) for \(1\leq i\leq m\) by the irreducibility of \(\phi\). That is, \(e_{m}^{\rm T}\xi_{i}=0\) for \(1\leq i\leq m\), which is impossible since \(\xi_{i}\) (\(1\leq i\leq m\)) are eigenvectors of \(MM^{\rm T}\) constituting a basis of \(\mathbb{R}^{m}\).
Dividing Eq. (7) by Eq. (8), it follows that
\[\frac{\sum_{j=1}^{m}\phi_{j}(\lambda_{1})}{\sum_{j=1}^{m}\varphi_{j}(\lambda_{1} )}=\frac{\sum_{j=1}^{m}\tilde{\phi}_{j}(\lambda_{1})}{\sum_{j=1}^{m}\tilde{ \varphi}_{j}(\lambda_{1})}, \tag{9}\]
or equivalently, \(\Psi(\lambda_{1}):=\sum_{j=1}^{m}\phi_{j}(\lambda_{1})\sum_{j=1}^{m}\tilde{ \varphi}_{j}(\lambda_{1})-\sum_{j=1}^{m}\varphi_{j}(\lambda_{1})\sum_{j=1}^{m} \tilde{\phi}_{j}(\lambda_{1})=0\). By the irreducibility of \(\phi(x)\), we obtain that \(\phi(x)\mid\Psi(x)\), and hence \(\Psi(\lambda_{i})=0\) for \(2\leq i\leq m\). So Eq. (9) still holds if we replace \(\lambda_{1}\) with any \(\lambda_{i}\), i.e.,
\[\frac{\sum_{j=1}^{m}\phi_{j}(\lambda_{i})}{\sum_{j=1}^{m}\varphi_{j}(\lambda_{i })}=\frac{\sum_{j=1}^{m}\tilde{\phi}_{j}(\lambda_{i})}{\sum_{j=1}^{m}\tilde{ \varphi}_{j}(\lambda_{i})},\ \text{for}\ 2\leq i\leq m. \tag{10}\]
By the previous discussions, we get that
\[\epsilon_{i}\frac{\sum_{j=1}^{m}\phi_{j}(\lambda_{i})}{\sqrt{\sum_{j=1}^{m}\phi_{j}^{2}(\lambda_{i})}}=\tau_{i}\tilde{\epsilon}_{i}\frac{\sum_{j=1}^{m}\tilde{\phi}_{j}(\lambda_{i})}{\sqrt{\sum_{j=1}^{m}\tilde{\phi}_{j}^{2}(\lambda_{i})}},\text{ for }2\leq i\leq m. \tag{11}\]
\[\epsilon_{i}\frac{\sum_{j=1}^{m}\varphi_{j}(\lambda_{i})}{\sqrt{\sum_{j=1}^{m} \phi_{j}^{2}(\lambda_{i})}}=\tilde{\tau}_{i}\tilde{\epsilon}_{i}\frac{\sum_{j= 1}^{m}\tilde{\varphi}_{j}(\lambda_{i})}{\sqrt{\sum_{j=1}^{m}\tilde{\phi}_{j}^{ 2}(\lambda_{i})}},\text{ for }2\leq i\leq m. \tag{12}\]
Dividing Eq. (11) by Eq. (12), we obtain \(\frac{\sum_{j=1}^{m}\phi_{j}(\lambda_{i})}{\sum_{j=1}^{m}\varphi_{j}(\lambda_{i})}=\frac{\tau_{i}}{\tilde{\tau}_{i}}\frac{\sum_{j=1}^{m}\tilde{\phi}_{j}(\lambda_{i})}{\sum_{j=1}^{m}\tilde{\varphi}_{j}(\lambda_{i})}\); together with Eq. (10), we get the conclusion that \(\tau_{i}=\tilde{\tau}_{i}=\pm 1\) for \(2\leq i\leq m\).
**Claim 2**.: _If \(e_{m}^{\rm T}u_{1}=\sigma_{1}e_{m}^{\rm T}\tilde{v}_{1}\) and \(e_{m}^{\rm T}v_{1}=\sigma_{1}e_{m}^{\rm T}\tilde{u}_{1}\) with \(\sigma_{1}\in\{1,-1\}\), then \(e_{m}^{\rm T}u_{i}=\sigma_{i}e_{m}^{\rm T}\tilde{v}_{i}\) and \(e_{m}^{\rm T}v_{i}=\sigma_{i}e_{m}^{\rm T}\tilde{u}_{i}\) for some \(\sigma_{i}\in\{1,-1\}\), for all \(2\leq i\leq m\)._
Proof.: This follows by using the same argument as Claim 1; we omit the details here.
Write
\[U=[u_{1},u_{2},\ldots,u_{m}],\ V=[v_{1},v_{2},\ldots,v_{m}],\]
\[\tilde{U}=[\tilde{u}_{1},\tilde{u}_{2},\ldots,\tilde{u}_{m}],\ \tilde{V}=[\tilde{v}_{1},\tilde{v}_{2},\ldots,\tilde{v}_{m}].\]
If the condition of Claim 1 holds, we may replace \(u_{i}\) and \(v_{i}\) with \(-u_{i}\) and \(-v_{i}\) respectively, whenever \(\tau_{i}=-1\) for \(1\leq i\leq m\). Then we have \(e_{m}^{\rm T}u_{i}=e_{m}^{\rm T}\tilde{u}_{i}\) and \(e_{m}^{\rm T}v_{i}=e_{m}^{\rm T}\tilde{v}_{i}\) for \(1\leq i\leq m\). Let \(R=\frac{1}{\sqrt{2}}\begin{bmatrix}U&U\\ V&-V\end{bmatrix}\) and \(\tilde{R}=\frac{1}{\sqrt{2}}\begin{bmatrix}\tilde{U}&\tilde{U}\\ \tilde{V}&-\tilde{V}\end{bmatrix}.\) Define
\[Q:=R\tilde{R}^{\rm T}=\begin{bmatrix}U\tilde{U}^{\rm T}&O\\ O&V\tilde{V}^{\rm T}\end{bmatrix}. \tag{13}\]
Then \(Q\) is an orthogonal matrix and
\[R^{\rm T}AR=\tilde{R}^{\rm T}\tilde{A}\tilde{R}=\text{diag}(\lambda_{1},\ldots,\lambda_{m},-\lambda_{1},\ldots,-\lambda_{m}).\]
Thus, \(Q^{\rm T}AQ=\tilde{A}\). Next, it remains to show that \(Q\) is regular, i.e., \(Qe_{2m}=e_{2m}\), which is equivalent to \(\tilde{U}^{\rm T}e_{m}=U^{\rm T}e_{m}\) and \(\tilde{V}^{\rm T}e_{m}=V^{\rm T}e_{m}\). That is, \(e_{m}^{\rm T}u_{i}=e_{m}^{\rm T}\tilde{u}_{i},\ e_{m}^{\rm T}v_{i}=e_{m}^{\rm T}\tilde{v}_{i},\ \text{for}\ 1\leq i\leq m\), which is precisely what we have obtained before, as desired.
If the condition of Claim 2 holds, similarly, we may replace \(u_{i}\) and \(v_{i}\) with \(-u_{i}\) and \(-v_{i}\) respectively, whenever \(\sigma_{i}=-1\). Then \(e_{m}^{\rm T}u_{i}=e_{m}^{\rm T}\tilde{v}_{i}\) and \(e_{m}^{\rm T}v_{i}=e_{m}^{\rm T}\tilde{u}_{i}\), for \(1\leq i\leq m\). Now let \(R=\frac{1}{\sqrt{2}}\begin{bmatrix}U&U\\ V&-V\end{bmatrix}\) and \(\tilde{R}=\frac{1}{\sqrt{2}}\begin{bmatrix}\tilde{U}&-\tilde{U}\\ \tilde{V}&\tilde{V}\end{bmatrix}.\) Define
\[Q:=R\tilde{R}^{\rm T}=\begin{bmatrix}O&U\tilde{V}^{\rm T}\\ V\tilde{U}^{\rm T}&O\end{bmatrix}. \tag{14}\]
Then \(Q\) is an orthogonal matrix and still \(Q^{\rm T}AQ=\tilde{A}\) holds. Moreover, it is easy to verify that \(Qe_{2m}=e_{2m}\). So \(Q\) is regular.
The proof is complete.
## 4 Proof of Theorem 1.2
In this section, we present the proof of Theorem 1.2.
Recall that for a monic polynomial \(f(x)\in\mathbb{Z}[x]\) with degree \(n\), the _discriminant_ of \(f(x)\) is defined as:
\[\Delta(f)=\prod_{1\leq i<j\leq n}(\alpha_{i}-\alpha_{j})^{2},\]
where \(\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\) are all the roots of \(f(x)\).
Then it is clear that \(\Delta(f)\) is always an integer for \(f(x)\in\mathbb{Z}[x]\), and \(\Delta(f)=0\) if and only if \(f\) has a multiple root. Define the _discriminant_ of a matrix \(A\), denoted by \(\Delta(A)\), as the discriminant of its characteristic polynomial, i.e., \(\Delta(A):=\Delta(\det(xI-A))\). The _discriminant_ of a graph \(G\), denoted by \(\Delta(G)\), is defined to be the discriminant of its adjacency matrix.
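For concreteness, the discriminant of a (signed) graph can be computed directly from its adjacency matrix with a computer algebra system; the following SymPy sketch is ours and purely illustrative.

```python
from sympy import Matrix, Symbol, discriminant

def matrix_discriminant(A):
    """Discriminant of det(xI - A) for an integer matrix A (given as a list of lists)."""
    x = Symbol('x')
    phi = Matrix(A).charpoly(x).as_expr()
    return discriminant(phi, x)

# Example: the path P_4 (a tree on 4 vertices)
P4 = [[0, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [0, 0, 1, 0]]
print(matrix_discriminant(P4))  # characteristic polynomial x^4 - 3x^2 + 1, discriminant 400
```

For \(P_{4}\), whose characteristic polynomial is \(x^{4}-3x^{2}+1\), the sketch returns \(400=2^{4}\cdot 5^{2}\), in agreement with Lemma 4.2 below.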
In [22], Wang and Yu give the following theorem, which is our main tool in proving Theorem 1.2.
**Theorem 4.1** ([22]).: _Let \(A\) be a symmetric integral matrix. Suppose there exists a rational orthogonal matrix \(Q\) such that \(Q^{\rm T}AQ\) is an integral matrix. If \(\Delta(A)\) is odd and square-free, then \(Q\) must be a signed permutation matrix._
However, Theorem 4.1 cannot be used directly, since \(\Delta(\Gamma)\) is always a perfect square for a signed bipartite graph \(\Gamma\) whose two parts of the bipartition have equal size, as shown by the following lemma.
**Lemma 4.2**.: _Let \(\Gamma\) be a signed bipartite graph with bipartite-adjacency matrix \(M\), where \(M\) is a square matrix of order \(m:=n/2\). Then \(\Delta(\Gamma)=2^{n}\det^{2}(M)\Delta^{2}(M^{\rm T}M)\)._
Proof.: Let the eigenvalues of \(\Gamma\) be \(\pm\lambda_{1},\pm\lambda_{2},\ldots,\pm\lambda_{m}.\) Then the eigenvalues of \(M^{\rm T}M\) are \(\lambda_{1}^{2},\lambda_{2}^{2},\ldots,\lambda_{m}^{2}.\) So we have
\[\Delta(\Gamma) = \prod_{1\leq i<j\leq m}(\lambda_{i}-\lambda_{j})^{2}\prod_{1\leq i,j\leq m}(\lambda_{i}+\lambda_{j})^{2}\prod_{1\leq i<j\leq m}(-\lambda_{i}+ \lambda_{j})^{2}\] \[= 2^{n}\lambda_{1}^{2}\lambda_{2}^{2}\cdots\lambda_{m}^{2}\prod_{1 \leq i<j\leq m}(\lambda_{i}^{2}-\lambda_{j}^{2})^{4}\] \[= 2^{n}\det(M^{\rm T}M)\Delta^{2}(M^{\rm T}M)\] \[= 2^{n}(\det(M))^{2}\Delta^{2}(M^{\rm T}M).\]
This completes the proof.
Let \(a_{0}\) be the constant term of the characteristic polynomial of \(\Gamma\) defined as above. Then
\[a_{0}=(-1)^{m}\det(M^{\mathrm{T}}M)=(-1)^{m}\det{}^{2}(M).\]
Note that for a tree with an irreducible characteristic polynomial \(\phi(x)\), the constant term of \(\phi(x)\) is always \(\pm 1\) (the constant term of the characteristic polynomial of a tree equals, up to sign, its number of perfect matchings, which is at most one, and irreducibility forces it to be nonzero). Thus we have
**Corollary 4.3**.: _Let \(T\) be a tree with an irreducible characteristic polynomial. Then \(\Delta(T)=2^{n}\Delta^{2}(M^{\mathrm{T}}M)\)._
Finally, we are ready to present the proof of Theorem 1.2.
Proof of Theorem 1.2.: Let \(\tilde{\Gamma}\) be any signed graph that is generalized cospectral with \(\Gamma=(T,\sigma)\). We shall show that \(\tilde{\Gamma}\) is isomorphic to \(\Gamma\). Note that \(\tilde{\Gamma}\) has the same number of edges as \(\Gamma\); moreover, the assumption that \(\phi(\tilde{\Gamma})=\phi(\Gamma)\) is irreducible forces \(\tilde{\Gamma}\) to be connected (otherwise \(\phi(\tilde{\Gamma})\) would factor into the characteristic polynomials of its components). Thus, \(\tilde{\Gamma}\) is a signed graph whose underlying graph is a tree (say \(\tilde{T}\)), and \(\tilde{\Gamma}=(\tilde{T},\tilde{\sigma})\).
Note that both \(\Gamma\) and \(\tilde{\Gamma}\) are balanced as signed graphs (their underlying graphs are trees and hence contain no cycles), so we have \(\phi(T)=\phi(\Gamma)\) and \(\phi(\tilde{T})=\phi(\tilde{\Gamma})\). Let \(A(\Gamma)=D_{1}A(T)D_{1}\) and \(A(\tilde{\Gamma})=D_{2}A(\tilde{T})D_{2}\), where \(D_{1}\) and \(D_{2}\) are diagonal matrices whose diagonal entries are \(\pm 1\).
By Theorem 2.1, the fact that \(\Gamma\) and \(\tilde{\Gamma}\) are generalized cospectral implies that there exists a regular rational orthogonal matrix \(Q\) such that
\[Q^{\mathrm{T}}A(\Gamma)Q=A(\tilde{\Gamma}), \tag{15}\]
i.e., \(Q^{\mathrm{T}}(D_{1}A(T)D_{1})Q=D_{2}A(\tilde{T})D_{2}\), which is equivalent to \(\hat{Q}^{\mathrm{T}}A(T)\hat{Q}=A(\tilde{T})\), where \(\hat{Q}=D_{1}QD_{2}\) is a rational orthogonal matrix.
Let
\[A(T)=\begin{bmatrix}O&M\\ M^{\mathrm{T}}&O\end{bmatrix},A(\tilde{T})=\begin{bmatrix}O&\tilde{M}\\ \tilde{M}^{\mathrm{T}}&O\end{bmatrix}.\]
By Theorem 3.1, assume without loss of generality that \(Q=\begin{bmatrix}Q_{1}&O\\ O&Q_{2}\end{bmatrix}\) and \(\hat{Q}=\begin{bmatrix}\hat{Q}_{1}&O\\ O&\hat{Q}_{2}\end{bmatrix}.\) Then we have \(\hat{Q}_{1}^{\mathrm{T}}M\hat{Q}_{2}=\tilde{M}\). It follows that
\[\hat{Q}_{1}^{\mathrm{T}}MM^{\mathrm{T}}\hat{Q}_{1}=\tilde{M}\tilde{M}^{ \mathrm{T}}\text{ and }\hat{Q}_{2}^{\mathrm{T}}M^{\mathrm{T}}M\hat{Q}_{2}=\tilde{M}^{ \mathrm{T}}\tilde{M}.\]
Note that \(\Delta(M^{\mathrm{T}}M)=\Delta(MM^{\mathrm{T}})=2^{-n/2}\sqrt{\Delta(T)}\), which is odd and square-free. Thus, according to Theorem 4.1, both \(\hat{Q}_{1}\) and \(\hat{Q}_{2}\) are signed permutation matrices. It follows that \(Q=D_{1}\hat{Q}D_{2}\) is a signed permutation matrix. Moreover, note that \(Q\) is regular. Therefore, \(Q\) is a permutation matrix, and by Eq. (15), we conclude that \(\tilde{\Gamma}\) is isomorphic to \(\Gamma\). The proof is complete.
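The arithmetic condition in Theorem 1.2 is straightforward to test in practice. The following Python/SymPy sketch (ours, purely illustrative) checks, for a tree \(T\) given by its bipartite-adjacency matrix \(M\), whether \(\phi(T)\) is irreducible and whether \(2^{-n/2}\sqrt{\Delta(T)}\) is odd and square-free; by Corollary 4.3 this last quantity equals \(|\Delta(M^{\mathrm{T}}M)|\), which is what the code computes.

```python
import numpy as np
from sympy import Matrix, Symbol, discriminant, factorint

def satisfies_theorem_1_2(M):
    """Check the hypotheses of Theorem 1.2 for the tree T with
    bipartite-adjacency matrix M, i.e. A(T) = [[O, M], [M^T, O]]."""
    x = Symbol('x')
    m = M.shape[0]
    A = np.block([[np.zeros((m, m), dtype=int), M],
                  [M.T, np.zeros((m, m), dtype=int)]])
    if not Matrix(A.tolist()).charpoly(x).is_irreducible:
        return False
    # By Corollary 4.3, 2^{-n/2} * sqrt(Delta(T)) = |Delta(M^T M)|.
    d = abs(int(discriminant(Matrix((M.T @ M).tolist()).charpoly(x).as_expr(), x)))
    square_free = all(e == 1 for e in factorint(d).values())
    return d % 2 == 1 and square_free
```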
**Remark 1.** The condition of Theorem 1.2 is tight in the sense that Theorem 1.2 is no longer true if \(2^{-\frac{n}{2}}\sqrt{\Delta(T)}\) has a multiple odd prime factor. Let the signed bipartite-adjacency matrices of two signed trees \(T\) and \(\tilde{T}\) be given as follows, respectively:
\[M=\left(\begin{array}{cccccccc}-1&0&0&0&0&0&0&0&0\\ -1&-1&0&0&0&0&0&0&0\\ 1&0&-1&0&0&0&0&0&0\\ 0&0&1&-1&0&0&0&0&0\\ 0&-1&0&0&-1&1&0&0&0\\ 0&0&0&0&-1&0&0&0&0\\ 0&0&0&0&-1&0&-1&1&0\\ 0&0&0&0&0&0&-1&0&0\\ 0&0&0&0&0&-1&0&0&-1\end{array}\right),\tilde{M}=\left(\begin{array}{cccccccc} 0&0&0&0&0&0&0&1&-1\\ 0&0&0&-1&-1&0&1&0&0\\ -1&-1&0&0&0&0&0&0\\ -1&-1&0&0&0&0&0&0\\ 0&-1&-1&0&0&0&0&0\\ -1&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&-1&0&0\\ -1&0&0&0&-1&0&0&1\end{array}\right).\]
Then
\[\phi(T)=\phi(\tilde{T})=-1+22x^{2}-162x^{4}+538x^{6}-897x^{8}+809x^{10}-410x^{ 12}+116x^{14}-17x^{16}+x^{18},\]
which is irreducible over \(\mathbb{Q}\). However, \(2^{-9}\sqrt{\Delta(T)}=7^{2}\times 347\times 357175051\), i.e., \(2^{-9}\sqrt{\Delta(T)}\) has a multiple factor \(7\) and the condition of Theorem 1.2 is not satisfied. Actually, there indeed exists a regular rational orthogonal matrix \(Q\in\mathcal{Q}(G)\) such that \(\tilde{A}=Q^{\mathrm{T}}AQ\), where \(Q=\mathrm{diag}(Q_{1},Q_{2})\) and \(Q_{1}\) and \(Q_{2}\) are given as follows respectively.
\[Q_{1}=\frac{1}{7}\left(\begin{array}{cccccccc}-1&-1&-2&-2&4&3&3&2&1\\ -2&-2&3&3&1&-1&-1&4&2\\ 2&2&4&-3&-1&1&1&3&-2\\ 4&-3&1&1&-2&2&2&-1&3\\ 3&3&-1&-1&2&-2&-2&1&4\\ 1&1&2&2&3&4&-3&-2&-1\\ 2&2&-3&4&-1&1&1&3&-2\\ 1&1&2&2&3&-3&4&-2&-1\end{array}\right),Q_{2}=\frac{1}{7}\left(\begin{array}{ ccccccccc}2&2&4&-3&-1&1&1&3&-2\\ 2&2&-3&4&-1&1&1&3&-2\\ -2&-2&3&3&1&-1&-1&4&2\\ 4&-3&1&1&-2&2&2&-1&3\\ 1&1&2&2&3&4&-3&-2&-1\\ -3&4&1&1&-2&2&-1&3\\ 3&3&-1&-1&2&-2&-2&-2&1&4\\ 2&2&-3&4&-1&1&1&3&-2\\ 1&1&2&2&3&-3&4&-2&-1\end{array}\right).\]
**Remark 2.** Theorem 3.1 does not hold without the assumption that the characteristic polynomial of \(\Gamma\) is irreducible over \(\mathbb{Q}\), even if \(\Gamma\) is controllable. Let \(\Gamma\) and \(\tilde{\Gamma}\) be two signed trees with bipartite-adjacency matrices \(M\) and \(\tilde{M}\) given as follows respectively:
It is easy to verify that
\[\phi(\Gamma;x)=(-1+x)(1+x)(-1-x+x^{2})(-1+x+x^{2})(1-21x^{2}+95x^{4}-119x^{6}+60x ^{8}-13x^{10}+x^{12}),\]
which is reducible over \(\mathbb{Q}\), while \(\Gamma\) is controllable. Nevertheless, the unique regular rational orthogonal matrix \(Q\) (shown as above) such that \(Q^{\rm T}A(\Gamma)Q=A(\tilde{\Gamma})\) is not of the form given in Theorem 3.1.
## 5 Conclusions and Future Work
In this paper, we have given a simple arithmetic condition on a tree \(T\) with an irreducible characteristic polynomial, under which every signed tree with underlying graph \(T\) is DGS. This is somewhat surprising in contrast with Schwenk's remarkable result stating that almost every tree has a cospectral mate.
However, several questions remain to be answered. We end the paper by proposing the following questions:
**Question 1**.: _How can Theorem 1.2 be generalized to signed bipartite graphs?_
**Question 2**.: _Is it true that every tree with an irreducible characteristic polynomial is DGS?_
**Question 3**.: _Is Theorem 3.1 true for controllable bipartite graphs?_
For Question 1, the difficulty lies in the fact that for a signed bipartite graph \(\Gamma\), a signed graph \(\tilde{\Gamma}\) generalized cospectral with \(\Gamma\) is not necessarily bipartite. For Question 2, we know that the analogous statement is not true for signed trees. For Question 3, we know that it is not true for controllable signed bipartite graphs. However, we do not know of a single counterexample to Questions 2 and 3 themselves. The above questions need further investigation in the future.
## Acknowledgments
The research of the second author is supported by National Natural Science Foundation of China (Grant Nos. 11971376 and 12371357) and the third author is supported by Fundamental Research Funds for the Central Universities (Grant No. 531118010622).
The authors would like to thank Professor Huiqiu Lin from East China University of Science and Technology for useful discussions.
|
2305.15743 | TransWorldNG: Traffic Simulation via Foundation Model | Traffic simulation is a crucial tool for transportation decision-making and
policy development. However, achieving realistic simulations in the face of the
high dimensionality and heterogeneity of traffic environments is a longstanding
challenge. In this paper, we present TransWorldNG, a traffic simulator that uses
Data-driven algorithms and Graph Computing techniques to learn traffic dynamics
from real data. The functionality and structure of TransWorldNG are introduced,
which utilize a foundation model for transportation management and control. The
results demonstrate that TransWorldNG can generate more realistic traffic
patterns compared to traditional simulators. Additionally, TransWorldNG
exhibits better scalability, as it shows linear growth in computation time as
the scenario scale increases. To the best of our knowledge, this is the first
traffic simulator that can automatically learn traffic patterns from real-world
data and efficiently generate accurate and realistic traffic environments. | Ding Wang, Xuhong Wang, Liang Chen, Shengyue Yao, Ming Jing, Honghai Li, Li Li, Shiqiang Bao, Fei-Yue Wang, Yilun Lin | 2023-05-25T05:49:30Z | http://arxiv.org/abs/2305.15743v1 | # TransWorldNG: Traffic Simulation via Foundation Model
###### Abstract
Traffic simulation is a crucial tool for transportation decision-making and policy development. However, achieving realistic simulations in the face of the high dimensionality and heterogeneity of traffic environments is a longstanding challenge. In this paper, we present TransWorldNG, a traffic simulator that uses Data-driven algorithms and Graph Computing techniques to learn traffic dynamics from real data. The functionality and structure of TransWorldNG are introduced, which utilize a foundation model for transportation management and control. The results demonstrate that TransWorldNG can generate more realistic traffic patterns compared to traditional simulators. Additionally, TransWorldNG exhibits better scalability, as it shows linear growth in computation time as the scenario scale increases. To the best of our knowledge, this is the first traffic simulator that can automatically learn traffic patterns from real-world data and efficiently generate accurate and realistic traffic environments.
## I Introduction
Modeling and simulating transportation systems realistically pose a challenge due to the high variability and diversity of traffic behaviors, as well as the spatial and temporal fluctuations that are difficult to model. Various traffic simulation models such as SUMO [1], MATSim [2], AimSun [3], VISSIM [4], and others have been developed to simulate traffic systems with diverse scales. Although these models are useful, they still encounter limitations in realistically simulating the growing complexity and heterogeneity of urban transportation systems due to the restricted capability of the underlying parametric models and manually encoded rules [5]. To address this gap, advanced traffic simulation techniques are necessary that can generate more realistic traffic behaviors from real-world data [6, 7]. This is critical for aiding traffic planners and policymakers in making well-informed decisions.
Traditional approaches often rely on physical dynamic models and implement data-driven approaches to learn parameters in the pre-defined models [8]. However, such approaches may introduce oversimplifications and assumptions that curtail their accuracy and applicability [9]. As a result, traditional models are suitable for specific tasks but not scalable or extensible, posing challenges in adapting to varying environments and managing large and complex data inputs. Furthermore, the intrinsic complexity of transportation systems, influenced by diverse agents and factors that affect traffic behavior, makes it a challenging task to realistically capture the temporal variability and complexity of traffic conditions. The dynamic and constantly evolving nature of transportation environments necessitates a flexible approach to simulating the traffic system that can quickly adapt to changes in the environment.
To solve these problems in traffic simulation, we have developed TransWorldNG (where NG denotes the new generation), which automatically generates simulation scenarios from multi-scale and high-dimensional data; the framework of TransWorldNG is shown in Fig. 1. The first generation of TransWorld was initially developed by CAST Lab and uses Agent-based modeling (ABM) technology and object-oriented programming [10, 11, 7]. Building on its framework, we have re-designed a data-driven traffic simulator that is empowered by a foundation model, utilizing data-driven algorithms and graph computing techniques to simulate intricate traffic systems [12].
One of the key features of TransWorldNG is the utilization of graph structures and dynamic graph generation algorithms to model the intricate relationships and interactions among agents in the traffic system. This approach enhances previous ABM-based traffic simulation techniques by providing a more comprehensive and adaptable representation of the changing environment. Additionally, the use of graph structures and dynamic graph generation algorithms can enhance the scalability and efficiency of TransWorldNG by enabling parallel processing of the simulation and supporting the handling of large-scale data.
To overcome the limitations of traditional modeling approaches that rely on physical dynamics models, TransWorldNG adopts a data-driven approach with behavior models that are directly learned from real-world data. This approach provides a more direct and dependable representation of the real scenario. Furthermore, the graph structure of TransWorldNG allows for adaptive scaling, which amplifies its flexibility. Users can easily modify the nodes or edges in the graph structure to input multi-source data, with varying
degrees of granularity.
This study presents the functionality and structure of TransWorldNG. The contributions of this paper are as follows:
* A unified graph-based structure has been proposed that permits a flexible representation of varying traffic environments, enabling TransWorldNG to adapt to environment changes in real time.
* A data-driven traffic simulation framework has been introduced which can realistically and efficiently learn traffic dynamics from real-world data.
* The underlying software design principles, comprising the system structure, workflows, and user interfaces, have been provided.
## II Related Works
### _Multi-agent Traffic Modeling and Simulation_
Agent-based modeling is a widely used technique for modeling and simulating transportation systems, which involves simulating the interactions of a large number of agents in a system with different characteristics, behaviors, and interactions with other agents [13, 14]. The theoretical framework has developed over several decades, including game theory, control theory, graph theory, and complex network theory [13, 15]. Transportation systems can be considered as multi-agent systems composed of different types of traffic participants, each with their own goals and behaviors, and their interactions affect the changes in the entire traffic system. Recent research on modeling and simulation of complex traffic systems is mostly based on multi-agent methods [14, 16].
The modeling of multi-agent systems involves using mathematical models to describe the behavior of individual agents or the entire system in order to better understand traffic evolution and complex traffic phenomena [17]. In the field of traffic systems, models are typically categorized into three types based on their modeling scales: macroscopic, mesoscopic, and microscopic models [9, 14, 2, 18].
### _Data-driven Traffic Modeling and Simulation_
The advancement of modeling complex transportation systems is expected to be driven by the availability of large-scale and multi-source data [19]. Data-driven techniques in transportation modeling utilize machine learning, deep learning, and other algorithms to analyze large-scale and multi-source data and learn rules directly from the data to models. This is in contrast to knowledge-driven approaches, which rely on human-defined rules and models to develop transportation models [20]. Urban big data can be used to assess the effects of different characteristics, such as road network topology and intersection shapes, on traffic flow in urban areas. Machine learning techniques, such as neural networks, support vector machines, and regression trees, can be trained using these data to anticipate traffic flow, speed, and congestion [21]. This can provide valuable insights into the behavior of urban transportation systems and inform effective transportation planning and management strategies. Previous data-driven approaches in transportation research are mostly employed for single-task research, such as forecasting vehicle trajectories, predicting traffic congestion, route optimization, and so on [22, 23, 24]. These approaches have limitations in their ability to handle the complex interactions between multiple types of agents in a heterogeneous environment for large-scale systems.
## III Framework, System Structure, and Workflows of TransWorldNG
### _The Framework of TransWorldNG_
Transportation system modeling traditionally involves defining the behavior of agents and their interactions beforehand, which is time-consuming and error-prone when new agents or scenarios need to be added. A graph-based approach
to transportation system modeling offers a more efficient and adaptable solution, as it allows for the representation of data and relationships between agents in a natural and straightforward way [25, 26]. By using a graph data structure, new data can be added to the system by introducing new nodes or edges to the graph, without the need to hard-code specific behaviors or rules. Fig. 2 illustrates the topology of the hierarchical graph of the dynamic traffic system at various scales, from the vehicle level presenting the dynamics of individual vehicles to the intersection level showing traffic signal control strategies and traffic conditions at bottlenecks, to the street, block, and city levels showing strategies and policies that have impacts on a larger scale. This multi-scale approach allows TransWorldNG to provide a comprehensive view of the traffic system and enable transportation planners to make informed decisions based on the simulation results.

Fig. 1: The framework of TransWorldNG. TransWorldNG is built upon data-driven approaches, with the ability to handle multi-scale and multi-source data. This flexibility enables TransWorldNG to be used for a wide range of traffic-related tasks, making it a powerful tool for accurate and realistic simulations of urban transportation systems.
#### Ii-B1 Representation of transportation system via heterogeneous dynamic graphs
TransWorldNG uses a unified graph data structure to represent traffic systems, which makes it flexible to changes in the environment, as it allows for easy updates and modifications. New data can be added to the graph by introducing new nodes or edges, without the need to hard-code specific behaviors or rules. This flexibility and adaptability make it easier to model and simulate large and complex transportation systems. Fig. 3 illustrates an example of a traffic scenario represented as a graph, showing how the relationships and interactions in a transportation system can be represented using a graph.
Mathematically, we can define the traffic system as a dynamic heterogeneous graph, \(G_{n}(V_{n},E_{n},O_{n},R_{n})\). The graph consists of vertices (\(V_{n}\)) that represent agents and edges (\(E_{n}\)) that define the relationships between those agents. Each agent (\(v_{i}\)) is associated with a node type by a unique mapping:
\[\phi:V_{n}\to O_{n},o_{i}\in O_{n} \tag{1}\]
Similarly, each edge (\(e_{i}\)) is directed and associated with an edge type:
\[\psi:E_{n}\to R_{n},r_{i}\in R_{n},e_{i}=(u_{i},v_{i}) \tag{2}\]
The attributes of agents can be represented as node features on the graph. For instance, a vehicle agent might have attributes such as position, speed, and acceleration, which can be saved as node features. Assuming nodes and edges have feature dimensions \(D^{v}_{o_{i}}\) and \(D^{e}_{r_{i}}\) respectively, features can be represented as:

\[F^{v}_{n}\in\mathcal{R}^{|V_{n}|\times D^{v}_{o_{i}}},F^{e}_{n}\in\mathcal{R}^{|E_{n}|\times D^{e}_{r_{i}}} \tag{3}\]
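For illustration, the following minimal Python sketch shows how one snapshot \(G_{n}(V_{n},E_{n},O_{n},R_{n})\) with typed nodes, typed edges, and feature vectors could be encoded; the entity and relation names (e.g. "vehicle", `drives_on`) are invented for the example and do not correspond to the actual TransWorldNG data structures.

```python
from dataclasses import dataclass, field

@dataclass
class TrafficGraphSnapshot:
    """One static graph G_n(V_n, E_n, O_n, R_n) of the dynamic traffic graph."""
    node_type: dict = field(default_factory=dict)   # phi: node id -> type, e.g. "vehicle", "lane", "signal"
    node_feat: dict = field(default_factory=dict)   # node id -> feature vector (position, speed, ...)
    edges: list = field(default_factory=list)       # (src, relation, dst); psi assigns the relation type
    edge_feat: dict = field(default_factory=dict)   # (src, relation, dst) -> feature vector

snap = TrafficGraphSnapshot()
snap.node_type["car_A"] = "vehicle"
snap.node_feat["car_A"] = [120.5, 8.3, 0.4]        # e.g. position, speed, acceleration
snap.node_type["lane_1"] = "lane"
snap.node_feat["lane_1"] = [200.0, 13.9]           # e.g. length, speed limit
snap.edges.append(("car_A", "drives_on", "lane_1"))
snap.edge_feat[("car_A", "drives_on", "lane_1")] = [35.2]   # e.g. offset along the lane
```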
#### Ii-B2 Dynamic Graph Learning model to simulate traffic behavior and relationships
TransWorldNG can learn from the data and generate simulation scenarios without relying on pre-defined models or assumptions. The use of a data-driven and model-free approach allows TransWorldNG to discover new patterns and relationships in the data that may not have been previously known or considered. This can lead to insights and solutions that were not possible with traditional modeling approaches.
To simulate complex traffic behavior and relationships, the Heterogeneous Graph Transformer (HGT) model can be used to model heterogeneous graphs in transportation systems [27]. The HGT model is a powerful graph neural network that can handle the heterogeneity of graphs by utilizing specific representations for different types of nodes and edges. It uses a multi-head attention mechanism to aggregate information from neighbors of different node types, and a graph transformer layer to update the node embeddings; for details, refer to [27].
Fig. 2: Hierarchical graph structure of TransWorldNG. A hierarchical graph structure that consists of sub-graphs, with the lowest level often representing the finest granularity of simulated interactions. These sub-graphs are interconnected, allowing information to flow seamlessly through the different levels of the hierarchy.

Fig. 3: Graph representation of a traffic scenario involving two cars, Car A moving straight from left to right and Car B turning left at a signalized intersection. (a) A picture of the real traffic scenario; (b) An abstract representation of the traffic scenario; (c) Graph representation of the traffic scenario.

We denote the output of the \(l\)-th HGT layer as \(H^{(l)}\), where \(H^{(l)}[v]\) is the node representation of node \(v\) at the \(l\)-th HGT layer. By stacking \(L\) layers, the node representation for the whole graph can be represented as \(H^{(L)}\). Since the traffic network is time-varying, we consider the evolution of the traffic system as a conditional graph translation process that consists of a sequence of static graphs:
\[T:G_{0}\to G_{1}\cdots\to G_{n} \tag{4}\]
Given the dynamic heterogeneous graph \(G(V,E,O,R)\) representing the state of the traffic simulation system, with input node features denoted as \(H^{(l-1)}[v]\), the output of the \(l\)-th HGT layer for the target node \(v\) is denoted as \(H^{(l)}[v]\). Information is aggregated to the target node from all its neighbors by computing the vector \(\tilde{H}^{(l)}[v]\):
\[\tilde{H}^{(l)}[v]=\sum_{\forall(u)\in N(v)}(Attention(u,e,v)\cdot Message(u,e,v)) \tag{5}\]
The \(l\)-th HGT layer's output \(H^{(l)}[v]\) for the target node \(v\) is equal to:
\[H^{(l)}[v]=A\text{-}\textit{Linear}_{\phi(v)}(\mathbf{\theta}\tilde{H}^{(l)}[v])+ H^{(l-1)}[v] \tag{6}\]
The MSE loss is employed to measure the difference between the predicted values and the true values. In practice, the MSE loss can be optimized using various optimization algorithms such as Stochastic Gradient Descent (SGD), Adam, or RMSProp to minimize the difference between predicted and true values [28, 29].
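As a rough illustration of the aggregation in Eqs. (5)-(6), the following PyTorch sketch implements a single-relation, single-head attention update with a residual connection. It is a deliberate simplification of the full HGT layer, which uses multi-head attention and node- and edge-type-specific parameters; see [27] for the actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleAggregationLayer(nn.Module):
    """Toy analogue of Eqs. (5)-(6): attention-weighted messages from the
    neighbours of a target node, followed by a linear map and a residual."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)     # projects the target node
        self.key = nn.Linear(dim, dim)       # projects each neighbour for Attention(u, e, v)
        self.message = nn.Linear(dim, dim)   # projects each neighbour for Message(u, e, v)
        self.a_linear = nn.Linear(dim, dim)  # the "A-Linear" map of Eq. (6)

    def forward(self, h_target, h_neighbors):
        # h_target: (dim,), h_neighbors: (num_neighbors, dim)
        att = F.softmax(self.key(h_neighbors) @ self.query(h_target), dim=0)   # attention weights, Eq. (5)
        h_tilde = (att.unsqueeze(1) * self.message(h_neighbors)).sum(dim=0)    # aggregated messages, Eq. (5)
        return self.a_linear(torch.relu(h_tilde)) + h_target                   # update with residual, Eq. (6)

# Training follows the paper: minimise the MSE between predicted and observed
# next-step node features, e.g. with torch.optim.Adam and F.mse_loss.
```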
### _System Structure_
The overall system architecture of TransWorldNG is shown in Fig. 4. The system supports data inputs from different sources, including sensors, GPS devices, and other connected devices. These data inputs are processed and transformed into a graph data structure, which is then fed into the simulation core in the simulation layer. Using mathematical models and algorithms, the simulation core simulates traffic flow, predicts congestion, and optimizes the transport network based on different traffic scenarios. The software layers can be divided into three categories: data layer, simulation layer, and interface layer.
**Data Layer:** The data layer includes both graph and non-graph data. The non-graph data is stored in conventional databases such as MySQL and MongoDB, while the graph data structure is stored in a graph database. This allows for the efficient handling of different types of data in a complementary manner.
**Simulation Layer:** The simulation layer includes the simulation core, which consists of the simulation core, controllers, and analysis modules.
* _SimCore:_ The simulation core consists of the model libraries, graph engine, optimization module, and verification and validation processes. The model libraries provide a range of models for simulating and analyzing the transportation network data, while the graph engine provides algorithms for processing and analyzing the graph data. The optimization module uses algorithms to find the optimal parameters for the models, and the verification and validation processes ensure that the data and results are accurate and reliable.
* _Controller:_ The controller module is responsible for controlling network dynamics, traffic signals, and agent behaviors, and uses the simulation core to simulate different scenarios.
* _Analysis:_ The analysis module provides insights into the transportation network's performance by processing and analyzing simulation results, such as link statistics, trajectory analysis, traffic counts, congestion analysis, efficiency measures, and more.
**Interface Layer:** The interface layer includes the GUI interface that displays simulation results to the user, shown in Fig 5. The GUI interface provides visualizations and graphs to help the user understand and interpret the simulation results.
### _Workflow_
Fig. 4: Overall System Architecture of TransWorldNG. This figure illustrates the key components and their relationships.

Fig. 5: GUI of TransWorldNG. The figure shows the graphical user interface of TransWorldNG, which is used to interact with the traffic simulator. The GUI is designed to be user-friendly and intuitive. The main window of the GUI displays a 3D visualization of the simulated traffic environment.

TransWorldNG is designed to be intuitive and user-friendly. The simulation core generates traffic patterns based on the input data and parameters specified by the user. These traffic patterns can be visualized in real-time or exported for further analysis. The workflow of TransWorldNG compared to traditional simulation models can be found in Fig. 6. The key modules in TransWorldNG are the following:

* _Graph Construction:_ TransWorldNG constructs a heterogeneous graph representation of the traffic environment from real-world traffic data. Nodes represent individual agents, while edges represent the relationships and interactions between agents.
* _Graph Embedding:_ The graph is embedded into a high-dimensional space using a heterogeneous graph transformer model.
* _Pre-training:_ The graph transformer model is pre-trained on a large dataset of real-world traffic scenarios, enabling it to learn the patterns and relationships.
* _Simulation:_ The pre-trained model is then used to generate simulations of new traffic scenarios. The system can make dynamic adjustments during the simulation to model changes in the traffic environment (a schematic rollout loop is sketched after this list).
* _Evaluation:_ The simulations generated by TransWorldNG can be evaluated based on various metrics, such as accuracy and efficiency, which can help to improve the system for better performance.
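At its core, the simulation step of this workflow is an autoregressive rollout of the pre-trained graph model, optionally refreshed with new observations. The following schematic Python function sketches that loop; `predict_next` and `inject_observations` are placeholders for the learned transition and the data-injection hook, not actual TransWorldNG APIs.

```python
def rollout(predict_next, inject_observations, graph, horizon, refresh_every=5):
    """Schematic simulation loop: advance the traffic graph G_t -> G_{t+1} with a
    pre-trained model, periodically injecting fresh observations so the simulation
    can adapt to changes in the environment."""
    trajectory = [graph]
    for t in range(1, horizon + 1):
        graph = predict_next(graph)                  # learned graph-to-graph transition
        if inject_observations is not None and t % refresh_every == 0:
            graph = inject_observations(graph, t)    # dynamic adjustment from new data
        trajectory.append(graph)
    return trajectory
```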
## IV Case Study
This section aims to demonstrate the capabilities and advantages of TransWorldNG compared to existing traffic simulators. A case study is conducted to compare TransWorldNG with SUMO [1], a widely used traffic simulator. A 4-way signalized intersection is simulated using both TransWorldNG and SUMO. Fig. 7 (a) shows the 4-way signalized intersection, which is a classic example scenario in SUMO. The road network has 8 roads and 16 lanes, and there are 768 vehicles running in this network. These vehicles all start from the left direction and have three default routes they can take: going down to the right road, turning left to the north road, or turning right to the south road. The scenario also has one traffic light located at the central intersection.
### _Data-Driven Traffic Behavior Learning with TransWorldNG_
We investigated the ability of TransWorldNG to learn car-following behaviors from data. To evaluate its performance, we compared the car-following behavior generated by TransWorldNG to the Intelligent Driver Model (IDM) and the Krauss model. The Krauss model is the default car-following model of the SUMO traffic simulation software. Fig. 8 presents the comparison of vehicle acceleration and speed for the front car, as predicted by the three models.
Fig. 9 presents a histogram of the frequency of speed deviations observed during a simulation, showing the performance of the two simulation environments in terms of speed control and accuracy. A narrower distribution with a smaller spread and a peak closer to zero typically indicates better speed control and accuracy in the simulation environment. Both histograms in the figure show similar distributions, indicating that TransWorldNG performs car-following behavior as well as the classic models. This suggests that the automatically generated car-following behavior in TransWorldNG is effective.
### _Impact of Data Collection Interval on Model Performance_
Since TransWorldNG is a data-driven approach, to understand the trade-off between prediction accuracy and
data collection frequency in TransWorldNG, we conducted experiments with different data collection intervals (5 and 10 steps) and compared the results with SUMO. The findings reveal interesting insights into the relationship between data collection interval and prediction accuracy. As expected, with shorter data collection intervals (e.g., 5 steps), the TransWorldNG model can capture more frequent updates in traffic dynamics, resulting in higher prediction accuracy. However, as the data collection interval increases (e.g., 10 steps), the prediction accuracy decreases, indicating that the model's ability to capture real-time changes in traffic dynamics is reduced. These findings highlight the importance of data collection frequency in the TransWorldNG model and emphasize the need for careful consideration of data collection intervals to achieve optimal prediction accuracy.

Fig. 6: Comparison of traffic simulation workflows: (a) TransWorldNG, which uses graph construction and embedding techniques to obtain a pre-trained model for different simulation tasks, and (b) Traditional traffic simulators that require building and calibrating pre-defined behavior models. When the environment changes to new states, TransWorldNG can quickly adapt by fine-tuning the pre-trained model with new data, while traditional simulators need to start from scratch and repeat the entire simulation process, which is highlighted in the red dotted box.

Fig. 7: The case study scenario: (a) Traffic network of the simulated environment and (b) corresponding graph representation of the traffic system at one time step. The graph representation captures the structure and connections of the traffic system.
### _Assessing the Computational Performance of TransWorldNG_
Evaluating the computational performance of TransWorldNG is an important aspect of assessing the system's efficiency and scalability for large-scale traffic simulations. Simulation time typically grows with the number of agents, meaning that as the number of agents increases, the simulation time will also increase. One way to evaluate the computational performance of TransWorldNG is to measure the percentage increase in runtime against the percentage increase in the number of agents.
Fig. 11 compares the percentage increase in simulation calculation time between TransWorldNG and SUMO as the system scale increases. The results demonstrate that as the system scale increases, the percentage increase in simulation calculation time grows substantially more slowly for TransWorldNG than for SUMO. This indicates that the proposed framework can dramatically improve the computing efficiency of large-scale traffic simulation. These results highlight the benefits of using TransWorldNG as a framework for large-scale traffic simulation.
One of the key reasons for TransWorldNG's good performance is its use of a graph structure and pre-trained models. The use of a graph structure enables parallel processing of traffic data, which can significantly reduce simulation calculation time. Additionally, the pre-trained models used in TransWorldNG can help reduce the amount of computation
required during simulation, as the models have already learned many of the underlying patterns and relationships in the traffic data. Another factor that contributes to TransWorldNG's superior performance is its model-free approach. Unlike SUMO, which relies on pre-defined models of traffic behavior, TransWorldNG is able to adapt to different traffic scenarios and levels of abstraction without the need for extensive model development and calibration. This allows for more efficient and flexible simulation of complex traffic scenarios.

Fig. 8: Comparison of the car-following model generated by TransWorldNG, IDM, and the Krauss model, which is the default car-following model of SUMO. Subplots (a) and (b) present the vehicle acceleration and speed for the front car, as predicted by the three models, respectively, showing the ability of the TransWorldNG model to learn car-following behavior from data and to generate patterns similar to those of the well-known models.

Fig. 9: Histogram of the distribution of speed deviation in the car following behavior. The speed deviation is defined as the speed difference between the lead and follower vehicles. A speed deviation of zero indicates that the front and follower vehicles are traveling at a relatively consistent speed.

Fig. 10: Impact of Data Collection Interval (DCI) on Model Performance of TransWorldNG. This result shows the comparison of predicted vehicle speed between SUMO (as a reference) and TransWorldNG with data collection intervals of 5 and 10 steps, respectively. The result provides insights into the trade-off between prediction accuracy and data collection frequency in the TransWorldNG model.
## V Conclusion
This study introduced the simulation framework and system structure of TransWorldNG, which utilizes a traffic foundation model with data-driven automatic modeling capabilities to resolve the issues of limited structural complexity and high computation complexity of traditional simulators. The graph structure and data-driven method permit dynamic adjustments during simulation to reflect real-time changes in the urban system environment, allowing for the insertion of new data and expert knowledge for real-time mapping of the simulation system to the actual city. TransWorldNG can facilitate event-driven causal analysis of urban phenomena and can combine multi-field data to provide a simulation test platform for integrated decision-making. Future directions for TransWorldNG could include the integration of emerging technologies such as Mobility as a Service (MaaS) and AI-driven simulation technologies [19], as well as the development of more robust functionality using the framework presented in this study.
While TransWorldNG offers many advantages for traffic simulation, there are also some potential challenges that should be explored in future research. One potential challenge is the need for high-quality data that accurately represent real-world traffic patterns and behaviors. TransWorldNG relies on large amounts of data to generate its simulations, so the accuracy and quality of this data can significantly impact the reliability and usefulness of the simulations. The collection, processing, and storage of such large amounts of data can also be a challenge [30]. In addition, while TransWorldNG is designed to be highly scalable and flexible, it may still face challenges in terms of computational resources and processing power. Running large-scale simulations can require significant computing resources; potential solutions, including cloud computing, distributed computing, and parallel processing, need to be studied in future research. Furthermore, traffic simulation often involves predicting traffic flow over extended time periods. The potential use of large language models (LLMs), such as GPT, to generate a wider range of realistic scenarios may improve the accuracy and effectiveness of traffic simulations [31].
## Data Availability
The SUMO simulation platform and data is publicly available at [https://www.eclipse.org/sumo](https://www.eclipse.org/sumo). The simulation data of the 4-way intersection scenario is available from SUMO at [https://github.com/eclipse/sumo/blob/main/docs/web/docs/Tutorials](https://github.com/eclipse/sumo/blob/main/docs/web/docs/Tutorials).
## Code availability
The code of TransWorldNG was implemented in Python using the deep learning framework of PyTorch. Code, trained models, and scripts reproducing the experiments of this paper are available at [https://github.com/PJSAC/TransWorldNG](https://github.com/PJSAC/TransWorldNG).
|
2306.15213 | Validating a virtual human and automated feedback system for training
doctor-patient communication skills | Effective communication between a clinician and their patient is critical for
delivering healthcare maximizing outcomes. Unfortunately, traditional
communication training approaches that use human standardized patients and
expert coaches are difficult to scale. Here, we present the development and
validation of a scalable, easily accessible, digital tool known as the
Standardized Online Patient for Health Interaction Education (SOPHIE) for
practicing and receiving feedback on doctor-patient communication skills.
SOPHIE was validated by conducting an experiment with 30 participants. We found
that participants who underwent SOPHIE performed significantly better than the
control in overall communication, aggregate scores, empowering the patient, and
showing empathy ($p < 0.05$ in all cases). One day, we hope that SOPHIE will
help make communication training resources more accessible by providing a
scalable option to supplement existing resources. | Kurtis Haut, Caleb Wohn, Benjamin Kane, Tom Carroll, Catherine Guigno, Varun Kumar, Ron Epstein, Lenhart Schubert, Ehsan Hoque | 2023-06-27T05:23:08Z | http://arxiv.org/abs/2306.15213v1 | Validating a virtual human and automated feedback system for training doctor-patient communication skills
###### Abstract
Effective communication between a clinician and their patient is critical for delivering healthcare and maximizing outcomes. Unfortunately, traditional communication training approaches that use human standardized patients and expert coaches are difficult to scale. Here, we present the development and validation of a scalable, easily accessible, digital tool known as the Standardized Online Patient for Health Interaction Education (SOPHIE) for practicing and receiving feedback on doctor-patient communication skills. SOPHIE was validated by conducting an experiment with 30 participants. We found that participants who underwent SOPHIE performed significantly better than the control in overall communication, aggregate scores, empowering the patient, and showing empathy (\(p<0.05\) in all cases). One day, we hope that SOPHIE will help make communication training resources more accessible by providing a scalable option to supplement existing resources.
Doctor-Patient Communication, Artificial Intelligence, Web-based Feedback System
## I Introduction
60% of late-stage cancer patients leave their doctor's office without fully understanding their prognosis [1] and 79% of patients feel emotionally unsupported by their doctors [2]. Past research has shown that poor communication by doctors leads to lower-quality healthcare outcomes at a higher cost [2][3][4][5][6]. Unfortunately, low-cost communication training videos or reading materials have been shown to have little effect [7][8]. Training courses using standardized patients (SPs) are a viable remedy widely used in medical schools [9][10]. For example, our institution offers interdisciplinary workshops for practicing patient care professionals (e.g., physicians, nurses, advanced practice providers, social workers, and chaplains) through the Advanced Communication Training (ACT) program [11], which teaches the MVP (Medical situation, Values, Plan) paradigm and emphasizes the 3 E skills: Empower, be Explicit, Empathize [12]. Receiving feedback has been found to improve the communication skills of clinicians. For example, feedback from communication coaching experts based on recorded interactions with real patients has been shown to improve a clinician's ability to empathize with their patient and empower them by eliciting questions [13]. However, due to the cost and limited availability of human SPs and coaches who can provide relevant feedback, these traditional approaches are hard to scale. The need for a scalable solution is compounded by the diminishing effects of communication training over the course of a physician's career [14].
We developed SOPHIE (Standardized Online Patient for Health Interaction Education) [15] to address this need. SOPHIE is a fully automated web-based system allowing medical professionals to have a conversation with a virtual human using their computer's speakers and microphone. After the conversation, the system automatically analyzes the transcript to provide immediate, quantified, and personalized feedback.
Using virtual patients for educating health professionals is not a new concept [16]. Prior work has shown the value of virtual patients in practicing empathy in a low-stress environment [17], and virtual patients hold much promise as a cost-effective pedagogical approach for developing countries [18]. The recent advancements in avatar generation and natural language understanding have opened up exciting possibilities for creating more realistic, interactive systems capable of providing user feedback that was previously not possible.
Indeed, the feedback component of the SOPHIE system represents a distinct contribution (see Fig. 1). Although prior work has shown that receiving feedback helps clinicians improve their communication skills [19], there are few existing tools to generate feedback automatically [20]. Our feedback system is unique in that it utilizes the previously validated MVP/3E's model of doctor-patient communication. It provides a quantitative analysis of the conversation for medical professionals to review, as well as text recommendations for improvement.
We validated the feasibility of this system in an experiment with 30 participants (see Fig. 4). We found that participants who underwent the educational intervention with SOPHIE performed significantly better in overall communication and achieved higher aggregate scores compared to participants who did not (\(p<0.05\)). We also observed statistically significant results for empowering the patient and showing empathy. We hope the SOPHIE system will eventually be utilized as a scalable solution to supplement existing communications training or as a low-cost alternative for resource-deprived communities.
## II Methods
### _The SOPHIE System_
The educational intervention with the SOPHIE system has three components. The user begins by watching an instructional video about the MVP/3E's communication paradigm followed by viewing a tutorial video on how to use the SOPHIE system. The final component of the intervention is two conversations with SOPHIE, including feedback after each conversation. SOPHIE portrays an older female patient with advanced lung cancer who is seeking information about the prognosis. The feedback page is split into 4 main sections: a transcript, and one section for each of the three E's (see Fig. 1). The transcript section allows the user to review their conversation. Segments of the conversation where the medical professional engaged in lecturing (i.e. spoke for too long) are given a red background, and segments where the medical professional empowered the patient by asking a question are given a green background. Some segments in the transcript display suggestions for open-ended questions or empathetic statements that the medical professional could have used. The feedback system was developed through an iterative design process with close collaboration between programmers and palliative care specialists, and many of the metrics are based on statistical analysis of doctor-patient communication, as discussed in [21].
### _Dialogue Management_
Fig. 1: **SOPHIE Feedback System** - The feedback is divided into four sections: Transcript, Empower, be Explicit, and Empathize. The upper left contains the transcript with embedded conversational suggestions. The Empower section contains the metrics number of questions asked, number of open-ended questions asked, and turn-taking with lecture and question coloring. The be Explicit section contains the metrics hedge words percentage with word cloud, speaking rate, and reading level. The Empathize section contains the metrics personal pronouns percentage, average empathy score (1-7) with word cloud, and positive emotion (sentiment) over time graphs for the user, SOPHIE, and the "ideal" sentiment trajectory.

SOPHIE's dialogue manager uses a symbolic, schema-based approach. Although LLMs have recently achieved impressive results [22][23], at the time of development they were deemed ill-suited to this task for a variety of reasons. Bender et al. argued that large language models (e.g., the current state of the art) are generally insufficient for true language understanding and carry their own risks and potential ethical issues [24]. Large language models also come with the additional risk of going "off the rails" of the conversation parameters, which makes them unpredictable and difficult to control. Without the ability to control the dialogue, presenting the user with opportunities to practice specific communication skills poses a real challenge for application consistency. As a result of these issues, we chose to take a symbolic approach. The conversations with SOPHIE are driven by eta, which uses flexible, modifiable dialogue schemas (i.e., expected event types expressed as conversational statements) to imitate natural human conversations. The dialogue manager dynamically plans and enacts the conversation in real time by combining a user interpretation process with these dialogue schemas [25][26][27]. (See Fig. 3 for an example of the dialogue and Fig. 2 for an overview of the dialogue manager's architecture.) The user interpretation process is handled by a set of pattern transduction rules that map user utterances into simplified, context-independent "gist clauses" given the immediate context of the preceding dialogue turn. The gist clause provides an explicit representation of the meaning of the user's utterances that the system can then respond to. Response generation, which is also handled by a pattern transduction process, can involve the selection of a particular reaction by the system to the user's gist clause, or the invocation of a new schema (e.g., the system may invoke a schema for SOPHIE discussing her medical concerns if asked a relevant question by the doctor). In the case where the system fails to extract a gist clause, it may either ask the user to repeat and clarify their utterance, or give a generic default reaction specific to the current schema. Ultimately, a schema-guided approach to dialogue management was chosen over using a large neural language model (such as GPT-3) in order to have more control over the dialogue.
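To make the gist-clause step concrete, below is a minimal Python sketch of a pattern-transduction rule set. The patterns, gist clauses, and function name are illustrative assumptions for exposition only, not eta's actual rules, which are organized as flexible dialogue schemas rather than a flat list.

```python
import re
from typing import Optional

# Illustrative pattern-transduction rules: each maps a surface pattern in the
# user's utterance to a context-independent "gist clause". These patterns and
# gist clauses are hypothetical examples, not eta's actual rule set.
GIST_RULES = [
    (re.compile(r"\b(prognosis|how long|how much time)\b", re.I),
     "the doctor is discussing the prognosis"),
    (re.compile(r"\bhow are you (feeling|doing)\b", re.I),
     "the doctor asked how SOPHIE is feeling"),
    (re.compile(r"\b(any questions|questions for me)\b", re.I),
     "the doctor invited SOPHIE to ask questions"),
]

def extract_gist(utterance: str) -> Optional[str]:
    """Return the first matching gist clause, or None if no rule fires.
    On None, a dialogue manager would ask for clarification or fall back
    to a schema-specific default reaction."""
    for pattern, gist in GIST_RULES:
        if pattern.search(utterance):
            return gist
    return None

print(extract_gist("Do you have any questions for me?"))
# -> "the doctor invited SOPHIE to ask questions"
```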
### _Quantifying Complex Human Communication Skills_
**Empower -** We quantified empowerment using three key metrics: questions asked, open-ended questions asked, and turn-taking. The need to quantify both types of questions was made apparent after consultation with Oncologists and Palliative Care specialists at the University of Rochester Medical Center (URMC). A closed question helps the medical professional check patient understanding of the medical situation and prognosis, while an open-ended question gives the patient an opportunity to reveal more sensitive information about their emotional state. We used expression matching to determine what type of question was asked and kept track of the total number in each category. Questions are useful for quantifying empowerment because asking them inevitably invites the patient to take control of the conversation and thus tends to empower them. For example, a question gives the patient space to express their concerns and voluntarily reveal external factors that would otherwise be hidden to the medical professional.

Turn-taking was quantified by keeping track of the total speaking time for the user and SOPHIE, respectively. Whether or not the user was lecturing during their turn was based on a previous voice study [21]; informally, lecturing takes place when a medical professional speaks for too long. This quantifies empowerment because unequal turn-taking could signal a lack of patient empowerment if the user's turns are too long and frequent in comparison to SOPHIE's.
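As a rough illustration of how these Empower metrics could be computed from a turn-annotated transcript, here is a small Python sketch; the speaker labels, open-question cues, and data format are assumptions for illustration, not the system's actual expression-matching rules.

```python
# Illustrative open-question cues; the real expression-matching rules are richer.
OPEN_CUES = ("how", "what", "why", "tell me", "describe")

def empower_metrics(turns):
    """turns: list of (speaker, text, seconds), with speaker in {"doctor", "sophie"}.
    Returns question counts and a doctor/SOPHIE talk-time ratio for turn-taking."""
    questions = open_questions = 0
    talk_time = {"doctor": 0.0, "sophie": 0.0}
    for speaker, text, seconds in turns:
        talk_time[speaker] += seconds
        if speaker == "doctor" and "?" in text:
            questions += 1
            lowered = text.lower().lstrip()
            if lowered.startswith(OPEN_CUES) or "tell me" in lowered:
                open_questions += 1
    return {
        "questions": questions,
        "open_questions": open_questions,
        "talk_time_ratio": talk_time["doctor"] / max(talk_time["sophie"], 1e-9),
    }
```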
**Empathize -** We quantified the Empathize skill using three key metrics: sentiment trajectory, an empathy word cloud, and personal pronouns. Sentiment trajectory was computed based on the work of Ali et al. [21], who analyzed sentiment in the VOICE dataset led by Sen et al. [28]. They defined a "sentiment trajectory" as an average sentiment vector across time. Sentiment was computed using VADER [29] and is on a -1 to +1 scale. They used k-means clustering to identify "styles" of sentiment trajectory and logistic regression to identify whether any style was associated with good conversation outcomes, where "good" is determined by the level of patient prognosis understanding. The best style was "dynamic," having high sentiment early on in the conversation, low sentiment in the middle (likely to match patient sentiment after hearing the prognosis), and high sentiment at the end (likely to express encouragement, care, and support). Our idea was that perhaps some of the complexity of empathy could be captured and quantified overall using this sentiment trajectory.
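A minimal sketch of how such a sentiment trajectory could be computed with VADER is shown below; the bin count and resampling scheme are assumptions for illustration rather than the exact procedure of Ali et al.

```python
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def sentiment_trajectory(utterances, n_bins=10):
    """Compound VADER sentiment (-1..+1) for each utterance, averaged into
    n_bins equal segments so trajectories from conversations of different
    lengths can be compared (e.g., clustered with k-means into styles)."""
    sia = SentimentIntensityAnalyzer()
    scores = np.array([sia.polarity_scores(u)["compound"] for u in utterances])
    bins = np.array_split(scores, n_bins)
    return np.array([b.mean() if len(b) else 0.0 for b in bins])
```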
The empathy word cloud was computed using work from Sedoc et al. [30], who developed a lexicon that maps 10k words to empathy ratings on a 1-7 scale using a mixed-level, feed-forward network. We mapped every word spoken by the user through this lexicon and computed the average empathy from that. The word cloud is created using the 15 most frequent words. We hoped that this would roughly quantify empathy for the entire conversation, but recognize the weakness in its inability to quantify statements at the sentence level (which is where most empathy is likely to take place).

Personal pronouns were chosen to quantify empathy based on work by Sen et al. in 2017 [28], who used LIWC to analyze the correlation between different categories of words and patient ratings of doctors in the VOICE dataset. Interestingly, they found that higher rankings were correlated with the use of personal pronouns (e.g., I, You, etc.). We computed pronoun usage using NLTK part-of-speech tagging [31]. Intuitively, using pronouns makes a conversation more empathetic by appearing more personable as compared to generic, disease-specific speech.
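For illustration, the personal-pronoun percentage with NLTK part-of-speech tagging and a lexicon-based average empathy score might look like the sketch below; passing the lexicon as a plain word-to-rating dictionary is an assumption about how the Sedoc et al. lexicon would be loaded.

```python
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' data packages

def personal_pronoun_pct(text: str) -> float:
    """Percentage of tokens tagged as personal or possessive pronouns (PRP, PRP$)."""
    tokens = nltk.word_tokenize(text)
    tags = nltk.pos_tag(tokens)
    pronouns = sum(1 for _, tag in tags if tag in ("PRP", "PRP$"))
    return 100.0 * pronouns / max(len(tokens), 1)

def mean_empathy(tokens, lexicon):
    """lexicon: dict mapping words to 1-7 empathy ratings (e.g., loaded from the
    Sedoc et al. lexicon). Returns the average rating over words found in it."""
    rated = [lexicon[t.lower()] for t in tokens if t.lower() in lexicon]
    return sum(rated) / len(rated) if rated else None
```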
**be Explicit -** We quantified being Explicit using three metrics: speaking rate, reading grade, and hedge words. Speaking rate was found to be associated with patient understanding [21] and is related to being explicit (e.g., if a physician speaks too slowly or too quickly, they probably are not communicating in an explicit manner). This is computed simply by taking the average words spoken per minute.

Reading grade was chosen to quantify the complexity of the speech to address the issue of doctors using too much medical jargon when communicating a prognosis. Complex speech makes it more difficult for patients to understand their medical situation, and we make the assumption that less complex speech is more explicit. We computed the reading grade using the standard measure of the Flesch-Kincaid readability test [32], which outputs the linguistic complexity in terms of U.S. school grades (e.g., 1st grade to 12th grade).
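A small sketch of how speaking rate and the Flesch-Kincaid reading grade could be computed is given below, using the textstat package for the readability score; the tokenization and timing source are assumptions, not the system's exact implementation.

```python
import textstat

def explicitness_metrics(transcript_text: str, duration_minutes: float) -> dict:
    """Speaking rate in words per minute and Flesch-Kincaid U.S. grade level."""
    words = transcript_text.split()
    return {
        "speaking_rate_wpm": len(words) / max(duration_minutes, 1e-9),
        "reading_grade": textstat.flesch_kincaid_grade(transcript_text),
    }
```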
Hedge words were computed using a simple list of hedge words established from prior work, by counting the percentage of a user's words that appeared on the list. The 10 most frequent hedge words are noted and a word cloud is constructed. Horowitz et al.'s 2020 paper on the MVP paradigm, which SOPHIE attempts to model, mentions avoiding hedging as a key part of being explicit [12]. Thus, measuring hedging helps to quantify this skill directly for the user.
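As an illustration, the hedge-word percentage and the top-10 list used for the word cloud could be computed as in the sketch below; the hedge list shown is a short illustrative sample, not the full list established in prior work.

```python
from collections import Counter

# Short illustrative hedge list; the list from prior work is longer.
HEDGES = {"maybe", "perhaps", "possibly", "probably", "might", "somewhat",
          "likely", "sort", "kind", "generally"}

def hedge_metrics(text: str):
    """Return the percentage of hedge words and the 10 most frequent ones."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    hits = [t for t in tokens if t in HEDGES]
    pct = 100.0 * len(hits) / max(len(tokens), 1)
    top10 = Counter(hits).most_common(10)  # basis for the hedge-word cloud
    return pct, top10
```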
### _Experiment_
We conducted an experiment consisting of 30 participants with medical backgrounds (12 medical students, 9 nurses, 4 internal medicine residents, 2 physician's assistants, 2 psychologists, and 1 hospital chaplain). Participants were randomized (1:1 ratio) into intervention and control groups, stratified by professional/training background. The intervention group underwent the educational intervention with the SOPHIE system while the control group received no training (see Fig. 4a and 4b).
After the educational intervention, we evaluated the communication skills of the participants using human standardized patients (SPs). Every participant had a conversation with an SP via a video call. Immediately after each interaction, the SP rated the conversation using a standard scale developed with assistance from palliative care specialists. These ratings were statistically compared using a Mann-Whitney U test to determine whether there were any significant differences between the two groups as a result of undergoing the educational intervention with the SOPHIE system (see Fig. 4c). Additionally, participants in the treatment group completed a UI/UX survey designed to inform future iterations of the SOPHIE system.
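For reference, the group comparison can be run with SciPy's Mann-Whitney U implementation as sketched below; the rating values shown are made-up placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Placeholder ratings for one item (e.g., "overall communicator"); not real data.
intervention = [6, 7, 6, 5, 7, 6, 6, 5, 7, 6, 6, 5, 6, 7, 6]
control      = [5, 4, 6, 5, 5, 4, 5, 6, 5, 4, 5, 6, 5, 5, 5]

stat, p = mannwhitneyu(intervention, control, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # a difference is significant if p < 0.05
```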
### _Justification for Evaluation Metrics_
**SP Rating Scale -** Our rating scale was developed in close collaboration with URMC Oncologists and Palliative Care Specialists. There was no existing rating scale that was appropriate as-is for our experiment, although prior work exists [33]. We wanted to measure whether a clinician improved based on behaviors the SOPHIE system was designed to give feedback on and reinforce. The full rating scale can be seen in Table I and is based on behaviors the human SP observes during their interaction.
**UX/UI Rating Scale -** We broke the UX scale down into three components: system usability, virtual human, and dialogue (see Fig. 5). We chose a representative sample of system usability statements from the well-established System Usability Scale (SUS) [34][35]. The statements for the virtual human and dialogue sections were developed by the research team after robust discussions. We wanted to evaluate the realism of the virtual human (e.g., the ability to look and sound like a real cancer patient). Realism in this context is meant to include holistic aspects of the interaction such as lip syncing. Discussions with our medical collaborators indicated the importance of emotional expression in real patient encounters, and we therefore incorporated statements to quantify the user's perception of the virtual human's ability to emote. Our dialogue statements were focused on quantifying the quality of the dialogue itself. Aspects of the dialogue such as whether the responses were fluent, natural, relevant, logical, and/or emotionally expressive were selected as the criteria. All statements were evaluated on a 1-5 Likert scale with 5 representing strong agreement.
Our UI rating scale was also developed by the research team. We simply had the research subjects evaluate the utility of each UI element on the system feedback screen (see Fig. 1) using a 1-5 Likert scale with 5 representing high utility. Likert scales allow for more nuance [36] in survey responses and are appropriate for gathering feedback for system improvement.

Fig. 2: **Eta Dialogue Management Architecture -** The dialogue manager relies on a database of general schema knowledge including dialogue schemas (top), as well as dialogue context and episodic memory (bottom). The dialogue manager interleaves processing of several submodules for processing input (blue; left), guiding system behavior through dynamic instantiation of a dialogue schema (green; center), and generating output (red; right). The numbered edges represent the flow of how the system interacts with the user.
## III Results
### _Ratings Comparison_
We used four SPs for the experiment. Each SP had an equal number of intervention and control participants (\(\pm 1\)). Bonferroni-corrected pairwise t-tests showed no significant differences between ratings given by the different SPs.
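A sketch of such a Bonferroni-corrected pairwise comparison is shown below; the data structure and function name are illustrative assumptions rather than the exact analysis code.

```python
from itertools import combinations
from scipy.stats import ttest_ind

def pairwise_ttests_bonferroni(ratings_by_sp, alpha=0.05):
    """ratings_by_sp: dict mapping an SP id to the list of ratings they gave.
    Runs all pairwise t-tests and applies a Bonferroni-corrected threshold."""
    pairs = list(combinations(ratings_by_sp, 2))
    corrected_alpha = alpha / len(pairs)  # Bonferroni correction
    results = {}
    for a, b in pairs:
        t, p = ttest_ind(ratings_by_sp[a], ratings_by_sp[b])
        results[(a, b)] = {"p": p, "significant": p < corrected_alpha}
    return results
```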
We found that the intervention group performed significantly better on the "overall communicator" (intervention: 6.000, control: 5.067, \(p<0.05\)) and "aggregate score" (intervention: 36.067, control: 29.600, \(p<0.05\)) metrics. For every other question, there was a trend towards the intervention group, but the difference was not always statistically significant. See Table I for the full results.
### _UI/UX Surveys_
Participants in the intervention group rated each feedback metric shown in Fig. 1 on a 1-5 Likert scale, and the results can be seen in Table II. Overall, we found that the most useful feedback metrics were the reading level, speaking rate, hedge words, transcript, and turn-taking. The least useful metrics were positive emotion, empathy words, and personal pronouns. It is important to note that users' ratings of the feedback metrics may not equate to what they actually learned. For example, participants rated the empathy metrics relatively low, yet still performed significantly better on 2 of the 5 empathy ratings according to the human SP evaluation.
Additionally, participants rated four components of the SOPHIE system: System Usability, Virtual Human, Dialogue, and Feedback. Every question was likewise asked on a 1-5 Likert scale with 5 meaning strongly agree. Fig. 5 depicts the UX experience for system usability, virtual human, and dialogue. Overall, participants rated the system as easy to use, with the virtual human having a realistic voice and appearance. However, the dialogue appears to be a major point of weakness in the experiment. Its responses were not rated as natural, logical, or realistic, and it did not appear to understand the user. Importantly, though, SOPHIE kept the conversation relevant, despite the variety of ways in which users could respond. SOPHIE's ability to display emotion received mixed ratings.
Despite these limitations, debriefing interviews with participants in the intervention group indicated that participants saw overall utility in our system, and participants expressed an interest in using the tool if improvements to the dialogue and virtual human allowed the interaction to be more realistic.
## IV Discussion
### _Improving Communication Skills_
Human communication is pragmatic, with patterns developing over time to become habitual and difficult to change [37]. We speculate that the extent to which an individual's communication behaviors can be modified is dependent on how well a person's subjective experience and recollection can be meaningfully connected to clearly presented and actionable feedback. Based on our experiment, we see that the combination of interacting with SOPHIE and receiving automated feedback improved participants' use of the Empower and Empathize skills and their overall communication. To what extent this increase was the result of simply interacting with SOPHIE versus receiving feedback on the interaction cannot be established based on this experiment, as we did not have a population which had the conversation but not the feedback. However, one indicator of the importance of feedback may be that the users rated the feedback system more highly than the dialogue and virtual human. We suspect that the feedback system is a major contributor to the differences observed. For example, a user, upon reviewing the transcript of their most recent conversation with SOPHIE and observing that they only asked three questions, may realize that they can better empower the patient by asking more questions. Similarly, as the user is reviewing the transcript, they become aware of empathetic statements they could have used. This knowledge seems to inform subsequent conversations based on the ratings from the human SPs (see q3 - asking questions - and q15 - empathetic statements - in Table I). This suggests that the system's feedback could result in an actionable plan for improvement. Further experiments with SOPHIE will be needed to confirm these intuitions about the efficacy of the feedback system.

Fig. 3: An example dialogue between a user and SOPHIE.
Although we did see differences in the being Explicit skill, they were not statistically significant. This may change as we run future experiments with larger sample sizes. Additionally, we are planning further improvements to the SOPHIE system based on our UI/UX feedback from study participants (see section IV-C). Ultimately, we believe the consistency of the virtual human, dialogue and feedback system would allow a healthcare professional to hone a variety of communication skills through repeated practice.
### _Promoting Equity and Access_
The SOPHIE communication resources can be made accessible to anybody with a computer, microphone, and internet connection. The accessibility of the system is highlighted by the fact that users rated SOPHIE as easy to use, and in particular disagreed with the statement "I needed to learn a lot of things before I could get going with this system." The scalable, web-based nature of SOPHIE makes it a low-cost alternative to synchronous training with human SPs, and could make communication training more readily available to rural or low-income regions. This would promote equity by making communication training available to more healthcare professionals regardless of the financial resources available to them.

Fig. 4: **Experiment with 30 participants** - a) Intervention group, underwent educational intervention with SOPHIE before speaking to SP. b) Control group, received no training before speaking to SP. c) Overall ratings comparison between control (blue) and intervention (tan); **bold** denotes significant differences. The numbers have been normalized to a 0-1 scale with 1 being "good." The raw numbers and full question text can be found in Table I by looking up the question ID. (Images of participants used with permission).
Specific aspects of SOPHIE could also be customized to reflect a diverse range of patients. Attributes like SOPHIE's age, race, gender, language, and personality could be modified to represent all demographics of patient populations. Additionally, the context of the module could be readily changed. In the future, users could choose from dozens of healthcare modules focusing on specific types of conversations with customized virtual humans uniquely suited for the purpose. Different types of patient personalities could be programmed to help practice responding to different reactions and attitudes from diverse patients.
### _Future of SOPHIE_
Despite efforts to improve communication skills, a clinician may fall back into their old habits unconsciously. Thus, there is a need to consistently practice these difficult conversations for maintaining, or, even enhancing, a medical professional's skill proficiency. Future generations of SOPHIE aim to satisfy this need by improving upon the system weaknesses discovered from the UI/UX responses: namely, the dialogue management as well as the emotional expressiveness of the SOPHIE virtual human. Additionally, we plan to iterate further with medical professionals to perfect the feedback system (especially in regards to feedback elements that received low scores).
The feedback system may be extended to other applications in the future as well. For example, clinicians could have an application on their phones that could be used during real patient encounters. With the patient's consent, a clinician could use the app to analyze the conversation. The app could generate a checklist that the clinician could quickly review to determine if they need to spend more time addressing a specific area
with their patient. The data could even be tracked over time to help the clinician monitor their performance and obtain user-led, personalized insights. For example, the clinician could view the system's recommended SOPHIE modules to refresh or improve certain communication skills. What the clinician decides to do based on this feedback is entirely their own and the design focus should always be to empower the clinician.
### _Limitations_
The limitations of this study are our small sample size, a lack of clarity about which factors of the SOPHIE system caused the improvements, and a lack of formal validation for the rating scales used. We elaborate on these limitations in our ethical statement, which can be found after the conclusion.
### _Contributing to SOPHIE_
Regretfully, the SOPHIE code base is not open-source. The SOPHIE project is an ongoing venture between the University of Rochester Computer Science (URCS) Department and the University of Rochester Medical Center (URMC). Once deployed, the research staff may release a starter kit for researchers who wish to run similar experiments. However, we are actively recruiting participants for clinical trials and seeking additional collaborations from other medical schools. If interested, please email Kurtis Haut at [email protected].

Fig. 5: **UX survey responses -** _System Usability:_ The system appears easy to use; however, participants would not use the system frequently. _Virtual Human:_ SOPHIE looked realistic and is capable of showing emotion through voice, yet lacks the ability to express emotions through facial expressions. _Dialogue:_ SOPHIE's responses were relevant to her medical condition. However, she did not appear to understand the user, and her responses were judged as not fluent or natural, illogical, and unlike a real patient.
## V Conclusion
We developed and validated a new digital tool for improving serious illness communication training for health care professionals. We observed significantly better performance on overall communication and higher aggregate scores as a result of interacting with SOPHIE. This study suggests the potential for practicing conversations with a virtual human and an automated feedback system to improve communication skills in a scalable, on-demand fashion. By improving access to communication training, SOPHIE could improve the equity of our local institutions, and perhaps even the global healthcare system.
## VI Ethical Impact Statement
The impacts of this system will be felt by real patients, whose experiences of receiving tragic news will be shaped by the behaviors their clinicians developed during their communication training. Given the effect that communication has on patient healthcare outcomes (see section I), the ethical considerations of this work must be taken seriously. We have an obligation to ensure the efficacy of the SOPHIE system and mitigate any potential harms it could cause. Thus, the virtual patient dialogue and the automated feedback must be based on the highest standard of established medical practices. Otherwise, learned communication deficiencies could cause patient prognosis misunderstandings, leading to healthcare choices that are unaligned with patient values. To mitigate this risk, the development of SOPHIE has been heavily shaped through several design iterations with expert oncologists, palliative care specialists and other stakeholders. We plan to conduct further experiments to validate and continually refine each subsequent generation of SOPHIE before deploying the final version. We believe that continuous system evaluations that appropriately keep pace with development will help maintain high ethical standards.
Careful ethical considerations must also be made in the experimental design for the validation process to ensure participant confidentiality. In our experiment, all participants provided informed, written consent before beginning the study. The methods were performed in accordance with relevant guidelines and regulations that were approved by our university's Institutional Review Board. All data collected has been de-identified such that it cannot be traced to a specific participant.
One limitation of our study is the small, relatively homogeneous sample size consisting of predominantly white healthcare professionals from our local area which reduces the generalizability of our results. Future experiments will aim to recruit a larger, more demographically diverse sample to ensure that the needs of all users are met.
Additionally, the rendering of the virtual human is a potential source of bias. We are depicting a white, elderly female with terminal lung cancer and a set personality. This could pose an ethical issue because real patients are demographically diverse and come from a variety of backgrounds. It may be the case that communication skills learned from interacting with SOPHIE do not translate perfectly when communicating with patients who do not match SOPHIE's race, gender, age, or personality. Making these features more customizable will be a focus of future iterations of the system. This would help healthcare professionals prepare to communicate equally well regardless of their patient's demographic traits, background, or personality.
## VII Data Availability
De-identified results from the experiment are available upon request.
|